Techniques for assessing operational risk have come a long way in the past ten years. Today, many companies are going beyond the regulatory minimum to implement sophisticated models that contribute to better understanding and management of operational risk across the business.
One question that tends to push the limits of existing models, however, is identifying emerging operational risk before it produces a loss. Given that risk events are typically not entirely new but rather simply new combinations of known risks, an approach that enables us to analyze which risk drivers exhibit evolutionary change can identify which ones are most likely to create emergent risks. By borrowing a technique from biology—phylogenetics, the study of evolutionary relationships—we can understand how certain characteristics of risk drivers evolve over time to generate new risks. The success of such an approach is heavily dependent on the degree to which operational risk loss data is available, coherent, compatible, and comprehensive. A well-structured loss data collection (LDC) framework can be a key asset in attempting to understand and manage emergent risks.
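To make the idea concrete, here is a minimal sketch in Python with entirely hypothetical events and characteristic scores. Hierarchical clustering of risk-driver profiles serves as a simple stand-in for a full phylogenetic reconstruction: events that branch together share drivers, and the tree's structure hints at where novel combinations might emerge.

```python
# A minimal sketch: build a tree of operational risk events from their
# risk-driver profiles, in the spirit of a phylogenetic analysis.
# The events, characteristics, and scores are hypothetical.
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram

# Rows: risk events; columns: scored risk-driver characteristics
# (e.g., manual intervention, model dependence, data exposure, outsourcing).
events = ["rogue trading", "model error", "data breach", "vendor outage"]
traits = np.array([
    [0.9, 0.1, 0.3, 0.2],
    [0.4, 0.8, 0.2, 0.1],
    [0.2, 0.3, 0.9, 0.5],
    [0.1, 0.2, 0.6, 0.9],
])

# Average-linkage clustering of the trait profiles stands in for a
# phylogenetic tree of risk events.
tree = linkage(traits, method="average", metric="euclidean")
d = dendrogram(tree, labels=events, no_plot=True)
print(d["ivl"])  # leaf order: events grouped by shared driver profiles
```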
Broadening the definition of operational risk
In the financial industry, where operational risk has been a significant target of regulators for more than a decade, operational risk is typically defined as “the risk of loss resulting from inadequate or failed internal processes, people, and systems, or from external events.” However, this definition doesn’t consider all the productive inputs of an operation, and, more critically, does not account for the interaction between internal and external factors.
A broader, more useful definition is “the risk of loss resulting from inadequate or failed productive inputs used in an operational activity.” Operational risk includes a very broad range of occurrences, from fraud to human error to information technology failures. Different production factors can be more or less important among various industries and companies, and relationships among them—particularly where labor is concerned—are changing rapidly. To be effective as tools for managing operational risk day-to-day, models need to account for the specific risk characteristics of a given company as well as how those characteristics can change over time.
Examples of productive inputs relevant to operational risk
• Land: The physical space used to carry out the production process, which may be owned, rented, or otherwise utilized.
• Natural resources: Naturally occurring goods such as water, air, minerals, flora, and fauna.
• Labor: Physical work performed by people.
• Human capital: The value that employees provide through the application of their personal skills, which are not owned by the organization.
• Structural capital: The supportive infrastructure, brand, patents, philosophies, processes, and databases that enable human capital to function.
• Social capital: The stock of trust, mutual understanding, shared values, and socially held knowledge, commonly transmitted throughout an organization as part of its culture.
• Physical capital: The stock of intermediate goods and services used in the production process, such as parts, machines, and buildings.
• Public capital: The stock of public goods and services used but not owned by the organization, such as roads and the Internet.
Every organization tries to reduce operational risk as a basic part of day-to-day operations whether that means enforcing safety procedures or installing antivirus software. Yet not as many take the next steps to holistically assess operational risk, quantify the severity, likelihood, and frequency of different risks, and understand the interdependencies among risk drivers. Companies may see operational risk modelling as an unnecessary cost, or they may not have considered it at all. Yet the right approach to modelling operational risk can support a wide range of best practices within an organization, including:
• Risk assessment: Measuring an organization’s exposure to the full range of operational risks to support awareness and action.
• Economic capital calculation: Setting capital reserves that enable organizations to survive adverse operational events without tying up excessive capital.
• Business continuity and resilience planning: Discovering where material risks lie and changing systems, processes, and procedures to minimize the damage to operations caused by an adverse event.
• Risk appetite and risk limit setting: Creating a coherent policy concerning the amount of operational risk an organization is willing to accept, and monitoring it to ensure the threshold is not breached.
• Stress testing: Modelling how an organization performs in an adverse situation to aid in planning and capital reserving.
• Reverse stress testing: Modelling backward from a catastrophic event to understand which risks are most material to an organization’s solvency.
• Dynamic operational risk management: Monitoring, measuring, and responding to changing characteristics of operational risk driven by shifts in the operating environment, risk management policies, or company structure.
At a more basic level, having a detailed understanding of operational risk simply supports efforts to manage and reduce it—a worthy goal for almost any organization. Modelling enables an organization to consciously set an appropriate balance between operational resilience and profitability.
In order to achieve these goals, it is important to choose a methodology for which the results are accessible and actionable for the decision makers on the front lines of operational risk. Even financial organizations that once chose models primarily to meet regulatory requirements are beginning to move toward models that help the organization actively understand and reduce operational risk. The tangible business benefits are simply too great to ignore.
The state of operational risk modeling in the financial industry today
Basel II allows banks to choose from three approaches to operational risk: the Basic Indicator Approach (BIA), the Standardized Approach (SA), and the Advanced Measurement Approach (AMA). While the BIA and SA are attractively simple and inexpensive to implement, they are ultimately very blunt tools.
While adopting an Advanced Measurement Approach is much more labor-intensive and requires regulatory approval, large institutions recognize that these challenges are outweighed by the benefits of a more sophisticated approach to measuring operational risk. These include improved reputation among investors and other stakeholders, significantly reduced operational risk capital requirements, and, most importantly, better risk management processes that can actually help reduce losses.
The Advanced Measurement Approach brings with it many requirements, but does not require banks to use a specific modeling methodology. Nevertheless, most banks today have converged on the loss distribution approach (LDA). In the LDA, the severity and frequency of operational risk losses are analyzed and modeled separately. Once severity and frequency have been calculated, the aggregate loss distribution is typically generated using Monte Carlo simulation techniques.
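As a rough illustration of the LDA mechanics, the sketch below simulates an aggregate annual loss distribution from an assumed Poisson frequency and lognormal severity. The parameters are placeholders; a real model would fit them to internal and external loss data.

```python
# A minimal LDA sketch: assumed Poisson frequency and lognormal severity,
# aggregated by Monte Carlo simulation. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(42)
n_sims = 100_000               # Monte Carlo scenarios (one year each)
freq_lambda = 25               # assumed mean number of losses per year
sev_mu, sev_sigma = 10.0, 2.0  # assumed lognormal severity parameters

annual_losses = np.empty(n_sims)
for i in range(n_sims):
    n = rng.poisson(freq_lambda)                       # losses this year
    annual_losses[i] = rng.lognormal(sev_mu, sev_sigma, n).sum()

# Statistics of the aggregate loss distribution; the 99.9th percentile
# corresponds to the confidence level used for operational risk capital.
print(f"expected annual loss: {annual_losses.mean():,.0f}")
print(f"99.9% VaR (capital):  {np.quantile(annual_losses, 0.999):,.0f}")
```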
Market risk, counterparty risk, and technical risks specific to health, life, and property and casualty lines of business have long been quantified by insurers in response to regulatory requirements. The measurement of operational risk, on the other hand, has only been incorporated into insurance regulatory frameworks over the past decade, and approaches to modeling it are in their relative infancy. The most common approaches for modeling operational risk focus predominantly on prediction of extreme losses, which provides little in the way of practical guidance to management. In this post, we examine standard methods, and introduce a sophisticated and relatively new approach known as structural or causal modeling.
What is operational risk?
Most regulatory frameworks define risk along the lines of “the risk of loss resulting from inadequate or failed internal processes, people, and systems, or from external events.” This definition is somewhat limited as it doesn’t consider the full range of potential productive inputs that constitute typical operations—and, just as importantly, how operational activities interact with environmental factors outside the organization’s control. In many cases, internal operational failures create a heightened sensitivity to external factors, and it is the interplay among them that can cause severe loss. Therefore, it is useful to define operational risk as “the risk of loss resulting from inadequate or failed productive inputs used in an operational activity.” This accounts for the broad and heterogeneous nature of risk among different industries and even amongst different companies in the same industry.
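To contrast this with the purely statistical view, here is a toy structural sketch in which a single hypothetical driver, staff turnover, feeds the frequency of processing-error losses. The parameters and the causal link are illustrative assumptions, but they show how such a model can answer management questions about acting on drivers rather than only sizing the tail.

```python
# A miniature structural/causal sketch: a hypothetical driver (staff
# turnover) feeds loss frequency, so the model answers "what changes if
# we act on the driver?" All numbers are illustrative, not calibrated.
import numpy as np

rng = np.random.default_rng(7)
n_sims = 50_000

def annual_loss(turnover_rate):
    # Causal link: higher turnover leads to more error events per year.
    lam = 5 + 40 * turnover_rate              # assumed frequency response
    n = rng.poisson(lam, n_sims)
    # Severity is independent of the driver in this toy model.
    return np.array([rng.lognormal(9.0, 1.5, k).sum() for k in n])

base = annual_loss(turnover_rate=0.20)
improved = annual_loss(turnover_rate=0.10)
print(f"mean loss at 20% turnover: {base.mean():,.0f}")
print(f"mean loss at 10% turnover: {improved.mean():,.0f}")
```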
As advances in enterprise risk management (ERM) continue, insurers will encounter new strategic challenges. This Insurance News article highlights Josh Corrigan’s discussion on ERM strategy at the 2013 Actuaries Institute summit in Sydney.
Here is an excerpt:
“During the 1990s [risk management] captured balance sheet interactions, combined with the acceleration of financial risk techniques,” he said. “In the past 10 years the concept of risk appetite has developed and there is a focus on management and governance.”
ERM is now moving towards embedding and understanding how risk fits into an organisation’s culture, Mr Corrigan says.
It is also concerned with risk dynamics and the way various components relate to one another.
“Risk governance is largely focusing on the regulatory framework in which insurers work.
“But organisations need to think about the social structure around ERM and how to deliver risk insight and value to executives and boards.”
Actuaries will play a significant role developing ERM strategies and must engage with people outside the profession as part of that process, Mr Corrigan says.
“We still have a way to go to develop ERM in insurers, and operational risk still needs a lot of work.”
The presentation by Josh Corrigan (Milliman) and Michael Payne (of the Pru) showed how investment hedging, when used together with other levers of risk management, can reduce the amount of capital at risk in a with-profits fund. This avoided the need to reduce equity backing ratios and allowed guaranteed benefit levels to be maintained.
Of particular interest was how the capital resources of the estate were used to smooth assets, which then reduced the volatility of the underlying assets and reduced the cost of hedging.
The capital reduction figures were very large — some people in the audience questioned whether there was real modelling behind the numbers!
Some other people in the audience challenged whether it was really possible to hedge out long-term volatility. One point to consider here is that a lot of the vol risk was reduced by the estate's smoothing mechanism, but after that it wasn't possible to fully hedge out all the long-term vol, only up to about 10 years. The residual ICA capital figures included the effect of this residual risk, at very highly stressed levels, applied every year for the next 30 years. That seems a very tough stress to me.