Techniques for assessing operational risk have come a long way in the past ten years. Today, many companies are going beyond the regulatory minimum to implement sophisticated models that contribute to better understanding and management of operational risk across the business.
One question that tends to push the limits of existing models, however, is identifying emerging operational risk before it produces a loss. Given that risk events are typically not entirely new but rather simply new combinations of known risks, an approach that enables us to analyze which risk drivers exhibit evolutionary change can identify which ones are most likely to create emergent risks. By borrowing a technique from biology—phylogenetics, the study of evolutionary relationships—we can understand how certain characteristics of risk drivers evolve over time to generate new risks. The success of such an approach is heavily dependent on the degree to which operational risk loss data is available, coherent, compatible, and comprehensive. A well-structured loss data collection (LDC) framework can be a key asset in attempting to understand and manage emergent risks.
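To make the phylogenetic analogy concrete, each risk driver can be encoded as a vector of characteristics and a tree built from pairwise distances between those vectors. The sketch below is a minimal illustration under invented assumptions: the driver names and characteristic scores are hypothetical, and simple single-linkage clustering stands in for a full phylogenetic reconstruction.

```python
from itertools import combinations

# Hypothetical risk drivers scored on shared characteristics
# (automation level, external dependency, human involvement, data intensity).
# All names and scores are illustrative, not drawn from real loss data.
drivers = {
    "manual data entry":   [0.1, 0.2, 0.9, 0.6],
    "third-party cloud":   [0.9, 0.9, 0.1, 0.8],
    "algorithmic trading": [0.9, 0.4, 0.2, 0.9],
    "branch operations":   [0.2, 0.3, 0.8, 0.3],
}

def distance(a, b):
    """Euclidean distance between two characteristic vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Single-linkage agglomerative clustering: repeatedly merge the two
# closest clusters, recording the merge order as a crude evolutionary tree.
clusters = {name: [name] for name in drivers}
tree = []
while len(clusters) > 1:
    a, b = min(combinations(clusters, 2),
               key=lambda pair: min(distance(drivers[x], drivers[y])
                                    for x in clusters[pair[0]]
                                    for y in clusters[pair[1]]))
    tree.append((a, b))
    clusters[f"({a}, {b})"] = clusters.pop(a) + clusters.pop(b)

for step, (a, b) in enumerate(tree, 1):
    print(f"merge {step}: {a} + {b}")
```

Drivers that merge early share many characteristics and are the natural candidates to recombine into an emergent risk.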
Broadening the definition of operational risk
In the financial industry, where operational risk has been a significant target of regulators for more than a decade, operational risk is typically defined as “the risk of loss resulting from inadequate or failed internal processes, people, and systems, or from external events.” However, this definition doesn’t consider all the productive inputs of an operation, and, more critically, does not account for the interaction between internal and external factors.
A broader, more useful definition is “the risk of loss resulting from inadequate or failed productive inputs used in an operational activity.” Operational risk includes a very broad range of occurrences, from fraud to human error to information technology failures. Different production factors can be more or less important among various industries and companies, and relationships among them—particularly where labor is concerned—are changing rapidly. To be effective as tools for managing operational risk day-to-day, models need to account for the specific risk characteristics of a given company as well as how those characteristics can change over time.
Examples of productive inputs relevant to operational risk
• Land: The physical space used to carry out the production process, which may be owned, rented, or otherwise utilized.
• Natural resources: Naturally occurring goods such as water, air, minerals, flora, and fauna.
• Labor: Physical work performed by people.
• Human capital: The value that employees provide through the application of their personal skills, which are not owned by the organization.
• Intellectual capital: The supportive infrastructure, brand, patents, philosophies, processes, and databases that enable human capital to function.
• Social capital: The stock of trust, mutual understanding, shared values, and socially held knowledge, commonly transmitted throughout an organization as part of its culture.
• Physical capital: The stock of intermediate goods and services used in the production process, such as parts, machines, and buildings.
• Public capital: The stock of public goods and services used but not owned by the organization, such as roads and the Internet.
Every organization tries to reduce operational risk as a basic part of day-to-day operations whether that means enforcing safety procedures or installing antivirus software. Yet not as many take the next steps to holistically assess operational risk, quantify the severity, likelihood, and frequency of different risks, and understand the interdependencies among risk drivers. Companies may see operational risk modelling as an unnecessary cost, or they may not have considered it at all. Yet the right approach to modelling operational risk can support a wide range of best practices within an organization, including:
• Risk assessment: Measuring an organization’s exposure to the full range of operational risks to support awareness and action.
• Economic capital calculation: Setting capital reserves that enable organizations to survive adverse operational events without tying up excessive capital.
• Business continuity and resilience planning: Discovering where material risks lie and changing systems, processes, and procedures to minimize the damage to operations caused by an adverse event.
• Risk appetite and risk limit setting: Creating a coherent policy concerning the amount of operational risk an organization is willing to accept, and monitoring it to ensure the threshold is not breached.
• Stress testing: Modelling how an organization performs in an adverse situation to aid in planning and capital reserving.
• Reverse stress testing: Modelling backward from a catastrophic event to understand which risks are most material to an organization’s solvency.
• Dynamic operational risk management: Monitoring, measuring, and responding to changing characteristics of operational risk due to shifts in the operating environment, risk management policies, or company structure.
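As a minimal illustration of the first practice on the list, risk assessment, the sketch below ranks a hypothetical risk register by a simple likelihood-times-severity score. The risks and scores are invented; a real assessment would also weigh control effectiveness, loss velocity, and interdependencies among drivers.

```python
# Hypothetical risk register: (risk, likelihood 1-5, severity 1-5).
# Entries and scores are illustrative only.
register = [
    ("payment fraud", 4, 3),
    ("data-centre outage", 2, 5),
    ("manual processing error", 5, 2),
    ("regulatory breach", 1, 5),
]

# Rank by a crude exposure score to prioritize awareness and action.
ranked = sorted(register, key=lambda r: r[1] * r[2], reverse=True)
for name, likelihood, severity in ranked:
    print(f"{name:25s} score={likelihood * severity}")
```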
At the more basic level, having a detailed understanding of operational risk simply supports efforts to manage and reduce it—a worthy goal for almost any organization. Modelling enables an organization to consciously set an appropriate balance between operational resilience and profitability.
In order to achieve these goals, it is important to choose a methodology for which the results are accessible and actionable for the decision makers on the front lines of operational risk. Even financial organizations that once chose models primarily to meet regulatory requirements are beginning to move toward models that help the organization actively understand and reduce operational risk. The tangible business benefits are simply too great to ignore.
The state of operational risk modeling in the financial industry today
Basel II allows banks to choose from three approaches to operational risk: the Basic Indicator Approach (BIA), the Standardized Approach (SA), and the Advanced Measurement Approach (AMA). While the BIA and SA are attractively simple and inexpensive to implement, they are ultimately very blunt tools.
While adopting an Advanced Measurement Approach is much more labor-intensive and requires regulatory approval, large institutions recognize that these challenges are outweighed by the benefits of a more sophisticated approach to measuring operational risk. These include improved reputation among investors and other stakeholders, significantly reduced operational risk capital requirements, and, most importantly, better risk management processes that can actually help reduce losses.
The Advanced Measurement Approach brings with it many requirements, but does not require banks to use a specific modeling methodology. Nevertheless, most banks today have converged on the loss distribution approach (LDA). In the LDA, the severity and frequency of operational risk losses are analyzed and modeled separately. Once severity and frequency have been calculated, the aggregate loss distribution is typically generated using Monte Carlo simulation techniques.
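A minimal sketch of the LDA aggregation step, assuming a Poisson frequency distribution and a lognormal severity distribution with invented parameters; in practice both distributions are fitted to internal and external loss data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed (not fitted) parameters: annual event count ~ Poisson(lam),
# individual loss severity ~ lognormal(mu, sigma).
lam, mu, sigma = 10, 9.0, 2.0
n_sims = 50_000

# Monte Carlo aggregation: for each simulated year, draw an event count,
# then draw and sum that many individual loss severities.
counts = rng.poisson(lam, size=n_sims)
aggregate = np.array([rng.lognormal(mu, sigma, size=n).sum() for n in counts])

print(f"mean annual loss: {aggregate.mean():,.0f}")
# Basel's AMA soundness standard is a 99.9% confidence level over one year.
print(f"99.9% VaR: {np.quantile(aggregate, 0.999):,.0f}")
```

The heavy right tail of the lognormal severity is what drives the gap between the mean loss and the 99.9% quantile, and hence the capital requirement.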
Market risk, counterparty risk, and technical risks specific to health, life, and property and casualty lines of business have long been quantified by insurers in response to regulatory requirements. The measurement of operational risk, on the other hand, has only been incorporated into insurance regulatory frameworks over the past decade, and approaches to modeling it are in their relative infancy. The most common approaches for modeling operational risk focus predominantly on prediction of extreme losses, which provides little in the way of practical guidance to management. In this post, we examine standard methods, and introduce a sophisticated and relatively new approach known as structural or causal modeling.
What is operational risk?
Most regulatory frameworks define risk along the lines of “the risk of loss resulting from inadequate or failed internal processes, people, and systems, or from external events.” This definition is somewhat limited as it doesn’t consider the full range of potential productive inputs that constitute typical operations—and, just as importantly, how operational activities interact with environmental factors outside the organization’s control. In many cases, internal operational failures create a heightened sensitivity to external factors, and it is the interplay among them that can cause severe loss. Therefore, it is useful to define operational risk as “the risk of loss resulting from inadequate or failed productive inputs used in an operational activity.” This accounts for the broad and heterogeneous nature of risk among different industries and even amongst different companies in the same industry.
As companies implement Solvency II programs, operational risk, often seen as a catch-all for ‘other’ risks, is being recognized as having greater impact than was previously realized.
Modeling and management of operational risks, and preparing companies to be more robust to these risks, are now seen as key aspects of sound insurance management.
Operational risk is also moving up companies’ agendas because the capital charge under the Solvency II Pillar I standard-formula calculation is a rather crude measure—it is essentially based on business volumes. While this has the benefit of simplicity, it may lead to what could be considered excessive capital requirements and falls short of the principles underlying the Own Risk and Solvency Assessment (ORSA).
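A stylized sketch of why the standard-formula charge is "essentially based on business volumes": the function below follows the commonly quoted standard-formula structure, with growth add-ons omitted and invented inputs; the authoritative specification is in the Solvency II Delegated Regulation.

```python
# Stylized Solvency II standard-formula operational risk charge.
# Factors follow the commonly quoted standard formula (growth add-ons
# omitted); inputs are illustrative, not from any real balance sheet.
def scr_operational(bscr, earn_life, earn_nonlife, tp_life, tp_nonlife,
                    exp_unit_linked=0.0):
    # Volume measures: earned premiums and technical provisions.
    op_premiums = 0.04 * earn_life + 0.03 * earn_nonlife
    op_provisions = 0.0045 * max(0.0, tp_life) + 0.03 * max(0.0, tp_nonlife)
    op = max(op_premiums, op_provisions)
    # Charge is volume-driven, capped at 30% of the basic SCR, plus a
    # loading on unit-linked expenses.
    return min(0.30 * bscr, op) + 0.25 * exp_unit_linked

charge = scr_operational(bscr=500e6, earn_life=800e6, earn_nonlife=0.0,
                         tp_life=6_000e6, tp_nonlife=0.0)
print(f"operational risk charge: {charge:,.0f}")
```

Note that nothing in the calculation reflects the quality of a company's controls or its actual loss experience, which is precisely the criticism the post raises against the standard formula.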
A new white paper by Milliman consultants provides a brief summary of how companies are currently approaching operational risk under Solvency II, and offers some suggestions for improvements using innovative techniques.
Here is an excerpt from the paper:
The modeling and management of operational risk is rapidly moving up companies' priority lists as recognition grows of the potentially lethal nature of these risks, their often inherent unknowability and, if nothing else, the significant capital charges that can emerge from the standard-formula approach.
More sophisticated approaches are becoming available that not only integrate the modeling and management of operational risk but also generate insights into the complex risk stream running unseen through the bedrock of a company. These approaches allow appropriate risk mitigation and increasingly robust measures to be developed and embedded into business processes.
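To hint at what a structural (causal) model looks like, the sketch below wires up one invented causal link (an IT outage raising both the probability and the severity of a processing failure) and simulates the resulting losses; all probabilities and parameters are illustrative assumptions, where a real model would be calibrated to loss data and expert judgment.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Invented causal structure: outage -> failure probability -> loss size.
outage = rng.random(n) < 0.05                  # root cause: IT outage
p_fail = np.where(outage, 0.40, 0.02)          # failure prob depends on cause
failure = rng.random(n) < p_fail
# Failures during an outage draw from a heavier severity distribution.
loss = np.where(failure,
                rng.lognormal(np.where(outage, 12.0, 10.0), 1.0, n),
                0.0)

share = loss[outage].sum() / loss.sum()
print(f"loss share attributable to outage-driven failures: {share:.0%}")
```

Unlike a pure loss-distribution fit, this kind of model lets management ask "what if" questions, such as how total losses change if outage frequency is halved, because the causal drivers are explicit.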
Download and read the white paper here.