This Milliman Asia ERM Newsletter highlights the latest developments in enterprise risk management (ERM) across the Asia Pacific region. ERM activity in the insurance sector is accelerating at a rapid pace around the region, especially since a number of regulators have introduced Own Risk and Solvency Assessments (ORSA). Even in countries where ORSA has not been introduced yet, there is an increased interest among risk managers who realize the value that ERM can add to their business through enhanced business resilience.
The newsletter features regulatory and market developments related to ERM from India, Singapore, and Thailand. An article by Neil Cantle on the complexity of risk within businesses is also included.
In this interview with InsuranceERM (subscription required), Milliman’s Neil Cantle and Elliot Varnell reflect on key issues impacting Europe’s insurance industry in 2013. They also discuss some challenges the industry may face in 2014.
Here’s an excerpt from the interview:
What will 2013 be remembered for?
Varnell: I would suggest that it was the year that Solvency II was finally “agreed” at the top level after a few years of debate and wrangling between the Council, Commission and Parliament.
Ironically, it was also the year when economically based regulatory capital was to some extent de-emphasised, as the PRA published on Early Warning Indicators (see IERM, 4 October) and the FSB announced its G-SII list (see IERM, 19 July) and kicked off a project through the IAIS to come up with a global metric for regulatory capital (see IERM, 12 December).
But it was also the year that many insurers – especially life insurers – rebalanced their focus away from Solvency II and regulatory capital and turned to looking for the best opportunities for value creation in their business. Product development and investment in infrastructure stand out as areas where insurers have refocused on value creation.
What will be the biggest ERM challenge of 2014?
Cantle: I think many firms are still struggling to bring ERM to life and make it truly operational. If ERM is done simply as a compliance exercise then it can cost a lot of money and simply be a burden. If it is done to bring insights to the business and improve the opportunity for discussion about performance uncertainty then it can improve resilience and add significant long-term value to the business. The challenge is therefore to look beyond templates and documentation and make it strategic. Concepts like risk appetite require a multivariate view of performance, so that indicators are seen in context, and many firms still cannot do that.
Techniques for assessing operational risk have come a long way in the past ten years. Today, many companies are going beyond the regulatory minimum to implement sophisticated models that contribute to better understanding and management of operational risk across the business.
One question that tends to push the limits of existing models, however, is identifying emerging operational risk before it produces a loss. Given that risk events are typically not entirely new but rather simply new combinations of known risks, an approach that enables us to analyze which risk drivers exhibit evolutionary change can identify which ones are most likely to create emergent risks. By borrowing a technique from biology—phylogenetics, the study of evolutionary relationships—we can understand how certain characteristics of risk drivers evolve over time to generate new risks. The success of such an approach is heavily dependent on the degree to which operational risk loss data is available, coherent, compatible, and comprehensive. A well-structured loss data collection (LDC) framework can be a key asset in attempting to understand and manage emergent risks.
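The phylogenetic idea above can be made concrete with a simple clustering exercise. The sketch below is purely illustrative: it describes each operational risk event by a hypothetical vector of binary "characters" (the kind of attributes an LDC framework might capture) and groups events by how many characteristics they share, loosely analogous to building an evolutionary tree. The risk names and character vectors are invented for the example; a real analysis would use coded attributes from an actual loss database and proper phylogenetic software.

```python
# Illustrative sketch: grouping operational risk events by shared
# characteristics, loosely analogous to a phylogenetic tree.
# Risk names and character vectors below are hypothetical.
from itertools import combinations

# Each risk is described by binary "characters" from an assumed LDC
# coding, e.g. (people involved, process failure, IT system,
# external trigger, fraud intent).
risks = {
    "rogue_trading": (1, 1, 0, 0, 1),
    "payment_error": (1, 1, 0, 0, 0),
    "system_outage": (0, 0, 1, 1, 0),
    "cyber_attack":  (0, 0, 1, 1, 1),
}

def hamming(a, b):
    """Number of characteristics on which two risks differ."""
    return sum(x != y for x, y in zip(a, b))

def single_linkage(risks):
    """Agglomerative clustering; returns the merge order (a crude tree)."""
    clusters = {name: [name] for name in risks}
    merges = []
    while len(clusters) > 1:
        # Merge the two clusters whose closest members differ least.
        a, b = min(
            combinations(clusters, 2),
            key=lambda pair: min(
                hamming(risks[x], risks[y])
                for x in clusters[pair[0]] for y in clusters[pair[1]]
            ),
        )
        clusters[a + "+" + b] = clusters.pop(a) + clusters.pop(b)
        merges.append((a, b))
    return merges

for a, b in single_linkage(risks):
    print(f"merge: {a} <-> {b}")
```

Risks that merge early share many drivers; a genuinely novel combination of characters would sit on a long, isolated branch, flagging it as a candidate emergent risk.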
Broadening the definition of operational risk
In the financial industry, where operational risk has been a significant target of regulators for more than a decade, operational risk is typically defined as “the risk of loss resulting from inadequate or failed internal processes, people, and systems, or from external events.” However, this definition doesn’t consider all the productive inputs of an operation, and, more critically, does not account for the interaction between internal and external factors.
A broader, more useful definition is “the risk of loss resulting from inadequate or failed productive inputs used in an operational activity.” Operational risk includes a very broad range of occurrences, from fraud to human error to information technology failures. Different production factors can be more or less important among various industries and companies, and relationships among them—particularly where labor is concerned—are changing rapidly. To be effective as tools for managing operational risk day-to-day, models need to account for the specific risk characteristics of a given company as well as how those characteristics can change over time.
Examples of productive inputs relevant to operational risk
• Land: The physical space used to carry out the production process, which may be owned, rented, or otherwise utilized.
• Natural resources: Naturally occurring goods such as water, air, minerals, flora, and fauna.
• Labor: Physical work performed by people.
• Human capital: The value that employees provide through the application of their personal skills, which are not owned by an organization.
• Structural capital: The supportive infrastructure, brand, patents, philosophies, processes, and databases that enable human capital to function.
• Social capital: The stock of trust, mutual understanding, shared values, and socially held knowledge, commonly transmitted throughout an organization as part of its culture.
• Produced capital: The stock of intermediate goods and services used in the production process, such as parts, machines, and buildings.
• Public capital: The stock of public goods and services used but not owned by the organization, such as roads and the Internet.
Every organization tries to reduce operational risk as a basic part of day-to-day operations whether that means enforcing safety procedures or installing antivirus software. Yet not as many take the next steps to holistically assess operational risk, quantify the severity, likelihood, and frequency of different risks, and understand the interdependencies among risk drivers. Companies may see operational risk modelling as an unnecessary cost, or they may not have considered it at all. Yet the right approach to modelling operational risk can support a wide range of best practices within an organization, including:
• Risk assessment: Measuring an organization’s exposure to the full range of operational risks to support awareness and action.
• Economic capital calculation: Setting capital reserves that enable organizations to survive adverse operational events without tying up excessive capital.
• Business continuity and resilience planning: Discovering where material risks lie and changing systems, processes, and procedures to minimize the damage to operations caused by an adverse event.
• Risk appetite and risk limit setting: Creating a coherent policy concerning the amount of operational risk an organization is willing to accept, and monitoring it to ensure the threshold is not breached.
• Stress testing: Modelling how an organization performs in an adverse situation to aid in planning and capital reserving.
• Reverse stress testing: Modelling backward from a catastrophic event to understand which risks are most material to an organization’s solvency.
• Dynamic operational risk management: Monitoring, measuring, and responding to changing characteristics of operational risk due to shifts in the operating environment, risk management policies, or company structure.
At the more basic level, having a detailed understanding of operational risk simply supports efforts to manage and reduce it—a worthy goal for almost any organization. Modelling enables an organization to consciously set an appropriate balance between operational resilience and profitability.
In order to achieve these goals, it is important to choose a methodology for which the results are accessible and actionable for the decision makers on the front lines of operational risk. Even financial organizations that once chose models primarily to meet regulatory requirements are beginning to move toward models that help the organization actively understand and reduce operational risk. The tangible business benefits are simply too great to ignore.
The state of operational risk modeling in the financial industry today
Basel II allows banks to choose from three approaches to operational risk: the Basic Indicator Approach (BIA), the Standardized Approach (SA), and the Advanced Measurement Approach (AMA). While the BIA and SA are attractively simple and inexpensive to implement, they are ultimately very blunt tools.
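To see just how blunt the simplest option is, consider the Basic Indicator Approach: under Basel II, capital is 15% (the alpha factor) of average annual gross income over the previous three years, with years of zero or negative income excluded from both the numerator and the denominator. A minimal sketch (the income figures are made up for illustration):

```python
# Sketch of the Basel II Basic Indicator Approach (BIA): capital equals
# alpha (15%) times the average of positive annual gross income over the
# previous three years; non-positive years are excluded entirely.
ALPHA = 0.15

def bia_capital(gross_income_3y):
    """Operational risk capital under the BIA, given three years of gross income."""
    positive = [gi for gi in gross_income_3y if gi > 0]
    if not positive:
        return 0.0
    return ALPHA * sum(positive) / len(positive)

# Illustrative figures only (currency units in millions):
print(bia_capital([100.0, -20.0, 130.0]))  # 0.15 * (100 + 130) / 2 = 17.25
```

Note that nothing in the formula reflects the bank's actual loss experience, controls, or business mix, which is why it offers no leverage for managing risk.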
While adopting an Advanced Measurement Approach is much more labor-intensive and requires regulatory approval, large institutions recognize that these challenges are outweighed by the benefits of a more sophisticated approach to measuring operational risk. These include improved reputation among investors and other stakeholders, significantly reduced operational risk capital requirements, and, most importantly, better risk management processes that can actually help reduce losses.
The Advanced Measurement Approach brings with it many requirements, but does not require banks to use a specific modeling methodology. Nevertheless, most banks today have converged on the loss distribution approach (LDA). In the LDA, the severity and frequency of operational risk losses are analyzed and modeled separately. Once severity and frequency have been calculated, the aggregate loss distribution is typically generated using Monte Carlo simulation techniques.
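The frequency/severity split described above can be sketched in a few lines. In this minimal example, event counts are drawn from a Poisson distribution and individual loss amounts from a lognormal distribution; the aggregate annual loss is built up by Monte Carlo simulation. The parameter values are illustrative and uncalibrated, and a production LDA model would fit these distributions to actual loss data per risk cell.

```python
# Minimal sketch of the loss distribution approach (LDA): frequency and
# severity are modelled separately, then combined by Monte Carlo simulation
# into an aggregate annual loss distribution. Parameters are illustrative.
import math
import random

random.seed(42)

LAMBDA = 3.0           # expected number of loss events per year
MU, SIGMA = 10.0, 1.2  # lognormal severity parameters (log scale)
N_SIMS = 50_000

def poisson(lam):
    """Knuth's algorithm for sampling a Poisson-distributed count."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def simulate_annual_loss():
    """One simulated year: sum of lognormal losses over a Poisson count of events."""
    n_events = poisson(LAMBDA)
    return sum(random.lognormvariate(MU, SIGMA) for _ in range(n_events))

losses = sorted(simulate_annual_loss() for _ in range(N_SIMS))

mean_loss = sum(losses) / N_SIMS
var_995 = losses[int(0.995 * N_SIMS) - 1]  # empirical 99.5th percentile
print(f"mean annual loss: {mean_loss:,.0f}")
print(f"99.5% VaR:        {var_995:,.0f}")
```

The heavy right tail of the lognormal severity is what drives the gap between the mean loss and the high quantiles used for capital, which is why severity modelling receives so much attention in practice.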
Market risk, counterparty risk, and technical risks specific to health, life, and property and casualty lines of business have long been quantified by insurers in response to regulatory requirements. The measurement of operational risk, on the other hand, has only been incorporated into insurance regulatory frameworks over the past decade, and approaches to modeling it are in their relative infancy. The most common approaches for modeling operational risk focus predominantly on prediction of extreme losses, which provides little in the way of practical guidance to management. In this post, we examine standard methods, and introduce a sophisticated and relatively new approach known as structural or causal modeling.
What is operational risk?
Most regulatory frameworks define operational risk along the lines of “the risk of loss resulting from inadequate or failed internal processes, people, and systems, or from external events.” This definition is somewhat limited as it doesn’t consider the full range of potential productive inputs that constitute typical operations—and, just as importantly, how operational activities interact with environmental factors outside the organization’s control. In many cases, internal operational failures create a heightened sensitivity to external factors, and it is the interplay among them that can cause severe loss. Therefore, it is useful to define operational risk as “the risk of loss resulting from inadequate or failed productive inputs used in an operational activity.” This accounts for the broad and heterogeneous nature of risk among different industries and even among different companies in the same industry.
As Solvency II priorities move out, it’s time to refocus enterprise risk management (ERM) back on strategic issues, says Milliman’s Neil Cantle. But he warns of regulators’ increasing focus on liquidity and resolution as a new trend that firms will need to incorporate into their thinking.
In this recent InsuranceERM (subscription required) interview, Cantle talks about the developments of Solvency II in 2012, the biggest challenges facing ERM in 2013, and more.
Here is an excerpt from the interview:
What will be the biggest ERM challenge of 2013?
Many firms have now tackled the obvious parts of their risk frameworks and are back to the traditionally difficult areas like emerging risk, operational risk, risk appetite and limit frameworks and the whole area of risk culture. These are less mechanical or quantifiable due to their fundamental complexity and inherent interactions with people. Traditional methods simply break down when applied to these areas.
Solvency II provided a useful imperative, which has helped firms to focus on improving the maturity of their ERM frameworks. Although some had higher aspirations, many firms had aimed to have a workable basic solution in place by the Solvency II launch date with improvement programmes running thereafter. There is a danger that delays in Solvency II could weaken their resolve to move ahead and leave many companies with only adequate solutions in place for much longer than they had intended. However, it seems likely that many of the major markets will press on with the risk management requirements ahead of full implementation, so perhaps all is not lost.
The continuation of subdued economic conditions also poses a serious challenge. The hope of a return to pre-crisis market levels is no longer realistic in the medium term, so firms are having to think creatively about how to deliver attractive products without historical return levels to support them. This needs to be achieved without taking unduly high risk onto the balance sheet or passing risks back to policyholders that they are not expecting.
At the UK Actuarial Profession’s 2012 Life Convention in Brussels we ran our iPad survey again to poll attendees’ views on a range of topics. Around half of those who attended completed the survey, which makes the results likely representative of prevailing moods.
First we asked the question everyone wants to know: When is Solvency II actually going to commence? Almost everyone is now expecting it to be 2016 or later (over 10% have given up all hope!).
We were also interested in finding out what firms intended to do if Solvency II is delayed to 2016. Over 30% will scale down their development activities and about the same proportion will integrate them into business as usual (BAU). Over 5% would actually stop development altogether.
Asking about the length of time taken to produce quarterly Solvency II results it seems that people are looking at run times of either 1-4 weeks or 1-2 months. Worryingly though, nearly 20% look as though they are going to be running models almost continuously.
In ERM the topic of risk culture is rising rapidly up the agenda, with over 30% saying that this aspect of ERM challenged them most. Nearly 25% of respondents are still concerned about risk appetite and over 20% are getting to grips with operational risk. Around 17% have yet to crack emerging risks.
Looking at the issue of retirement income provision nearly 70% think that working longer is the most likely response to insufficient retirement income.
Views on the Euro seemed somewhat split with 50% thinking it will endure and 30% fearing its demise is imminent. It would be interesting to know whether the scores changed significantly after Andrew Goodwin gave us glimmers of hope in his plenary session.
It seems that the visitors to our stand are hot on geography as over 75% accurately placed “the other Brussels” in Canada.
Reverse stress testing focuses on the sustainability of a business model and may help identify obscure underlying risks. Neil Cantle is quoted in this Risk.net (subscription needed) article discussing the importance of recognizing “tipping point” scenarios that could damage a business. Here is an excerpt:
It is also important not to focus on only big events. “It is possible for non-viability to arise from a slow buildup of smaller events too,” says Cantle.
… An important element of the detailed analysis can be identifying tipping points at which a slow build-up of events or circumstances reaches a critical mass and undermines the business. “An organisation can be apparently operating quite normally, but slowly becoming more sensitive to a particular stimulus. When that trigger happens, the company could unravel very quickly indeed,” says Cantle of Milliman.
This is a key insight from the science of complex systems, so understanding how these types of scenarios occur is crucially important for reverse stress testing, Cantle says. “If an insurer is close to a tipping point, it is unlikely that it will be able to avoid it and so it should focus its resources on resilience. If it is sufficiently far from the tipping point and the speed of onset permits it to take action in the time remaining, then it may be able to devote resources to avoidance or mitigation. Knowing which path to follow is a key learning exercise,” he adds.
Neil Cantle and Elliot Varnell are among the ten actuaries to receive the Chartered Enterprise Risk Actuary (CERA) qualification from the Institute and Faculty of Actuaries “in recognition of their exceptional roles as thought leaders in the field of enterprise risk management.”
Here is an excerpt from the official press release:
Philip Scott, President of the Institute and Faculty of Actuaries said:
“This award not only recognises the major contribution these individuals have made to thought leadership in the field of Enterprise Risk Management, but also their commitment to embedding ERM within industry practice.
“There are now 108 CERA qualified actuaries working in a wide variety of roles. From regulators and consultants to insurers and asset managers, the CERA qualification is proving an invaluable asset to actuaries as they apply their skill-sets to new challenges.
“Our new ERM thought leaders will act as ambassadors for both ERM and the CERA qualification, impressing on both actuaries and the wider business community the value of the qualification.”