Apr 16, 2017

All Models are Wrong

A 2016 paper on 'Ersatz Models' (substitute models) by James Stuart et al. of the Institute and Faculty of Actuaries states:

All models are deliberate simplifications of the real world. Attempts to demonstrate a model’s correctness can be expected to fail, or apparently to succeed because of test limitations, such as insufficient data.

We can explain this using an analogy involving milk. Cows’ milk is a staple part of European diets. For various reasons some people avoid it, preferring substitutes, or ersatz milk, for example made from soya. In a chemical laboratory, cows’ milk and soya milk are easily distinguished. Despite chemical differences, soya milk physically resembles cows’ milk in many ways: colour, density and viscosity, for example. For some purposes, soya milk is a good substitute, but other recipes will produce acceptable results only with cows’ milk. The acceptance criteria for soya milk should depend on how the milk is to be used.



In the same way, with sufficient testing, we can always distinguish an ersatz model from whatever theoretical process drives reality. Perfect replication is therefore the wrong target; we should be concerned with a more modest aim: whether the ersatz model is good enough in the aspects that matter, that is, whether the modelling objective has been achieved.
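To make this concrete, here is a minimal sketch (my own illustration, not taken from the paper; the distributions and sample size are assumptions). A normal 'ersatz' model is fitted to data generated from a heavier-tailed 'reference' process: with enough data, a goodness-of-fit test always tells the two apart, yet the ersatz model may still match the centre of the distribution well while understating the tail, so whether it is acceptable depends on the modelling objective.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# "Reference" process: the (unknown) driver of reality - heavy-tailed here.
reference_data = rng.standard_t(df=4, size=100_000)

# "Ersatz" model: a deliberate simplification - a normal distribution
# fitted to the generated data.
mu, sigma = reference_data.mean(), reference_data.std(ddof=1)

# With sufficient data, a goodness-of-fit test always tells them apart ...
ks = stats.kstest(reference_data, "norm", args=(mu, sigma))
print(f"KS p-value: {ks.pvalue:.1e}")  # effectively zero: ersatz rejected

# ... yet the ersatz model may be good enough for central estimates
# while badly understating the tail - the aspect that may actually matter.
for q in (0.50, 0.995):
    print(f"q={q}: reference {np.quantile(reference_data, q):+.2f}, "
          f"ersatz {stats.norm.ppf(q, mu, sigma):+.2f}")
```

The median agrees almost exactly, while the 99.5% quantile of the ersatz model falls well short of the reference: acceptable for a central-estimate objective, not for a tail-risk one.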

The Model Problem
The paper starts with stories of models gone bad. Can the proposed generated-data tests prevent a recurrence?



The Model Risk Working Party has explained how model risks arise not only from quantitative model features but also from social and cultural aspects of how a model is used. When a model fails, a variety of narratives may be offered to describe what went wrong. There may be disagreements between experts about the causes of any crisis, depending on who knew, or could have known, about model limitations. Possible elements include:
  • A new risk emerged from nowhere and there is nothing anyone could have done to anticipate it - sometimes called a “black swan”.
  • The models had unknown weaknesses, which could have been revealed by more thorough testing.
  • Model users were well acquainted with model weaknesses, but these were not communicated to the senior management accountable for the business.
  • Everyone knew about the model weaknesses but they continued to take excessive risks regardless.
Ersatz testing can address some of these, as events too rare to feature in actual data may still occur in generated data (see the sketch after this list). Testing on generated data can also help to improve the corporate culture around model risk, as:
  • Hunches about what might go wrong are substantiated by objective analysis. While a hunch can be dismissed, it is difficult to suppress objective evidence or persuade analysts that the findings are irrelevant.
  • Ersatz tests highlight many model weaknesses, of greater or lesser importance. Experience with generated data testing can de-stigmatise test failure and so reduce the cultural pressure for cover-ups.
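As a hypothetical illustration of why generated data can reach events that actual data cannot (the probability and horizons below are my own assumptions, not the working party's figures): a 1-in-200-year event is usually absent from a couple of decades of real history, but turns up reliably in a large generated sample, so hunches about it can be backed by objective analysis.

```python
import numpy as np

rng = np.random.default_rng(7)
p_event = 1 / 200  # assumed annual probability of an extreme event

# "Actual data": a 20-year observation window - the event is usually
# absent, so real data alone cannot exercise the model's response to it.
actual_years = rng.random(20) < p_event
print("events in 20 actual years:", actual_years.sum())        # usually 0

# "Generated data": 10,000 simulated years from a reference model -
# the rare event now occurs often enough (about 50 times) to be tested.
generated_years = rng.random(10_000) < p_event
print("events in 10,000 generated years:", generated_years.sum())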
We recognise that there is no mathematical solution to determine how extreme the reference models should be. This is essentially a social decision.

Corporate cultures may still arise where too narrow a selection of reference models is tested, and so model weaknesses remain hidden.

  1. Source: Ersatz Model Tests (Conclusions: page 35)
     https://www.actuaries.org.uk/documents/ersatz-model-tests-0
  2. Model Risk Working Party (sessional paper)
     https://www.actuaries.org.uk/documents/sessional-paper-model-risk-daring-open-black-box
  3. The skinny kids are all drinking full cream milk
     http://heffalumpgeneration.co.za/the-skinny-kids-are-all-drinking-full-cream-milk/


Jan 1, 2017

Happy Risk New Year 2017

Happy New Year to all Actuary-Info readers.


The year 2017 will be another year that empowers us to develop new insights on risk management. Driven by economic turbulence and desperate rule-based regulation, we will probably keep trying to capture, control and even eliminate risk, instead of trying to understand and anticipate it.

Dutch Insurance Merger
At the end of 2016, two large Dutch insurers - Nationale Nederlanden (NN) and Delta Lloyd (DL) - decided to go ahead with their merger. Formally it's a takeover of DL by NN.

Driven by a declining DL Solvency II ratio and supported by the concerned Dutch regulator (DNB), DL now finds shelter within NN. Besides the takeover price of € 2.5 billion, NN Group faces a decline in its solvency ratio from around 250% (pre-merger, Q2 2016) to 185% (post-merger, Q3 2016).
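The dilution is simple arithmetic: the combined ratio is roughly the sum of both own-funds positions (less the cash price) over the sum of both capital requirements. A back-of-the-envelope sketch with purely hypothetical figures, chosen only to land near the reported ratios (the real calculation involves diversification benefits, LAC-DT and other transaction effects):

```python
# Hypothetical pro-forma arithmetic (illustrative figures only, not the
# actual NN or DL balance sheets). Solvency II ratio = own funds / SCR.
own_funds_nn, scr_nn = 15.0, 6.0   # EUR bn -> standalone ratio 250%
own_funds_dl, scr_dl = 4.5, 3.0    # EUR bn -> standalone ratio 150%
price = 2.5                        # cash takeover price, EUR bn

# Ignoring diversification benefits and other transaction effects:
combined = (own_funds_nn + own_funds_dl - price) / (scr_nn + scr_dl)
print(f"pro-forma combined ratio: {combined:.0%}")  # ~189%, well below 250%
```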

A strong (background) driver of the merger is DL's expectation: "Delta Lloyd's 4Q16 Solvency II ratio is to be adversely affected by the LAC-DT review by DNB, the possible removal of the risk margin benefit of the longevity hedge and adverse longevity developments."

However, keep in mind that 'all' life insurance companies with 'long tails' have a serious (business case) problem: a problem that is not solvable with money (capital) alone, but necessitates the formulation of a new strategy that goes beyond just "cost control".

As low interest rates continue and Solvency II requirements only increase, more mergers of life companies (with long-tail risks) are to be expected.

When is a merger the right solution?
Although a merger often looks like a perfect solution for 'the problem', it isn't always.....


Several studies estimate the failure rate of mergers and acquisitions at somewhere between 70% and 90% (and in any case above 50%).

The most common general merger fallacies and attention points are addressed in a McKinsey presentation (see links below).


When a merger or takeover is considered, first check the following key points from a risk perspective:

1. Strategy: Bigger is not always better
It's surprising how apparently correct analyses always conclude that 'bigger' is 'better', while we know that 'bigger' contributes to 'too big to fail', decreasing cost efficiency, less flexibility (less agility) and less innovative capacity (e.g. in Fintech applications).

For successful mergers or takeovers, just applying traditional capital management (and Solvency II rules) isn't enough. In all cases a well-defined, checked and supported 'new strategy' (plan), including a strong 'business case', is a first requirement.
   


Always investigate these (adverse) merger effects and the draft new strategy in the due diligence phase of a merger.


2. Increasing complexity effects
Is the change in complexity (IT, communication, products, distribution channels, etc.) measured and addressed in the merger/takeover? If complexity increases beyond certain levels, targeted cost reductions may not be met. These costs are often underestimated.

Always try to measure and address complexity in the due diligence phase of a merger.

3. Consistency 
Always check the consistency of (financial) analyses. If certain (actuarial) analyses, audits or valuation methods are applied (one-sidedly) only to the company to be acquired and not to the acquiring company, consistency clearly fails and merger conclusions are probably biased.

Whether it's a "takeover" or a 'merger', or how power in the board is arranged, doesn't really matter: both companies should be compared on the same basis.
Always check on consistency in the due diligence phase of a merger.

Finally
Success with risk at your merger table in 2017!


Links/Sources:
- Bigger is better wine glass
- The Big Idea: The New M&A Playbook
- Mergers and acquisitions failure rates and perspectives on why they fail
- NN Group and Delta Lloyd agree on recommended transaction: http://tinyurl.com/NNDLtakeover
- DNB Research (Bikker): Is there an optimal pension fund size?
- DNB examination into complex IT environments
- 70% of Transformation Programs Fail (McKinsey)
- McKinsey: Where mergers go wrong