
Jun 28, 2013

Confidence Level Crisis

When you're - like me - a born professional optimist who nevertheless sometimes worries about the unavoidable misery in the world, you ask yourself this question:

Why does God not act? 

Think about this question and try to answer it before reading any further...

The answer to this question is very simple:

God does not act because he's conscious of everything  

The moral of this anecdote is that when you're fully aware of all the risks and their possible impact, chances are high that you won't be able to take any well-argued decision at all, as every decision will eventually fail if your objective is to rule out all possible risks.

You see, a question has come up that we can't agree on,
perhaps because we've read too many books.

Bertolt Brecht, Life of Galileo (Leben des Galilei)

On the other hand, if you're not risk-conscious at all regarding a decision to be taken, most probably you'll take the wrong decision.

'Mathematically Confident'
So this leaves us with the inevitable conclusion that in our eagerness to take risk-based decisions, a reasoned decision is nothing more than the somehow optimized outcome of a weighted sum of a limited number of subjectively perceived risks. 'Perceived' and 'weighted', thanks to the fact that we're unaware of certain risks, or 'filter', 'manipulate' or 'model' risks in such a way that we can be 'mathematically confident'. In other words, we've become victims of the "My calculator tells me I'm right! - Effect".

Risk Consciousness Fallacy
This way of taking risk-based decisions has the 'advantage' that practice will prove it's never quite right, implying you can gradually 'adjust', 'improve' or 'optimize' your decision model endlessly.
Endlessly, up to the point where you've included so many new or adjusted risk sources and possible impacts that the degrees of freedom for taking a 'confident' decision have shrunk to zero.

Risk & Investment Management Crisis
After a number of crises - in particular the 2008 systemic crisis - we've come to the point that we realize:
  • There are many more types of risk than we thought there would be
  • Most types of risk are nonlinear instead of linear
  • New risks are constantly 'born'
  • We'll never be able to identify or significantly control every possible kind of risk
  • Our current (outdated) investment model can't capture nonlinear risk
  • Most (investment) risks depend heavily on political measures and policy
  • Investment risks are driven more by artificial and political factors than by statistics
  • Market Values are 'manipulable' and therefore 'artificial'
  • Risk free rates are volatile, unsure and decreasing
  • Traditional mathematically calculated 'confidence levels' fall short (model risk)
  • As Confidence Levels rise, Confidence Intervals and Value at Risk increase

One of the most basic implicit fallacies in investment modeling is that mathematical confidence levels based on historical data are treated as 'trusted' confidence levels for future projections. The key point is that a confidence level is itself a conditional (Bayesian) probability.

Let's illustrate this in short.
A calculated model confidence level (CL) is only valid under the 'condition' that the 'Risk Structure' (e.g. mean, standard deviation, higher moments, etc.) of the analysed historical data set (H) used for modeling is also valid in the future (F). This implies that our traditional confidence level is in fact a conditional probability: P(confidence level = x% | F=H).

  • The (increasing) Basel III confidence level is set at P( x ∈ VaR-Confidence-Interval | F=H) = 99.9%, in accordance with a one-year default level of 0.1% (= 100% - 99.9%).
  • Now please estimate roughly the probability P(F=H), that the risk structure of the historical (asset classes and obligations) data set (H) that is used for Basel III calculations, will also be 100% valid in the near future (F).
  • Let's assume you rate this probability based on the enormous economic shifts in our economy (optimistic and independent) at P(F=H)=95% for the next year.
  • The actual unconditional confidence level now becomes P( x ∈ VaR-Confidence-Interval) = P( x ∈ VaR-Confidence-Interval | F=H) × P(F=H) = 99.9% × 95% = 94.905%
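The arithmetic above can be checked in a few lines of Python (a minimal sketch, not from the original post; the 95% estimate of P(F=H) is the illustrative assumption made in the bullets):

```python
# Basel III confidence level, valid only under the condition that the
# future risk structure equals the historical one (F = H)
cl_conditional = 0.999

# Illustrative (assumed) probability that F = H actually holds next year
p_f_equals_h = 0.95

# Unconditional level: P(x in VaR-CI) = P(x in VaR-CI | F=H) * P(F=H)
cl_unconditional = cl_conditional * p_f_equals_h
print(round(cl_unconditional * 100, 3))  # 94.905
```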
Although many remarks could be made about whether the above method is scientifically 100% correct, one thing is sure: traditional risk methods combined with sky-high confidence levels fall short in times of economic shifts (currency wars, economic stagnation, etc.). Or in other words:

Unconditional confidence levels of financial institutions will be in line with our own poor economic forecast confidence levels. 

A detailed Societe Generale (SG) report tells us that not only economic indicators like GDP growth, but also stocks, cannot be forecasted by analysts.

Over the period 2000-2006 the US average 24-month forecast error is 93% (12-month: 47%). With an average 24-month forecast error of 95% (12-month: 43%), Europe doesn't do any better. Forecasts with errors on this scale are totally worthless.

Confidence Level Crisis
Just focusing on sky-high risk confidence levels of 99.9% or more prohibits financial institutions from taking risks that are fundamental to their existence. 'Taking risk' is part of the core business of a financial institution. Eliminating risk will therefore kill financial institutions in the long run. One way or the other, we have to deal with this Confidence Level Crisis.

The way out
The way for financial institutions to get out of this risk paradox is to recognize, identify and examine nonlinear and systemic risks, and to structure not only capital but also assets and obligations in such a (dynamic) way that they are financially and economically 'crisis proof'. All this without being blinded by a 'one point' theoretical Confidence Level...

Actuaries, econometricians and economists can help by developing nonlinear interactive asset models that demonstrate how (much) returns, risks and strategies are interrelated in a dynamic economic environment of continuing crises.

This way, boards, management and investment advisory committees are supported in their continuous decision process to add value for all stakeholders and across all assets, obligations and capital.

Calculating small default probabilities on the order of the Planck constant (6.62606957 × 10⁻³⁴ J·s) is useless. Only creating strategies that prevent defaults makes sense.

Let's get more confident! ;-)

- SG-Report: Mind Matters (Forecasting fails)

Dec 3, 2012

Solvency II or Basel III? Model Fallacy

Managing investment models - ALM models in particular - is a professional art. One of the trickiest risk management fallacies when dealing with these models is that they are used for identifying so-called 'bad scenarios', which are then 'hedged away'.

To illustrate what is happening, join me in a short everyday ALM thought experiment...

Before that, I must warn you... this is going to be a long, technical, but hopefully interesting blog. I'll try to keep the discussion at 'high school level'. Stay with me, I promise: it actuarially pays out in the end!

ALM Thought Experiment
  • Testing the Asset Mix
    Suppose the board of our Insurance Company or Pension Fund is testing the current strategic asset mix with the help of an ALM model, in order to find out more about the future risk characteristics of the chosen portfolio.
  • Simulation
    The ALM model runs a 'thousands of scenarios simulation', to find out under which conditions and in which scenarios the 'required return' is met and to test if results are in line with the defined risk appetite.
  • Quantum Asset Return Space
    In order to stay as close to reality as possible, let's assume that the 'Quantum Asset Return Space' in which the asset mix has to deliver its required returns over a fixed chosen duration horizon N consists of: 
    1. 999,900 scenarios with Positive Outcomes ( POs ),
      where the asset returns weigh up to the required return, and 
    2. 100 scenarios with Negative Outcomes ( NOs ),
      where the asset returns fail to weigh up to the required return.
    Choose 'N' virtually anywhere between a fraction of a year and 50 years, in line with your liability duration.

  • Confidence (Base) Rate
    From the above example, we may conclude that the N-year confidence base rate of a positive scenario outcome (in short: assets meet liabilities) is in reality 99.99%, and that the N-year probability of a company default due to a lack of asset returns is in reality 0.01%.
  • From Quantum Space to Reality
    As the strategic asset mix 'performs' in a quantum reality, nobody - no board member or expert - can tell which of the quantum ('potential') scenarios will come true in the next N years or (even) what the exact individual quantum scenarios are.

    Nevertheless, these quantum scenarios all exist in "Quantum Asset Return Space" (QARS) and only one of those quantum scenarios will finally turn out as the one and only 'return reality'.

    Which one...(?), we can only tell after the scenario has manifested itself after N years.
  • Defining the ALM Model
    Now we start defining our ALM Model. Like any model, our ALM model is an approximation of reality (or more specifically: of the above defined 'quantum reality') in which we are forced to make simplifications, like: defining an 'average return', defining 'risk' as standard deviation, or choosing a 'normal' (or other type of) distribution as the basis for drawing scenarios in our ALM's simulation process.
    Therefore our ALM Model is not, and cannot be, perfect.

    Now, because our model isn't perfect, let's assume that our 'high quality' ALM Model has an overall Error Rate of 1% (ER=1%), defined in simplified form as:
    1. The model generates Negative Scenario Outcomes (NSOs) (= required return not met) with an error rate of 1%. In other words: in 1% of the cases, the model generates a positive outcome scenario when it should have generated a negative outcome scenario
    2. The model generates Positive Scenario Outcomes (PSOs) (= required return met) with an error rate of 1%. In other words: in 1% of the cases, the model generates a negative outcome scenario when it should have generated a positive outcome scenario

The Key Question!
Now that we've set up our ALM model, we run a simulation with no matter how many runs. Here is the visual outcome:

As you may notice, the resulting ALM graph tells us more than a billion numbers... At once it's clear that one of the scenarios (the blue one) has a very negative, unwanted outcome.
The investment advisor suggests 'hedging this scenario away'. You, as an actuary, raise the key question:

What is the probability that a Negative Outcome (NO) scenario in the ALM model is indeed truly a negative outcome and not a false outcome due to the fact that the model is not perfect?

With this question, you hit the nail right on the head...
Do you know the answer? Is it exactly 99%, more, or less?

Before reading further, try to answer the question and do not cheat by scrolling down.....

To help prevent you from accidentally reading further, I have inserted a pointful YouTube movie:

Now here is the answer: the probability that any of the NOs (Negative Outcomes) in the ALM study - and not only the very negative blue one - is truly a NO and not in fact a PO (Positive Outcome), and therefore a false NO, is - fasten your seat belts - 0.98%! (no misspelling here!)

So there's a 99.02% (= 100% - 0.98%) probability that any Negative Outcome from our model is wrong. Therefore one must be very cautious and careful in drawing conclusions and formulating risk management actions based on negative scenarios from ALM models in general.

Here's the short Excel-like explanation, which is based on Bayes' Theorem.
You can download the Excel spreadsheet here.
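For those without Excel at hand, the same Bayes computation can be sketched in a few lines of Python (a minimal sketch using the numbers from the thought experiment above):

```python
# Quantum Asset Return Space: 999,900 POs and 100 NOs out of 1,000,000
p_no = 100 / 1_000_000   # true probability of a Negative Outcome (0.01%)
p_po = 1 - p_no          # true probability of a Positive Outcome (99.99%)
er = 0.01                # overall model error rate (1%)

# Probability that the model reports a NO:
# correctly detected true NOs plus false NOs generated from true POs
p_model_no = (1 - er) * p_no + er * p_po

# Bayes' Theorem: P(truly a NO | model reports a NO)
p_true_no = (1 - er) * p_no / p_model_no
print(round(p_true_no * 100, 2))  # 0.98
```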

There is MORE!
Now you might argue that the low probability (0.98%) of finding true Negative Outcomes is due to the high (99.99%) Positive Outcome rate, and that 99.99% is unrealistically much higher than - for instance - the Basel III confidence level of 99.9%. Well..., you're absolutely right. As high positive outcome rates correspond one to one with high confidence levels, here are the results for other positive outcome rates that equal certain well-known (future) standard confidence levels (N := 1 year):

What can we conclude from this graph?
If the relative share of positive outcomes, and therefore the confidence level, rises, the probability that an identified Negative Outcome Scenario is true decreases dramatically fast to zero. To put it in other words:

At high confidence levels, (ALM) models cannot identify negative scenarios anymore!!!

Higher Error Rates
Now keep in mind that we calculated all this with a high-quality error rate of only 1%. What about higher model error rates? Here's the outcome:

As expected, the situation of non-detectable negative scenarios gets worse as the model error rate increases......

U.S. Pension Funds
The 50% Confidence Level is added because a lot of U.S. pension funds are in this confidence area. In this case we find - more or less regardless of the model error rate - a substantial probability (80%-90%) of finding true negative outcome scenarios. The problem here is that it's useless to define actions on individual negative scenarios. The first priority should be to restructure and cut ambition in the current pension agreement, in order to realize a higher confidence level. It's useless to mop the kitchen when your house is flooded with water.....

Model Error Rate Determination
One might argue that the approach in this blog is too theoretical as it's impossible to determine the exact (future) error rate of a model. Yes, it's true that the exact model error rate is hard to determine. However, with help of backtesting the magnitude of the model error rate can be roughly estimated and that's good enough for drawing relevant conclusions.

A General Detectability Equation
The general equation for calculating the Detectability (Rate) of Negative Outcome Scenarios (DNOS), given the model error rate (ER) and a trusted Confidence Level (CL), is:

DNOS = (1 - ER)(1 - CL) / (1 - CL + 2·ER·CL - ER)

So a model error rate of 1%, combined with the Basel III confidence level of 99.9%, results in a low 9.02% [ =(1-0.01)*(1-0.999)/(1-0.999+2*0.01*0.999-0.01) ] detectability of Negative Outcome scenarios.
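The equation and both worked examples can be wrapped in a small helper (a sketch; the function name `dnos` is mine):

```python
def dnos(er: float, cl: float) -> float:
    """Detectability (Rate) of Negative Outcome Scenarios, given the
    model error rate (er) and the trusted confidence level (cl)."""
    return (1 - er) * (1 - cl) / (1 - cl + 2 * er * cl - er)

# Basel III: ER = 1%, CL = 99.9%  ->  9.02% detectability
print(round(dnos(0.01, 0.999) * 100, 2))   # 9.02
# Thought experiment: ER = 1%, CL = 99.99%  ->  0.98% detectability
print(round(dnos(0.01, 0.9999) * 100, 2))  # 0.98
```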

Detectability Rates
Here's a more complete overview of detectability rates:

It would take (impossible?) super high quality model error rates of 0.1% or lower to regain detectability power in our (ALM) models, as is shown in the next table:

Required Model Confidence Level
If we define the Model Confidence Level as MCL = 1 - MER (where MER is the Model Error Rate), the rate of Detectability of Negative Outcome Scenarios as DR = Detectability Rate = DNOS, and CL as the Positive Outcome Scenarios' Confidence Level, we can calculate and visualize the required Model Confidence Levels (MCL) as follows:

From this graph it's clear at a glance that even modest Confidence Levels (>90%), in combination with a modest Detectability Rate of 90%, lead to unrealistic required Model Confidence Levels of around 99% or more. Let's not even discuss the required Model Confidence Levels for Solvency II and/or Basel II/III.
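Solving the DNOS equation for ER gives ER = (1-CL)(1-DR) / ( DR(2CL-1) + 1 - CL ), and hence MCL = 1 - ER (this algebraic inversion is mine, not spelled out in the original text). A short sketch:

```python
def required_mcl(dr: float, cl: float) -> float:
    """Required Model Confidence Level (MCL = 1 - ER) to reach a target
    Detectability Rate dr at confidence level cl.
    Derived by solving DNOS = (1-ER)(1-CL)/(1-CL+2*ER*CL-ER) for ER."""
    er = (1 - cl) * (1 - dr) / (dr * (2 * cl - 1) + (1 - cl))
    return 1 - er

# A modest 90% confidence level with a modest 90% detectability rate
# already demands a model confidence level of almost 99%
print(round(required_mcl(0.9, 0.9) * 100, 2))  # 98.78
```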

  1. Current models lose power
    Because (ALM) models are limited (model error rates of 1%-5%) and confidence levels are increasing (above 99%) due to more severe regulation, models significantly lose power and therefore become useless for detecting true negative outcome scenarios in a simulation. This implies that models lose their significance with respect to adequate risk management, because it's impossible to detect whether any negative outcome scenario is realistic.
  2. Current models not Solvency II and Basel II/III proof
    From (1) we can conclude in general that - despite our sound methods - our models are probably not Solvency II and Basel II/III proof. The first action to take is to get sight of the error rates of our models in high-confidence environments...
  3. New models?
    The alternative and challenge for actuaries and investment modelers is to develop new models with substantial lower model error rates (< 0.1%).

    Key Question: Is that possible?

    If you are inclined to think it is, please keep in mind that human beings have an error rate of 1% and computer programs have an error rate of about 3%.......

Links & Sources: