
Nov 22, 2010

What's that, an actuary? Kamikaze Investors

'Housing' is probably one of the most complex assets and also one of the most interesting.

Wake up...
At the next birthday party, when somebody asks you the regular question 'What's that, an actuary?....', don't answer the obligatory way, but demonstrate your actuarial risk management abilities in an interactive way....

Just ask which of your birthday friends would call themselves a private - non-professional - risky investor?........

After some hesitation and discussion, probably all of them will answer something like: 'No, I wouldn't dare to risk much money; I put most of my savings in a bank account that's as safe as possible.'

Then your next question is: "Who owns a house?"
Now probably more than 60% of your friends will raise their hand......

Congratulations! Now you may congratulate this 60% of your friends with the fact that they are probably bigger risk-taking investors than the average pension fund, because they are most likely (by far) overweighted in the asset category "Housing".

After grasping the point of your little quiz, most of your friends will first laugh, then think, and after a while some of them will ask you what they should do about being a Kamikaze investor.

Now you get to the tricky part of being an actuary:

  1. Never tell anyone what to do,
  2. Just show them the possible scenarios,
  3. Point out and quantify the risks, and
  4. Help them take their own decisions.

House-Pricing
A lot of research has been done on house pricing and risk.

Although there seems to be a positive relationship between interest rates and housing-price growth, housing risk is much more complicated than that.

Housing prices also differ strongly by country, as the next Economist table shows:



And because we actuaries are little Kamikaze investors as well, it's good to know the Economist has developed an interactive application that gives insight into the housing-price development in your country relative to others.

Jul 8, 2009

Swine Flu Counter update 06-07-2009

Want a simple global Swine Flu Counter on your web page?

You may find the old (July 6, 2009) Counter/Calculator Here.
A new counter, based on a more recent model, is already available.
Look at: Swine Flu Counter Update-sept-2009

The (old) counter is based on a 'July 6, 2009 estimation' as described on Actuary-Info. The data have since been updated based on the official, more reliable and accurate WHO reports.



If necessary, counters will be updated again on a regular basis. You'll find the latest data in this XLS spreadsheet.

Install Swine Flu Counter
How to implement this old Swine Flu Counter on your web site?

  • Put the next HTML-script (without the outer quotes) just before the end of the body tag:' <script language="javascript" type="text/javascript" src="http://jos.blogspot.googlepages.com/swine-flu-2009.js"> </script>'

  • Put the next HTML-line (without the outer quotes) where you want the Swine Flu table to appear on your site :
    ' <div id="swineflutable"></div> '

  • Ready!

Remember, you may only install one counter on your website, either the old or the new.

Paradox
The best thing that could and will happen with regard to the original swine flu model and corresponding counter is that they don't turn out to be valid. This way the model and counter will have proven their 'reason for existence', simply by contributing to the necessary awareness and prevention measures to diminish or stop the exponential growth of swine flu infections.

On the contrary, developing but not publishing models or counters would create a lack of warning and attention, and would therefore allow the (exponential) model to come true. This is the inevitable paradox of modeling with or without follow-up actions.

This paradox is the main reason why an 'actuarial advice' should always be presented in a (minimal) "two-way scenario" form:
  • Estimation of results without follow-up actions
  • Estimation of results including advised follow-up actions

Anyway, have fun with your Swine Flu Counter!

Joshua Maggid

ADD July 18, 2009
On July 16, 2009 WHO reports:
  • Further spread of the pandemic, within affected countries and to new countries, is considered inevitable.
  • This assumption is fully backed by experience. The 2009 influenza pandemic has spread internationally with unprecedented speed. In past pandemics, influenza viruses have needed more than six months to spread as widely as the new H1N1 virus has spread in less than six weeks.
  • The increasing number of cases in many countries with sustained community transmission is making it extremely difficult, if not impossible, for countries to try and confirm them through laboratory testing. Moreover, the counting of individual cases is now no longer essential in such countries for monitoring either the level or nature of the risk posed by the pandemic virus or to guide implementation of the most appropriate response measures.
In short: now that H1N1 is really getting important and probably running out of hand, WHO stops reporting.....
Let's see if we can find another source....

ADD July 21, 2009
Wikipedia's 2009 flu pandemic page reports (based on ECDC reports, as WHO reports fail) an accumulated number of 143,652 reported infections and 899 deaths on July 21, 2009. As the WHO has decided not to register the number of infections anymore (as from July 9) and, except for the US, reports are based on confirmed laboratory test results, the actual number of infections will be much higher.

That's why, as long as the actual deaths are in line with the modelled estimated deaths, the 'July 6th exponential model', used as basis for the swine flu counter, still seems realistic and valid!

ADD Sept 06, 2009
The data have structurally changed from exponential to linear.
Take a look at the new counter at: Swine Flu Counter Update-sept-2009

Sep 25, 2011

Compliance: Sample Size

How to set an adequate sample size in case of a compliance check?

This simple question ultimately has a simple answer, but can become a "mer à boire" (an endless undertaking) in case of a 'classic' sample size approach.....

In my last-but-one blog called 'Pisa or Actuarial Compliant?', I already stressed the importance of checking compliance in the actuarial work field.

Compliance is important not only from an actuarial perspective, but also from a core business viewpoint:

Compliance is the main key driver for sustainable business

Minimizing Total Cost by Compliance
A short illustration: we all know that compliance costs are part of Quality Control costs (QC costs) and that the costs of noncompliance (NC costs) increase with the noncompliance rate.

'NC costs' mainly relate to:
  • Penalties or administrative fines of the (legal) regulators
  • Extra cost of complaint handling
  • Client claims
  • Extra administrative cost
  • Cost of legal procedures

Sampling costs - in turn - are a (substantial) part of QC costs.

More in general, it's the art of good-practice compliance management to determine the maximal noncompliance rate that minimizes the total cost of a company.



Although this approach is more or less standard, in practice a company's revenues depend strongly on the level of compliance. In other words: if compliance increases, revenues increase and variable costs decrease.

This implies that introducing 'cost driven compliance management' will - in general - (1) reduce the total cost and (2) mostly make room for additional investments in 'QC costs', to improve compliance and to lower variable and total cost.

In practice you'll probably have to calibrate (together with other QC investment costs) to find the optimal cost (investment) level that minimizes the total cost as a percentage of the revenues.
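To illustrate this cost trade-off numerically, here is a toy sketch only; the cost curves below are invented for the example, not taken from any real company:

```python
import numpy as np

# Toy model: QC cost grows as the tolerated noncompliance rate q is pushed down;
# NC cost (fines, claims, complaint handling) grows with q itself.
q = np.linspace(0.005, 0.20, 400)   # candidate noncompliance rates (0.5% - 20%)
qc_cost = 2.0 / q                   # hypothetical quality control cost curve
nc_cost = 800.0 * q                 # hypothetical noncompliance cost curve
total_cost = qc_cost + nc_cost

q_opt = q[np.argmin(total_cost)]
print(f"Cost-minimizing noncompliance rate: {q_opt:.1%}")  # -> 5.0% for these curves
```

With these invented curves the optimum happens to lie at a 5% noncompliance rate; in practice both curves have to be estimated and calibrated, as noted above.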


As is clear, modeling this kind of stuff is no work for amateurs. It's real risk management craftwork. After all, the effect of cost investments is not certain and depends on all kinds of probabilities and circumstances that need to be carefully modeled and calibrated.

From this meta perspective, let's descend to the next down-to-earth 'real life example'.

'Compliance Check' Example
As you probably know, pension advisors have to be compliant and meet strict federal, state and local regulations.

On behalf of the employee, the sponsoring employer as well as the insurer or pension fund all have a strong interest that the involved 'Pension Advisor' actually is, acts and remains compliant.

PensionAdvice
A professional local pension advisor firm, 'PensionAdvice' (fictitious name), wants 'compliance' to become a 'calling card' for their company. Target is that 'compliance' will become a competitive advantage over its rivals.

You, as an actuary, are asked to advise on the issue of how to verify PensionAdvice's compliance....... What to do?


  • Step 1 : Compliance Definition
    First you ask the board of PensionAdvice what compliance means.
    After several discussions, compliance is - in short - defined as:

    1. Compliance Quality
      Meeting the regulator's (12-step) legal compliance requirements
      ('Quality Advice Second Pillar Pension')

    2. Compliance Quantity
      A 100% compliance target for PensionAdvice's portfolio, with a 5% non-compliance rate (error rate) as a maximum, on the basis of a 95% confidence level.

    The board has no idea about the (f)actual level of compliance. Compliance was - until now - not addressed on a more detailed employer dossier level.
    Therefore you decide to start with a simple sample approach.

  • Step 2 : Define Sample Size
    In order to define the right sample size, portfolio size is important.
    After a quick call, PensionAdvice gives you a rough estimate of their portfolio: around 2,500 employer pension dossiers.

    You pick up your 'sample table spreadsheet' and are confronted with the first serious issue.
    An adequate sample (95% confidence level, 5% error tolerance) would require a minimum of 334 samples. With around 10-20 hours of research per dossier, the cost of a sampling project of this size would get way out of hand and become unacceptable, as it would raise the total cost of PensionAdvice (check this before you conclude so!).

    Lowering the confidence level doesn't solve the problem either. Sample sizes of 100 and more are still too costly, and confidence levels of less than 95% are of no value in relation to the client's ambition (compliance = calling card).
    The same goes for a higher - more than 5% - 'error tolerance'.....

    By the way, in case of samples from small populations, things don't turn out better. To achieve relevant confidence levels (>95%) and error tolerances (<5%), samples must have a substantial size in relation to the population size.


    You can check all this out 'live' in the next spreadsheet and modify the sampling conditions to your own needs. If you don't know the variability of the population, use a 'safe' variability of 50%. Click 'Sample Size II' for modeling the sample size of PensionAdvice.
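For reference, this classic sample size follows from Cochran's formula with a finite population correction. A minimal sketch, assuming the 50% variability, 5% error tolerance and 2,500-dossier portfolio mentioned above:

```python
import math

def sample_size(N, z=1.96, error=0.05, p=0.5):
    """Classic sample size: Cochran's formula with finite population correction.
    N = population size, z = z-score (1.96 ~ 95% confidence),
    error = tolerated margin of error, p = assumed variability."""
    n0 = z**2 * p * (1 - p) / error**2         # infinite-population sample size
    return math.ceil(n0 / (1 + (n0 - 1) / N))  # correction for finite population N

print(sample_size(2500))  # -> 334, the minimum sample size quoted above
```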



  • Step 3: Use Bayesian Sample Model
    The above standard approach of sampling could deliver smaller samples if we were sure of a low variability.

    Unfortunately we (often) do not know the variability upfront.

    Here a method based on efficient sampling and Bayesian statistics, as clearly described by Matthew Leitch, comes to our help.

    A more simplified version of Leitch's approach is based on Laplace's famous 'Rule of Succession', a classic application of the beta distribution (Technical explanation (click)).

    The interesting aspects of this method are:
    1. Prior (weak or small) samples, or beliefs about the true error rate and confidence levels, can be added to the model in the form of an (artificial) additional (pre)sample.

    2. As the sample size increases, it becomes clear whether the defined confidence level will be met and whether adding more samples is appropriate and/or cost effective.

    This way unnecessary samples are avoided, sampling becomes as cost effective as possible, and auditor and client can dynamically develop a grip on the distribution. Enough talk, let's demonstrate how this works.

Sample Demonstration
The next sample demonstration is contained in an Excel spreadsheet that you can download, and is presented in simplified form at the end of this blog. You can modify this spreadsheet (online!) to your own needs and use it for real life compliance sampling. Use it with care in case of small populations (n<100).

A. Check the prior beliefs of management
Management estimates the actual noncompliance rate at 8%, with 90% confidence that the actual noncompliance rate is 8% or less:



If management has no idea at all, or if you'd rather not include management's opinion, simply set both (noncompliance rate and confidence) at 50% (= indifferent) in your model.

B. Define Management Objectives
After some discussion, management defines the (target) maximum acceptable noncompliance rate at 5%, with a 95% confidence level (= CL).



C. Start Sampling
Before you start sampling, please notice how the prior beliefs of management are rendered into a fictitious sample (test number = 0) in the model:
  • In this case the prior beliefs match a fictitious sample of size 27 with zero noncompliance observations.
  • This fictitious sample corresponds to a confidence level of 76% on the basis of a maximum (population) noncompliance rate of 5%.
[If you think this rendering is too optimistic, you can change the fictitious number of noncompliance observations from zero into 1, 2 or another number (examine in the spreadsheet what happens and play around).]

To lift the 76% confidence level to 95%, it would take an additional sample of size 31 with zero noncompliance outcomes (you can check this in the spreadsheet).
As sampling is expensive, your employee Jos runs a first test (test 1) with a sample size of 10 and zero noncompliance outcomes. This looks promising!
The cumulative confidence level has risen from 76% to over 85%.
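A minimal sketch of the 'Rule of Succession' arithmetic behind these numbers, assuming a uniform Beta(1,1) prior on top of the fictitious pre-sample (the actual spreadsheet may differ in details):

```python
from scipy.stats import beta

def confidence(n, nc, max_nc_rate=0.05):
    """P(population noncompliance rate <= max_nc_rate) after n samples with
    nc noncompliant outcomes; by Laplace's Rule of Succession the posterior
    distribution of the noncompliance rate is Beta(nc + 1, n - nc + 1)."""
    return beta.cdf(max_nc_rate, nc + 1, n - nc + 1)

print(confidence(27, 0))       # fictitious pre-sample only:        ~76%
print(confidence(27 + 10, 0))  # after test 1 (10 clean samples):   ~86%
print(confidence(27 + 31, 0))  # 31 more clean samples lift it to:  ~95%
print(confidence(27 + 20, 1))  # one noncompliant outcome drops it: ~70%
```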



You decide to take another limited sample of size 10. Unfortunately this sample contains one noncompliant outcome. As a result, the cumulative confidence level drops to almost 70%, and another sample of size 45 with zero noncompliant outcomes is necessary to reach the desired 95% confidence level.

You decide to go on, and after a few more tests you finally arrive at the intended 95% cumulative confidence level. Mission succeeded!



The great advantage of this incremental sampling method is that if noncompliance shows up at an early stage, you can:
  • stop sampling, without having made major sampling costs
  • improve compliance of the population by means of additional measures, on the basis of the learnings from the noncompliant outcomes
  • start sampling again (from the start)

If - for example - test 1 would have had 3 noncompliant outcomes instead of zero, it would take an additional test of size 115 with zero noncompliant outcomes to achieve a 95% confidence level. It's clear that in this case it's better to first learn from the 3 noncompliant outcomes - what's wrong or what needs improvement - than to go on with expensive sampling against your better judgment.



D. Conclusions
On the basis of a prior belief that - with 90% confidence - the population is 8% or less noncompliant, we can now conclude that after an additional total sample of size 65, PensionAdvice's noncompliance rate is 5% or less with a 95% confidence level.

If we want to be 95% sure without 'prior belief', we'll have to take an additional sample of size 27 with zero noncompliant outcomes.

E. Check out

Check out and download the next spreadsheet. Modify the sampling conditions to your own needs, or download the Excel spreadsheet.


Finally
My apologies for this much too long blog. I hope I've succeeded in keeping your attention....


Related links / Resources

I. Download official Maggid Excel spreadsheets:
- Dynamic Compliance Sampling (2011)
- Small Sample Size Calculator

II. Related links/ Sources:
- 'Efficient Sampling' spreadsheet by Matthew Leitch
- What Is The Right Sample Size For A Survey?
- Sample Size
- Epidemiology
- Probability of adverse events that have not yet occurred
- Progressive Sampling (Pdf)
- The True Cost of Compliance
- Bayesian modeling (ppt)

Apr 29, 2012

Why Life Cycle Funds are Second Best

Life Cycle Funds (LCFs) are seen as the ideal solution for pension planning. Unfortunately they aren't..... They're second best....

Pension fund solutions (PFs) are far superior to LCFs, as this blog will show with regard to the performance of a pension plan.

Life Cycle
A life cycle approach presumes that, while you're young and still have a long time before retirement, you can afford to invest more than an average pension fund in risky assets like stocks, with an assumed higher long-term return than bonds.

As you come closer to retirement age, you'll have to be more careful and decrease your stock portfolio incrementally to zero, in favor of (assumed) more solid fixed income asset classes like government bonds.

A well known classic life cycle investment scheme is "100-Age", where the investment in stocks depends on your age: percentage stocks = 100 - actual age.
E.g.: if you're 30 years old, your portfolio consists of 70% stocks and 30% bonds.

Here's what the average return of a '100-Age' life cycle investment looks like when you start your pension plan at age 30 and assume a long-term 7% average yearly return on stocks and 4% on bonds.
The return of this life cycle fund is compared to a pension fund with a constant 50% in stocks.
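A minimal sketch of how these average yearly portfolio returns are composed, assuming only the 7%/4% returns and the '100-Age' weights mentioned above:

```python
def yearly_return(age, scheme="100-age", stocks=0.07, bonds=0.04):
    """Blended one-year portfolio return at a given age."""
    if scheme == "100-age":
        w = max(0.0, min(1.0, (100 - age) / 100))  # '100-Age' stock weight
    else:
        w = 0.50                                   # constant-mix pension fund
    return w * stocks + (1 - w) * bonds

for age in (30, 50, 64):
    print(age, f"LC: {yearly_return(age):.2%}", f"PF: {yearly_return(age, 'pf'):.2%}")
# age 30: LC 6.10% vs PF 5.50%; equal at age 50; from age 51 on the LC
# return falls below the pension fund's constant 5.50%
```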


The key question however is: are younger people indeed risk minded and older people risk averse?

As so often in life, and also in this case, what would be logical to expect turns out to be a little bit more complicated in practice....

Misunderstanding: younger people have a high risk attitude
Research by Bonsang et al. (2011) of the University of Maastricht and Netspar shows that on average 25% of the 50+ generation is willing to take risk.
The research report shows evidence that the change in risk attitude at older age is driven by 'cognitive decline': about 40 to 50% of the change in risk attitude can be attributed to cognitive aging.

Unfortunately other recent research also shows that only 30% of people under age 35 say they're willing to take substantial or above-average risks in their portfolios (source: Investment Company Institute).



This implies that - although they would theoretically be better off in the long run - younger people will certainly not put all their eggs in one basket by investing all or most of their money in stocks.

Pension Fund Investment Horizon
In contrast to individual pension plan members, a pension fund has a long-term perspective of more than 20-50 years, as new members (employees) keep joining the pension fund in the future. Therefore a pension fund can keep its strategic allocation to stocks relatively constant over time instead of decreasing it.


This implies that in the long term a pension fund has an advantage (longer horizon) over a life cycle fund. Let's try to find the order of magnitude of this difference.


Comparing a Life Cycle fund with a Pension Fund
First of all, we have to take into account that younger people will not over invest in stocks.

Let's assume:
  • A 30 year old 'pension plan starter', retiring at age 65
  • Contribution level (€, $, £, ¥): 1,000 a year
  • A long term 7% average yearly return on stocks and 4% on bonds
  • Life Cycle Investment scheme
    A modest 50% stocks, with a yearly 2% decrease as from age 50
  • Pension Fund Investment Scheme
    A constant 50% investment in stocks (and 50% in bonds)
  • Inflation 3%, Pension and Contribution indexation: 3%

This leads to the following yearly returns for these portfolios:



To find out the overall difference in return between the LCF and PF schemes, we calculate the Return on Investment (ROI) of both investment schemes with help of the Excel Pension Calculator (see the related links below):


The outcome looks like this:

As you can see, the ROI outcomes (left axis) on the investments (yearly contributions) for 'dying ages' 65 to 69 are negative, as the cumulative paid pensions (compared to your contributions) didn't (yet) result in a positive balance. To put it another way: if you die between age 65 and 69, you died too early to have a positive return on your paid contributions.

Overperformance
The right axis shows the difference between the LC ROIs and the PF ROIs.
As you may notice, the pension fund has a structural yearly overperformance of more than 0.3% and an average overperformance between 0.4% and 0.5% per year.

Overperformance expressed in pension benefits
Expressed in terms of yearly pensions the differences are as follows:


Investment Scheme     Pension at 65   Relative
LC 55year -2% p/y     11,673          83%
LC '100-Age'          12,304          93%
PF 50% stocks         13,172          100%


For a 40 year old pension plan starter, the differences are:

Investment Scheme     Pension at 65   Relative
LC 55year -2% p/y     5,359           82%
LC '100-Age'          5,578           92%
PF 50% stocks         6,040           100%


Conclusion
Investing in life cycle funds ends up in a 7% to 18% lower pension than investing in a pension fund with a constant 50% investment in stocks.


So... be wise and choose a pension fund for your investment if you can!


Aftermath
Of course, every pension vehicle has its pros and cons ... So do Life Cycle AND Pension Funds.....



Related Links/Sources
- CNNMoney: The young and the riskless shun the market (2011)
- Cognitive Aging and Risk Attitude (2011)
- America's Commitment to Retirement Security: Investor Attitudes and Actions (2012)
- 'Saving/investing over the life cycle and the role of pension funds' (2007)
- Excel Pension Calculator Blog
- Benny AND Boone Comic Strips
- Study: Public employee pensions a bargain (2011)

Jun 28, 2013

Confidence Level Crisis

When you're - like me - a born professional optimist, but nevertheless sometimes worry about the unavoidable misery in the world, you ask yourself this question:

Why does God not act? 

Think about this question and try to answer it, before reading any further..



The answer to this question is very simple:

God does not act because he's conscious of everything  

The moral of this anecdote is that when you're fully aware of all risks and their possible impact, chances are high that you'll not be able to take any well-argued decision at all, as any decision will eventually fail when your objective is to rule out all possible risks.

You see, a question has come up that we can't agree on,
perhaps because we've read too many books.


Bertolt Brecht, Life of Galileo (Leben des Galilei)

On the other hand, if you're not risk-conscious at all regarding a decision to be taken, most probably you'll take the wrong decision.

'Mathematically Confident'
So this leaves us with the inevitable conclusion that in our eagerness to take risk-based decisions, a reasoned decision is nothing more than the somehow optimized outcome of a weighted sum of a limited number of subjectively perceived risks. 'Perceived' and 'weighted', thanks to the fact that we're unaware of certain risks, or 'filter', 'manipulate' or 'model' risks in such a way that we can be 'mathematically confident'. In other words, we've become victims of the "My calculator tells me I'm right!" effect.

Risk Consciousness Fallacy
This way of taking risk-based decisions has the 'advantage' that practice will prove it's never quite right, implying you can gradually 'adjust', 'improve' or 'optimize' your decision model endlessly.
Endlessly, up to the point where you've included so many new or adjusted risk sources and possible impacts that the degrees of freedom for taking a 'confident' decision have become zero.


Risk & Investment Management Crisis
After a number of crises - in particular the 2008 systemic crisis - we've come to the point where we realize:
  • There are many more types of risk than we thought there would be
  • Most types of risk are nonlinear instead of linear
  • New risks are constantly 'born'
  • We'll never be able to identify or significantly control every possible kind of risk
  • Our current (outdated) investment models can't capture nonlinear risk
  • Most (investment) risks depend heavily on political measures and policy
  • Investment risks are more artificially and politically based and driven than statistical
  • Market values are 'manipulable' and therefore 'artificial'
  • Risk-free rates are volatile, unsure and decreasing
  • Traditional mathematically calculated 'confidence levels' fall short (model risk)
  • As confidence levels rise, confidence intervals and Value at Risk increase

Fallacy
One of the most basic implicit fallacies in investment modeling is that mathematical confidence levels based on historical data are seen as 'trusted' confidence levels for future projections. The key point is that a confidence level (itself) is a conditional (Bayesian) probability.

Let's illustrate this in short.
A calculated model confidence level (CL) is only valid under the 'condition' that the 'risk structure' (e.g. mean, standard deviation, higher moments, etc.) of the analysed historical data set (H) used for modeling is also valid in the future (F). This implies that our traditional confidence level is in fact a conditional probability: P(confidence level = x% | F=H).

Example
  • The (increasing) Basel III confidence level is set at P(x ∈ VaR confidence interval | F=H) = 99.9%, in accordance with a one-year default level of 0.1% (= 1 - 99.9%).
  • Now roughly estimate the probability P(F=H) that the risk structure of the historical (asset classes and obligations) data set (H) used for the Basel III calculations will also be 100% valid in the near future (F).
  • Let's assume you rate this probability - given the enormous shifts in our economy, and optimistically assuming independence - at P(F=H) = 95% for the next year.
  • The actual unconditional confidence level now becomes P(x ∈ VaR confidence interval) = P(x ∈ VaR confidence interval | F=H) × P(F=H) = 99.9% × 95% = 94.905%
Although a lot of remarks could be made about whether the above method is scientifically 100% correct, one thing is sure: traditional risk methods in combination with sky-high confidence levels fall short in times of economic shifts (currency wars, economic stagnation, etc.). Or in other words:

Unconditional Financial Institutions Confidence Levels will be in line with our own poor economic forecast confidence levels. 



A detailed Societe Generale (SG) report tells us that not only economic forecasts like GDP growth, but also stocks, cannot be forecasted by analysts.


Over the period 2000-2006, the US average 24-month forecast error is 93% (12-month: 47%). With an average 24-month forecast error of 95% (12-month: 43%), Europe doesn't do any better. Forecasts with this scale of error are totally worthless.

Confidence Level Crisis
Just focusing on sky-high risk confidence levels of 99.9% or more prohibits financial institutions from taking the risks that are fundamental to their existence. 'Taking risk' is part of the core business of a financial institution. Elimination of risk will therefore kill financial institutions in the long run. One way or the other, we have to deal with this confidence level crisis.

The way out
The way for financial institutions to get out of this risk paradox is to recognize, identify and examine nonlinear and systemic risks, and to structure not only capital, but also assets and obligations, in such a (dynamic) way that they are financially and economically 'crisis proof'. All this without being blinded by a 'one point' theoretical confidence level.

Actuaries, econometricians and economists can help by developing nonlinear interactive asset models that demonstrate how (much) returns, risks and strategies are interrelated in a dynamic economic environment of continuing crises.

This way boards, management and investment advisory committees are supported in their continuous decision process to add value to all stakeholders and across all assets, obligations and capital.

Calculating small default probabilities on the order of the Planck constant (6.62606957 × 10⁻³⁴ J·s) is useless. Only creating strategies that prevent defaults makes sense.

Let's get more confident! ;-)

Sources/Links
- SG-Report: Mind Matters (Forecasting fails)
- Are Men Overconfident Users?

Oct 23, 2022

Why VaR fails and actuaries can do better

Perhaps the most important challenge of an actuary is to develop and train the capability to explain complex matters in a simple way. One of the best examples of practicing this 'complexity reduction ability' has been given by David Einhorn, president of Greenlight Capital. In a nutshell David explains with a simple example why VaR models fail. Take a look at the next excerpt of David's interesting article in Point-Counterpoint.

Why VaR fails
A risk manager’s job is to worry about whether the bank is putting itself at risk in unusual times - or, in statistical terms, in the tails of the distribution. Yet, VaR ignores what happens in the tails. It specifically cuts them off. A 99% VaR calculation does not evaluate what happens in the last 1%. This, in my view, makes VaR relatively useless as a risk management tool and potentially catastrophic when its use creates a false sense of security among senior managers and watchdogs.
VaR is like an airbag that works all the time, except when you have a car accident
By ignoring the tails, VaR creates an incentive to take excessive but remote risks.
Example
Consider an investment in a coin flip. If you bet $100 on tails at even money, your VaR to a 99% threshold is $100, as you will lose that amount 50% of the time, which obviously is within the threshold. In this case, the VaR will equal the maximum loss.

Compare that to a bet where you offer 127 to 1 odds on $100 that heads won’t come up seven times in a row. You will win more than 99.2% of the time, which exceeds the 99% threshold. As a result, your 99% VaR is zero, even though you are exposed to a possible $12,700 loss.

In other words, an investment bank wouldn’t have to put up any capital to make this bet. The math whizzers will say it is more complicated than that, but this is the idea. Now we understand why investment banks held enormous portfolios of “super-senior triple A-rated” whatever. These securities had very small returns. However, the risk models said they had trivial VaR, because the possibility of credit loss was calculated to be beyond the VaR threshold. This meant that holding them required only a trivial amount of capital, and a small return over a trivial capital can generate an almost infinite revenue-to-equity ratio. VaR-driven risk management encouraged accepting a lot of bets that amounted to accepting the risk that heads wouldn’t come up seven times in a row. In the current crisis, it has turned out that the unlucky outcome was far more likely than the backtested models predicted. What is worse, the various supposedly remote risks that required trivial capital are highly correlated; you don’t just lose on one bad bet in this environment, you lose on many of them for the same reason. This is why in recent periods the investment banks had quarterly write-downs that were many times the firm-wide modelled VaR.
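Einhorn's arithmetic is easy to verify; here is a small sketch of both bets, with stakes and odds as in his example:

```python
import numpy as np

def var_quantile(losses, probs, level=0.99):
    """Loss quantile at the given confidence level (discrete distribution)."""
    order = np.argsort(losses)
    cum = np.cumsum(np.array(probs, dtype=float)[order])
    return np.array(losses, dtype=float)[order][np.searchsorted(cum, level)]

# Bet 1: $100 on tails at even money -> lose $100 with probability 50%.
print(var_quantile([100, -100], [0.5, 0.5]))              # -> 100.0

# Bet 2: offer 127-to-1 odds on $100 that heads won't come up 7 times in a row.
p_lose = 0.5 ** 7                                         # 1/128 ~ 0.78% < 1%
print(var_quantile([12700, -100], [p_lose, 1 - p_lose]))  # -> -100.0: the 99%
# quantile is a gain, so reported VaR is zero although $12,700 is at stake
```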

The Real Risk Issues
What, besides the 'art of simple communication', can we actuaries learn from David Einhorn? What David essentially tries to tell us is that we should focus on the real risk management issues that are in the x% tail, and not on the other (100-x)%. Of course we're inclined to agree with David. But are we actuaries truly focusing on the 'right' risks in the tail? I'm afraid the answer to this question is most often: no! Let's look at a simple example that illustrates how we are (biased) focusing on the wrong side of the VaR curve.

Example Longevity
For years (decades) now, longevity risk has been structurally underestimated. Yes, undoubtedly we have learned some of our lessons. Today's longevity calculations are not (anymore) just based on simple straight-on mortality observations of the past. Nevertheless, in our search to grasp, analyze and explain the continuous life span increase, we've got caught in an interesting but dangerous habit of examining more and more interesting details that might explain the variance of future developments in mor(t)ality rates. As 'smart' longevity actuaries and experts, we consider a lot of sophisticated additional elements in our projections and calculations. Just a small inventory of actuarial longevity refinement:
  • Difference in mortality rates: Gender, Marital or Social status, Income or Health related mortality rates
  • Size: Standard deviation, Group-, Portfolio-size
  • Selection effects, Enhanced annuities
  • Extrapolation: Generation tables, longitudinal effects, Autocorrelation, 'Heat Maps'
X-Tails
In our increasing enthusiasm to capture the longevity monster, we got engrossed in our work. As experienced actuaries we know the devil is always in the De-Tails; the question however is: in which details? We all know perfectly well that probably the most essential triggers for future longevity risk cannot be found in our data. These triggers depend on the effect of new developments like :

It's clear that investigating and modeling the soft risk indicators of extreme longevity is no longer a luxury, as an exploding increase in lifespan of 10-20% in the coming decades seems not unlikely. By stretching our actuarial research to the medical arena, we would be able to develop new, (more) future- and shock-proof longevity models and stress tests. Regrettably, we don't like to skate on thin ice.....

Ostrich Management
If we actuaries would take longevity and our profession as 'Risk Manager' more seriously, we would warn the world about the globally estimated (financial) impact of these medical developments on pension and health topics. We would advise on which measures to take in order to absorb and manage this future risk.

Instead of taking appropriate actions, we hide in the dark, maintaining our belief in Fairy-Tails. As unworldly savants, we joyfully keep our eyes on the research of relatively small variances in longevity, while neglecting the serious mega risks ahead of us. This way of Ostrich Management is a worrying threat to the actuarial profession. As we are aware of these kinds of (medical) future risks, not including or disclaiming them in our models and advice could even have a major liability impact.

In order to prevent serious global loss, society expects actuaries to estimate and advise on risk, instead of explaining afterward what, why and how things went wrong, what we 'have learned' and what we 'could or should' have done. This way of denying reality reminds me of an amusing Jewish story of the Lost Key...

The lost Key
One early morning, just before dawn, as the folks were on their way to the synagogue for the Shaharit (early morning prayer), they noticed Herscheleh under the lamp post, circling the post and scanning the ground. “Herschel,” said the rabbi, “what on earth are you doing here this time of the morning?” “I lost my key,” replied Herscheleh. “Where did you lose it?” inquired the rabbi. “There,” said Herscheleh, pointing into the darkness away from the light of the lamp post. “So why are you looking for your key here if you lost it there?” persisted the puzzled rabbi. “Because the light is here, Rabbi, not there,” replied Herschel with a smug smile.





Let's conclude with a quote that - just as this blog - probably didn't help either:

Risk is not always apparent,
but its invisibility is no longer an excuse for ignoring it.

-- Bankers Trust on risk management, 1995 --


Jun 9, 2012

Default Risk at Risk

What's the default rate of Europe?
Let's try to answer this question by examining the (spread on) 10-year government bonds of different European countries.


Some simple observations:
  • A Diverging Europe
    The above chart clearly shows that EU-country interest rates are diverging. The 'spread' between relatively financially healthy countries and their weaker brothers is increasing.

  • A strong EU Base?
    The key (rhetorical) question is whether the low interest rates of countries like Denmark, Sweden and Germany are the result of their strong economic performance, or the effect of the fact that other EU countries are in real trouble...
     
  • Rewarding Debt
    The current real interest rate of relatively 'healthy' countries (interest rates less than 2%) is negative at long-term inflation levels between 2% and 4%.
    Negative real interest rates imply a non-sustainable, debt-rewarding economic system for governments and banks. Perhaps most important: in a negative real rate economy, financial institutions like pension funds lose their rationale for existence!!

  • Risk Free Rate?
    Also remarkable is that these low interest rates are far below what was once qualified as the risk free rate (3%-6%, whatever.....)


Risk Free Rate
In order to be able to calculate a country's default probability, we need to estimate the so-called 'risk free rate'. As I've illustrated earlier (how to catch risk), the idea of a 'risk free rate' is an illusion:

Every asset has some kind of risk


Relative Risk
However, risk can be defined relatively, from one country to another. In order to do so, let's analyze a more worldwide picture (table below) of 10-year bond rates on June 1, 2012.

From this table we may conclude that the best 'risk free' country 10-year bond rate is the 0.55% Swiss rate.

As we know that even this rate is not completely free of risk, let's not settle for the traditional mistake of 'one point estimates', but calculate a country's default risk on the basis of different risk free rate levels, varying between 0% and 1%.

Calculating Country Default Risk
A country's semiannual default probability (dh) can be calculated from a country's 10-year bond rate (semiannually paid coupon rate ch) and a risk free rate (semiannual rate rh) on the basis of the next relationship:

(1 - dh) × (1 + ch) = 1 + rh

leading to:

dh = (ch - rh) / (1 + ch)

Expressed in the yearly coupon rate (c = 2·ch) and yearly risk free rate (r = 2·rh):

dh = (c - r) / (2 + c)

Finally resulting in a formula (1) for the one-year default risk (d), with d = 1 - (1 - dh)²:

(1)   d = 1 - ((2 + r) / (2 + c))²

Country - 10Y Bond (%)
Greece 30.83
Pakistan 13.27
Brazil 12.55
Portugal 12.03
Hungary 8.71
India 8.5
Ireland 8.21
South Africa 8.2
Colombia 7.6
Peru 6.76
Spain 6.56
Indonesia 6.51
Mexico 6.04
Russia 6
Italy 5.92
Poland 5.45
Israel 4.46
Thailand 3.78
South Korea 3.69
Malaysia 3.55
New Zealand 3.54
China 3.38
Czech Republic 3.27
Belgium 2.94
Australia 2.9
Norway 2.38
France 2.36
Austria 2.12
Canada 1.76
Netherlands 1.61
United States 1.58
United Kingdom 1.57
Finland 1.49
Singapore 1.46
Sweden 1.29
Germany 1.2
Denmark 1.03
Hong Kong 1
Japan 0.82
Switzerland 0.55

Now, let's calculate the default risks for the top-5 worrisome EU countries given a risk free rate of 0%:

                         Greece   Portugal   Ireland   Spain   Italy
10Y Bonds                30.8%    12.0%      8.2%      6.6%    5.9%
1Y Default Risk, r=0%    24.9%    11.0%      7.7%      6.3%    5.7%
1Y Default Risk, r=1%    24.2%    10.1%      6.8%      5.3%    4.7%

As is clear from this table, in practice there's no substantial difference in impact between a 0% and a 1% risk free rate with regard to calculating a one-year default rate.
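A minimal sketch of formula (1), reproducing the table above:

```python
def default_rate(c, r=0.0):
    """One-year default probability from a country's 10Y bond coupon rate c
    and risk free rate r (both yearly, semiannually paid) -- formula (1)."""
    return 1 - ((2 + r) / (2 + c)) ** 2

print(f"{default_rate(0.308):.1%}")        # Greece at r=0%   -> 24.9%
print(f"{default_rate(0.308, 0.01):.1%}")  # Greece at r=1%   -> 24.2%
print(f"{default_rate(0.120):.1%}")        # Portugal at r=0% -> 11.0%
```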

This helps us to define a really simple rule of thumb to translate a 10Y bond rate (c) into a 1-year default rate (d) at a 0% risk free rate level:

d ≈ c - c²/2

Examples
  • 10Y Bond rate = 30% = 0.3
    d = 0.3 - 0.3×0.3/2 = 0.3 - 0.045 = 0.255 ≈ 25%

  • 10Y Bond rate = 10% = 0.1
    d = 0.1 - 0.1×0.1/2 = 0.1 - 0.005 = 0.095 ≈ 9.5%

  • Higher risk free rates (r>0%)
    At higher than 0% risk free rates, simply subtract the risk free rate from the default rate to find the default rate at that higher risk free rate.
    Example: risk free rate = r = 1%, 10Y Bond rate = 30%: d ≈ 25% - 1% ≈ 24%
     
  • Compare countries' relative default rates
    10Y Bond Rate Ireland = 8.21%
    10Y Bond Rate Greece = 30.8%
    Probability (d) that Greece defaults, relative to Ireland:
    d ≈ 0.31 - 0.5×0.31² - 0.08 ≈ 0.18 ≈ 18% (more exact, formula (1) with c=30.8% and r=8.21%: 18.62%)

Of course we have to realize that all this hocus-pocus 'default math' is only based on strongly artificially managed market perceptions.... Probably the real default rates of Greece are much higher than 25%. In other words:

                               Default Risks are at Risk

N-year default probability
For those of you who still believe that financial Europe will survive, let's calculate default probabilities for more than one year. In formula form, the N-year default probability (dN) can be defined as: dN = 1 - (1-d)^N. For example, at d = 25%, the 5-year default probability becomes 1 - 0.75⁵ ≈ 76%.

Conclusion
There's no hope for a Greek Euro-survival. The main problem is that even if Greece's debts were covered by the stronger EU countries, Greece is not in a position to realize a financially stable and positive economy.

Other financially weak and 'temporarily more or less out of the spotlight' countries like Portugal, Ireland and Spain will follow. No matter how default rates develop, it's an illusion to think that Germany is financially able to carry Europe through this crisis. Perhaps it's time to introduce country-linked euros, like the DE-Euro.......




Related Links
- Download Excel Spreadsheet used for this blog
- On line Bond Default Probability Calculator
- Greece’s bond exchange
- Actual 10Y Government Bonds (all countries)
- The Greek debt crisis and the hypocrisy of the EU bureaucrats (2010)

Dec 21, 2014

Actuarial Readability

As an actuary, accountant or financial consultant, deep knowledge, expert skills and experience are key to writing an interesting article, paper or advice.

However, no matter how much of an expert you are, in the end you're only as good as your ability to get your message across to your audience.

The art of the expert is to reduce the complexity of his or her research to simple text that the audience understands.

In practice this implies that the expert will have to measure the readability of his papers before publishing.

The two most important issues to tackle are 'readability' and 'text-level'.

Although there are many sorts of tests, both topics are simply covered by the so-called Flesch-Kincaid Readability Test.

Let's take a look at the two simple formulas of this test:



Flesch-Kincaid Readability Test



Flesch Reading Ease Score

FRES = 206.835 – (1.015 x ASL) – (84.6 x ASW)



Flesch-Kincaid Grade Level

FKGL = (0.39 x ASL) + (11.8 x ASW) – 15.59


With:

ASL = average sentence length
(number of words divided by the number of sentences)

ASW = average number of syllables per word
(number of syllables divided by number of words)
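Both formulas are easy to script. A minimal sketch that works from raw counts (the counts below are those of the SOA article tested further on, and the sketch reproduces its 48.1 Reading Ease score):

```python
def readability(words, sentences, syllables):
    """Flesch Reading Ease (FRES) and Flesch-Kincaid Grade Level (FKGL)."""
    asl = words / sentences    # average sentence length
    asw = syllables / words    # average number of syllables per word
    fres = 206.835 - 1.015 * asl - 84.6 * asw
    fkgl = 0.39 * asl + 11.8 * asw - 15.59
    return fres, fkgl

# Text statistics of 'The Best Model Doesn't Win' (see below):
fres, fkgl = readability(words=1495, sentences=98, syllables=2531)
print(f"FRES = {fres:.1f}, FKGL = {fkgl:.1f}")  # FRES = 48.1, FKGL = 10.3
```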


Texts with a FRES-score of 90-100 are easily understandable by an average 5th grader and scores between 0 and 30 are best understood by college graduates.

Some examples of readability index scores of magazines:
- Reader's Digest Magazine: FRES = 65
- Time magazine: FRES = 52
- Harvard Law Review: FRES = 30

The FRES-test has become a U.S. governmental standard. Many government agencies require documents or forms to meet specific readability levels. Most states require insurance forms to score 40-50 on the test.


Where to test your documents?

Besides matching the FRES and FKGL scores in your document, as a guideline try to establish the next English text characteristics:
  • Average sentence length 15-20 words, 25-33 syllables and 75-100 characters.
  • Characters per word: < 7
  • Syllables per word: 1.5 - 2.0
  • Words per sentence: 15 - 20

This blog text resulted in scores:
- Flesch-Kincaid Reading Ease 64.7
- Flesch-Kincaid Grade Level 7.2
- Characters per Word 4.4
- Syllables per Word 1.5
- Words per Sentence 11.8


Example
As an example we test the readability of one of the articles of the Investment Fallacies e-book, as published by the Society of Actuaries (SOA):

'The Best Model Doesn’t Win' by Max J. Rudolph, published in 2014

The readability outcome is as follows:


Readability Score: 'The Best Model Doesn’t Win'

Reading Ease
A higher score indicates easier readability; scores usually range between 0 and 100.
Flesch Reading Ease score: 48.1

Grade Levels
A grade level (based on the USA education system) is equivalent to the number of years of education a person has had. Scores over 22 should generally be taken to mean graduate-level text.
Grade levels according to the various readability formulas: 10.3, 12.9, 14.2, 9.5 and 10.2
Average grade level: 11.4

Text Statistics
Character Count 7,611
Syllable Count 2,531
Word Count 1,495
Sentence Count 98
Characters per Word 5.1
Syllables per Word 1.7
Words per Sentence 15.3


Actuarial Texts
With regard to public financial or actuarial publications, a FRES score of around 50 assures that your publication reaches a wide audience. Even in case you're publishing an article at university level, try to keep the FRES score as high as possible.

If you write an academic paper, you may use the online application Word and Phrase to measure the percentage of academic words. Try to keep this percentage below 20% to keep your document readable. The publication 'The Best Model Doesn’t Win' would score 17% on academic words......


Finally
Next time you write a document or make a PPT presentation, don't forget to test its readability!




Links:
WORD AND PHRASE