
Sep 26, 2011

Small Population Compliance Samples

My last post, 'Compliance: Sample Size', demonstrated how to set up an efficient sampling method for compliance tests in the case of large populations.

What if the population size is relatively small?, some actuaries asked me....

In that case you can use the hypergeometric distribution (instead of the beta distribution) to calculate confidence levels.

Here's the same example as I used in my blog 'Compliance: Sample Size', but now for a population of 100.
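Before walking through the example, here is a minimal sketch of the hypergeometric confidence calculation (the function name is mine, not the spreadsheet's): given a clean sample of size n from a population of N files, the confidence that at most D files are noncompliant is one minus the probability of drawing zero noncompliant files when exactly D are present.

```python
from math import comb

def clean_sample_confidence(N, n, D):
    """Confidence that a population of N files contains at most D
    noncompliant files, given a sample of n files with zero
    noncompliant observations (hypergeometric)."""
    p0 = comb(N - D, n) / comb(N, n)  # P(0 noncompliant in sample | exactly D in population)
    return 1 - p0

# N=100, at most 5% noncompliance (D=5), fictitious prior sample of 25:
print(round(clean_sample_confidence(100, 25, 5), 2))  # 0.77
# Adding 20 more clean samples (45 in total) lifts it above 95%:
print(clean_sample_confidence(100, 45, 5) > 0.95)     # True
```

These two numbers reproduce the 77% and the 'additional sample of 20' that appear in the example below.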


'Compliance Check' Example (N=100)
As you probably know, pension advisors have to be compliant and meet strict federal, state and local regulations.

The employee, the sponsoring employer, as well as the insurer or pension fund, all have a strong interest that the involved 'Pension Advisor' actually is, acts and remains compliant.

PensionAdvice
A professional local Pension Advisor firm, 'PensionAdvice' (fictitious name), wants 'compliance' to become a 'calling card' for the company. The target is that 'compliance' will become a competitive advantage over its rivals.

You, as an actuary, are asked to advise on the issue of how to verify PensionAdvice's compliance....... What to do?

  • Step 1 : Compliance Definition
    First you ask the board of PensionAdvice what compliance means.
    After several discussions, compliance is in short defined as:

    1. Compliance Quality
      Meeting the regulator's (12 step) legal compliance requirements
      ('Quality Advice Second Pillar Pension')

    2. Compliance Quantity
      A 100% compliance target for PensionAdvice's portfolio, with a 5% noncompliance rate (error rate) as a maximum, on the basis of a 95% confidence level.
  • Step 2: Check on the prior beliefs of management
    On the basis of earlier experience, management estimates the actual NonCompliance rate at 8%, with 90% confidence that the actual NonCompliance rate is 8% or less:

    If management has no idea at all, or if you don't want to include management's opinion, simply set both (NonCompliance rate and confidence) at 50% (= indifferent) in your model.

  • Step 3: Define Management Objectives
    After some discussion, management defines the (target) Maximum acceptable NonCompliance rate at 5% with a 95% confidence level (=CL).

  • Step 4: Define population size
    In this case it's simple. PensionAdvice management knows for sure that the portfolio they want to check for compliance consists of 100 files: N=100.

    This is how steps 2 to 4 look in your spreadsheet...



  • Step 5 : Define Sample Size
    Now we get to the testing part....

    Before you start sampling, please notice how the prior beliefs of management are rendered into a fictitious sample (test number = 0) in the model:
  • In this case the prior beliefs match a fictitious sample of size 25 with zero noncompliance observations.
  • This fictitious sample corresponds to a confidence level of 77% on the basis of a maximum (population) noncompliance rate of 5%.
[ If you think the rendering is too optimistic, you can change the fictitious number of noncompliance observations from zero into 1, 2 or another number (examine in the spreadsheet what happens and play around). ]


To lift the 77% confidence level to 95%, it would take an additional sample size of 20 - with zero noncompliance outcomes (you can check this in the spreadsheet).
As sampling is expensive, your employee Jos runs a first test (test 1) with a sample size of 10 with zero noncompliance outcomes. This looks promising!
The cumulative confidence level has risen from 77% to over 89%.


You decide to take another limited sample with a sample size of 10. Unfortunately this sample contains one noncompliant outcome. As a result, the cumulative confidence level drops to almost 75% and another sample of size 20 with zero noncompliant outcomes is necessary to reach the desired 95% confidence level.

You decide to go on, and after a few other tests you finally arrive at the intended 95% cumulative confidence level. Mission accomplished!

Evaluation
The interesting aspects of this method are:

  1. Prior (weak or small) samples, or beliefs about the true error rate and confidence level, can be added to the model in the form of an (artificial) additional (pre)sample.

  2. As the sample size increases, it becomes clear whether the defined confidence level will be met and whether adding more samples is appropriate and/or cost effective.
This way unnecessary samples are avoided, sampling becomes as cost effective as possible, and auditor and client can dynamically develop a grip on the distribution.

Another great advantage of this incremental sampling method is that if noncompliance shows up at an early stage, you can:
  • stop sampling, without having incurred major sampling costs
  • improve compliance of the population by means of additional measures, based on the learnings from the noncompliant outcomes
  • start sampling again (from the start)

If - for example - test 1 had had 3 noncompliant outcomes instead of zero, it would take an additional test of size 57 with zero noncompliant outcomes to achieve a 95% confidence level. It's clear that in this case it's better to first learn from the 3 noncompliant outcomes what's wrong or needs improvement, than to go on with expensive sampling against your better judgment.


D. Conclusions
On the basis of a prior belief that - with 90% confidence - the population is at most 8% noncompliant, we can now conclude that after an additional total sample of size 40, PensionAdvice's noncompliance rate is 5% or less with a 95% confidence level.

If we want to be 95% sure without 'prior belief', we'll have to take an additional sample of size 25 with zero noncompliant outcomes as a result.

E. Check out: DOWNLOAD EXCEL

You can download the next Excel spreadsheets to check the Demo or to set up your own compliance test:

- Small population Compliance test DEMO
- Small population Compliance test BLANK
- Large population Compliance test

Enjoy!

Sep 25, 2011

Compliance: Sample Size

How to set an adequate sample size in case of a compliance check?

This simple question ultimately has a simple answer, but can become a "mer à boire" (an endless task) in case of a 'classic' sample size approach.....

In my last-but-one blog called 'Pisa or Actuarial Compliant?', I already stressed the importance of checking compliance in the actuarial work field.

Compliance is important not only from an actuarial perspective, but also from a core business viewpoint:

Compliance is the main key driver for sustainable business

Minimizing Total Cost by Compliance
A short illustration: We all know that compliance costs are part of Quality Control Cost (QC Cost) and that the cost of NonCompliance (NC Cost) increases with the noncompliance rate.

'NC Cost' mainly relates to:
  • Penalties or administrative fines of the (legal) regulators
  • Extra cost of complaint handling
  • Client claims
  • Extra administrative cost 
  • Cost of legal procedures

Sampling costs - in their turn - are a (substantial) part of QC Cost.

More generally, it's the art of good compliance management practice to determine the maximum noncompliance rate that minimizes a company's total cost.



Although this approach is more or less standard, in practice companies' revenues depend strongly on the level of compliance. In other words: if compliance increases, revenues increase and variable costs decrease.

This implies that introducing 'cost driven compliance management' will - in general - (1) reduce the total cost and (2) mostly make room for additional investments in 'QC Cost' to improve compliance and to lower variable and total cost.

In practice you'll probably have to calibrate (together with other QC investment costs) to find the optimal cost (investment) level that minimizes the total cost as a percentage of the revenues.


As is clear, modeling this kind of stuff is no work for amateurs. It's real risk management craftsmanship. After all, the effect of cost investments is not certain and depends on all kinds of probabilities and circumstances that need to be carefully modeled and calibrated.

From this meta perspective, let's descend to a down-to-earth 'real life' example.

'Compliance Check' Example
As you probably know, pension advisors have to be compliant and meet strict federal, state and local regulations.

The employee, the sponsoring employer, as well as the insurer or pension fund, all have a strong interest that the involved 'Pension Advisor' actually is, acts and remains compliant.

PensionAdvice
A professional local Pension Advisor firm, 'PensionAdvice' (fictitious name), wants 'compliance' to become a 'calling card' for the company. The target is that 'compliance' will become a competitive advantage over its rivals.

You, as an actuary, are asked to advise on the issue of how to verify PensionAdvice's compliance....... What to do?


  • Step 1 : Compliance Definition
    First you ask the board of PensionAdvice what compliance means.
    After several discussions, compliance is in short defined as:

    1. Compliance Quality
      Meeting the regulator's (12 step) legal compliance requirements
      ('Quality Advice Second Pillar Pension')

    2. Compliance Quantity
      A 100% compliance target for PensionAdvice's portfolio, with a 5% noncompliance rate (error rate) as a maximum, on the basis of a 95% confidence level.

    The board has no idea about the (f)actual level of compliance. Compliance was - until now - not addressed at a more detailed employer dossier level.
    Therefore you decide to start with a simple sample approach.

  • Step 2 : Define Sample Size
    In order to define the right sample size, portfolio size is important.
    After a quick call, PensionAdvice gives you a rough estimate of their portfolio: around 2,500 employer pension dossiers.

    You pick up your 'sample table spreadsheet' and are confronted with the first serious issue.
    An adequate sample (95% confidence level) would require a minimum of 334 samples. With around 10-20 hours of research per dossier, the cost of a sampling project this size would get way out of hand and become unacceptable, as it would raise the total cost of PensionAdvice (check this, before you conclude so!).

    Lowering the confidence level doesn't solve the problem either. Sample sizes of 100 and more are still too costly, and confidence levels of less than 95% are of no value in relation to the client's ambition (compliance = calling card).
    The same goes for a higher - more than 5% - 'Error Tolerance'.....

    By the way, in the case of small populations things do not turn out better. To achieve relevant confidence levels (>95%) and error tolerances (<5%), samples must have a substantial size in relation to the population size.
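For reference, the classic sample size calculation (with finite population correction) behind that number of 334 can be sketched as follows; the function name is mine, and 50% variability is the 'safe' maximum assumption.

```python
from math import ceil

def classic_sample_size(z, e, N, p=0.5):
    """Classic sample size for confidence z-score z, error tolerance e,
    population size N and expected variability p (0.5 = 'safe' maximum)."""
    n0 = z**2 * p * (1 - p) / e**2        # infinite-population sample size
    return ceil(n0 / (1 + (n0 - 1) / N))  # finite population correction

print(classic_sample_size(1.96, 0.05, 2500))  # 334 for PensionAdvice
print(classic_sample_size(1.96, 0.05, 100))   # still 80 out of only 100 files
```

The second line illustrates the small-population problem: even with N=100 the classic approach demands sampling most of the population.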


    You can check all this out 'live' in the next spreadsheet and modify the sampling conditions to your own needs. If you don't know the variability of the population, use a 'safe' variability of 50%. Click 'Sample Size II' for modeling the sample size of PensionAdvice.



  • Step 3: Use Bayesian Sample Model
    The above standard approach of sampling could deliver smaller samples if we were sure of a low variability.

    Unfortunately we (often) do not know the variability upfront.

    Here comes the help of a method based on efficient sampling and Bayesian statistics, as clearly described by Matthew Leitch.

    A simplified version of Leitch's approach is based on Laplace's famous 'Rule of succession', a classic application of the beta distribution ( Technical explanation (click) ).

    The interesting aspects of this method are:
    1. Prior (weak or small) samples, or beliefs about the true error rate and confidence level, can be added to the model in the form of an (artificial) additional (pre)sample.

    2. As the sample size increases, it becomes clear whether the defined confidence level will be met and whether adding more samples is appropriate and/or cost effective.
  • This way unnecessary samples are avoided, sampling becomes as cost effective as possible, and auditor and client can dynamically develop a grip on the distribution. Enough talk, let's demonstrate how this works.
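The core of the rule of succession, for the zero-failure case, fits in a few lines (a sketch with my own function names; the spreadsheet may differ in details): a clean sample of size n gives confidence 1 - (1-p)^(n+1) that the true noncompliance rate is at most p.

```python
from math import ceil, log

def confidence_clean(n, p):
    """Rule of succession: confidence that the true noncompliance rate
    is at most p, after n samples with zero noncompliant outcomes."""
    return 1 - (1 - p)**(n + 1)

def required_clean_sample(p, cl):
    """Smallest clean sample size giving confidence cl that rate <= p."""
    return ceil(log(1 - cl) / log(1 - p)) - 1

# 95% confidence that the noncompliance rate is at most 5%:
print(required_clean_sample(0.05, 0.95))  # 58 clean samples
```

Compare this with the 334 samples the classic approach demanded: prior information and incremental testing make the Bayesian route far cheaper.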

Sample Demonstration
The next sample is contained in an Excel spreadsheet that you can download and that is presented in a simplified spreadsheet at the end of this blog. You can modify this spreadsheet (online!) to your own needs and use it for real life compliance sampling. Use it with care in the case of small populations (N<100).

A. Check on the prior beliefs of management
Management estimates the actual NonCompliance rate at 8%, with 90% confidence that the actual NonCompliance rate is 8% or less:



If management has no idea at all, or if you don't want to include management's opinion, simply set both (NonCompliance rate and confidence) at 50% (= indifferent) in your model.

B. Define Management Objectives
After some discussion, management defines the (target) Maximum acceptable NonCompliance rate at 5% with a 95% confidence level (=CL).



C. Start Sampling
Before you start sampling, please notice how the prior beliefs of management are rendered into a fictitious sample (test number = 0) in the model:
  • In this case the prior beliefs match a fictitious sample of size 27 with zero noncompliance observations.
  • This fictitious sample corresponds to a confidence level of 76% on the basis of a maximum (population) noncompliance rate of 5%.
[ If you think the rendering is too optimistic, you can change the fictitious number of noncompliance observations from zero into 1, 2 or another number (examine in the spreadsheet what happens and play around). ]
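Under the rule-of-succession reading (a sketch; the spreadsheet may differ in rounding details), management's prior of '8% or less with 90% confidence' translates into this fictitious clean sample as follows:

```python
from math import ceil, log

# A clean sample of size n gives confidence 1 - (1-p)^(n+1) that the
# rate is at most p, so the fictitious prior sample solves
# 1 - (1 - 0.08)^(n+1) = 0.90 for n:
n_prior = ceil(log(1 - 0.90) / log(1 - 0.08)) - 1
print(n_prior)  # 27

# The same fictitious sample, judged against the 5% target rate:
print(round(1 - (1 - 0.05)**(n_prior + 1), 2))  # 0.76
```

This reproduces both the fictitious sample of 27 and the 76% starting confidence level.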

To lift the 76% confidence level to 95%, it would take an additional sample size of 31 with zero noncompliance outcomes (you can check this in the spreadsheet).
As sampling is expensive, your employee Jos runs a first test (test 1) with a sample size of 10 with zero noncompliance outcomes. This looks promising!
The cumulative confidence level has risen from 76% to over 85%.



You decide to take another limited sample with a sample size of 10. Unfortunately this sample contains one noncompliant outcome. As a result, the cumulative confidence level drops to almost 70% and another sample of size 45 with zero noncompliant outcomes is necessary to reach the desired 95% confidence level.

You decide to go on, and after a few other tests you finally arrive at the intended 95% cumulative confidence level. Mission accomplished!



The great advantage of this incremental sampling method is that if noncompliance shows up at an early stage, you can:
  • stop sampling, without having incurred major sampling costs
  • improve compliance of the population by means of additional measures, based on the learnings from the noncompliant outcomes
  • start sampling again (from the start)

If - for example - test 1 had had 3 noncompliant outcomes instead of zero, it would take an additional test of size 115 with zero noncompliant outcomes to achieve a 95% confidence level. It's clear that in this case it's better to first learn from the 3 noncompliant outcomes what's wrong or needs improvement, than to go on with expensive sampling against your better judgment.



D. Conclusions
On the basis of a prior belief that - with 90% confidence - the population is at most 8% noncompliant, we can now conclude that after an additional total sample of size 65, PensionAdvice's noncompliance rate is 5% or less with a 95% confidence level.

If we want to be 95% sure without 'prior belief', we'll have to take an additional sample of size 27 with zero noncompliant outcomes as a result.

E. Check out

Download the next spreadsheet and modify the sampling conditions to your own needs.


Finally
My apologies for this much too long blog. I hope I've succeeded in keeping your attention....


Related links / Resources

I. Download official Maggid Excel spreadsheets:
- Dynamic Compliance Sampling (2011)
- Small Sample Size Calculator

II. Related links/ Sources:
- 'Efficient Sampling' spreadsheet by Matthew Leitch
- What Is The Right Sample Size For A Survey?
- Sample Size
- Epidemiology
- Probability of adverse events that have not yet occurred
- Progressive Sampling (Pdf)
- The True Cost of Compliance
- Bayesian modeling (ppt)

Sep 12, 2011

Pisa or Actuarial Compliant?

When we talk about actuarial compliance, we usually limit this to our strict actuarial work field.
In a broader sense as 'risk managers', we (actuaries) have a more general responsibility for the sustainability of the company we work for.

Compliance is not just about security, checks, controls, protection, preventing fraud and ethical behavior. Moreover, compliance is the basis of adequate risk management and of delivering high-standard services and products to your company's clients.

Pisa Compliant
No matter how brilliant and professional our calculations, if the data on which these calculations are based are 'limited', 'of insufficient quality' or 'too uncertain', we as actuaries will finally fail.

Therefore, building actuarial sandcastles may be great art, but it is completely useless. Matthew 7:26 tells us: it's a foolish man who builds his actuarial house on the sand....

And so, let's take a look at whether we have indeed become 'Pisa Compliant', by checking if our actuarial compliance is built on sand or on solid ground. In other words: let's check if actuarial compliance itself is compliant.

Actuarial Data Governance
To open discussion, let's start with some challenging Data Governance questions:

  • Data quality compliance
    How is 'data quality compliance' integrated in your daily actuarial work? Have you addressed this issue? And if so, do you just rely on statements and reports of others (auditors, etc.), or can you agree upon the data quality standards (if there are any)? In other words: are the data, processes and reports you base your calculations on 100% reliable and guaranteed? If not, what's the actual confidence level of your data, and do you report this confidence level to the board?

  • Data quality Conformation
    Have you checked your calculation data set on the basis of samples or second opinions?

    And if so, do you approve of the methods used, the confidence level and the outcome of the data audit?

    Or do you just trust the blue eyes of the accountant or auditor and formally state you're "paper compliant"?

    Did you check whether client information, e.g. pension benefit statements, is not only in line with the administrative data, but also in line with insurance policy conditions or pension scheme rules?

  • Up to date, In good time
    To what quantitative level is the administrative data 'up to date', and is it transparent?

    Do you receive reporting and tracking of administrative backlogs and delays, and if so, how do you translate these findings into your calculations?

  • Outsourcing
    From a risk management perspective, have you formulated quantitative and qualitative demands (standards) in outsourced contracts, like 'asset management', 'underwriting'  and 'administration' contracts?

    Do you agree on these contracts, do 'outsourcing partners' report on these standards, and do you check these reports regularly at a detailed level (samples)?

And some more questions you have to deal with as an actuary:
  • Distribution Compliance
    Are the intermediaries, employers and customers your company deals with compliant? What's the confidence level of this compliance, and in case of partial noncompliance, what could be the financial consequences (claims)?

  • Communication Compliance
    Is communication with employees, customers, regulators, supervisors and shareholders compliant? Has your board (and you!) defined what compliance actually means in quantitative terms?

    Is 'communication compliance' based on information (delivery and check) or on communication?

    In this case, have you also checked if (e.g.) customers understood what you tried to tell them?

    Not by asking if your message was understood, but by quantitative methods (tests, polls, surveys, etc.) that indisputably 'prove' the customer really understood the message.

    Effective Communication Practice
    Never ask if someone has understood what you've said or explained. Never take for granted someone tells you he or she 'got the picture'.

    Instead, act as follows: at the end of every (board) presentation, ask that final and unique question whose answer assures you that your audience has really understood what you tried to bring across.

Checking Compliance
Now we get to the quantitative 'hard part' of compliance:

How to check compliance?

This interesting topic will be considered in my next blog.... ;-)

To lift a little corner of the veil, just a short practical tip to conclude this blog:

Compliance Sample Test
From a large portfolio you've taken a sample of 30 dossiers to check on data quality. All of them are found compliant. What's the upper limit of the noncompliance rate in case of a 95% confidence level?

This type of question is a typical case of:

“If nothing goes wrong, is everything alright?”

Answer.
The upper limit can be roughly estimated by a simple rule of thumb, called 'Rule of three'....



'Rule of three for compliance tests'
If no noncompliant events occurred in a compliance test sample of n cases, one may conclude with 95% confidence that the rate of noncompliance will be less than 3/n.

In this case one can be roughly 95% sure the noncompliance rate is less than 10% (= 3/30). Interesting, but slightly disappointing, as we want to chase noncompliance rates in the order of 1%.

Working backwards on the rule of three, a 1% noncompliance rate would require samples of 300 or more. Despite the fact that research for 46 international organizations showed that, on average, noncompliance cost is 2.65 times the cost of compliance, samples of this size are often (perceived as) too cost inefficient and not practicable.
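The rule of three is trivial to apply (a sketch; the function name is mine):

```python
def rule_of_three_upper_bound(n):
    """95% upper bound on the noncompliance rate after a clean
    sample of n cases (rule of three)."""
    return 3.0 / n

print(rule_of_three_upper_bound(30))  # 0.1 -> 10% upper bound
# Working backwards: a 1% upper bound needs n = 3 / 0.01 = 300 samples.
```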

Read my next blog to find out how to solve this issue....

Related Links:
- Actuarial Compliance Guidelines
- What Is The Right Sample Size For A Survey?
- Epidemiology
- Probability of adverse events that have not yet occurred
- The True Cost of Compliance (2011)
- 'Rule of three'
- Compliance testing: Sampling Plans (accounting standards) or Worddoc

Mar 5, 2011

Supervisory Compliant, is it enough?

Risk management is tricky business... Being 'Officially Compliant', 'Just Compliant' or in other words "Supervisory Compliant", is not enough to help your CEO survive with your company in the complex market battle!

Whether you're an Actuary or Risk Manager of an Insurance company, Bank or Pension Fund, the risk of being just 'Supervisory Compliant' is simply: bankruptcy!

Becoming 'Supervisory Compliant' in complex programs like Solvency-II, Basel III or Legal Pension Fund Risk Frameworks, consumes so much time and effort, that almost no time seems to be left for contemplating or doing the essential Risk Management work properly.

Just being 'Supervisory Compliant' implies: constantly running after the Supervisor to become officially compliant 'just in time', and not having enough time to think about the (f)actual relevant risks.

Supervisory Compliance becomes very frightening when Risk Appetite and Valuations are rashly based upon the minimum Supervisory requirements, as is (e.g.) the case in the Dutch Pension Fund legal framework. Boards stop thinking about the actual risks and feel compliant and satisfied once the Supervisory Compliance boxes are checked.

A new look at compliance
Let's take a look from a new point of view at the complete Risk Management Compliance Field:

In basis there are three types of 'being compliant':

  1. Supervisory Compliant
    When you're Supervisory Compliant, you officially comply with all legal Risk Management compliance requirements. Your Supervisor is happy...

  2. Professional Compliant
    You comply with your own professional Risk Management standards. You are happy... but what about your Supervisor? Comply or Explain....

  3. Success Compliant
    Being Success Compliant implies that all Risk Management requirements that are key to have success - e.g. key to survive in the market on the long run - are met.

Let's zoom in on some specific areas in this chart:

Bias areas
It's perhaps hard to admit, but in our attempt to be complete, we define and manage a lot of (small) risks that do actually exist, but are in fact not really relevant, or only of limited relevance, with regard to company continuity.

Distinctive Character area
The Distinctive Character area is perhaps the most interesting area. Getting a grip on this area urges us to 'Think outside the Circle'.

By doing so we'll be able to manage risks that our competitors fail to manage. Here we can achieve 'Distinctive Character' by managing risks more efficiently or by turning risks into profits. Examples are: derivatives that limit our investment risks, or specialized experience rating (rate making) on your portfolio on the basis of characteristic and unique risk profiles.

Tricky area
The tricky area is the area that consists of Supervisory Risks you tend not to find important, but that are very important for achieving success in the market. Tricky areas could e.g. be: Deflation Risk, Longevity Risk or Takeover Risk.

Reversed Thinking area
This is perhaps the most interesting risk area.

To explore this area you'll not only have to 'think outside of your circle', but - just like in reversed stress tests with Banks - also try to think backwards, to find out what could cause a certain event or loss.

This reversed thinking process succeeds best as a group. Group members should be professionals and non-professionals from different types of business, education and background.

A successful group mix could e.g. consist of: an actuary, an accountant, a manager, a marketing manager, a compliance officer, an employee, a client, a shareholder representative and, last but not least, the receptionist.

Finally.....
Try to find time to manage your company to new heights and stop being just 'Supervisory Compliant'.....

Jul 14, 2010

Solvency II Project Management Pitfalls

When you - just like me - wonder how Solvency (II) projects are being managed, join the club! It's crazy...: dozens of actuaries, IT professionals, finance experts, bookkeepers, accountants, risk managers, project and program managers, compliance officers and a lot of other semi-solvency 'disaster tourists' are flown in to join budget-unlimited S-II projects.

On top of it all, nobody seems to understand each other; it's a confusion of tongues.....

Now that the European Parliament has finally agreed upon the Solvency II Framework Directive (April 2009), everything should look ready for a successful S-II implementation before the end of 2012. However, nothing is farther from the truth.....

Solve(ncy) Questions in Time
The end of 2012 might seem a long way off...
While time is ticking, all kind of questions pop up like:
  • How to build an ORSA system and who owns it?
  • What's the relation between ORSA and other systems or models, like the Internal Model?
  • Where do the actuarial models and systems fit in?
  • What are financial, actuarial, investing and 'managing' parameters, what distinguishes them, who owns them and who's authorised and competent to change them?
  • How to connect all IT-systems to deliver on a frequent basis what S-II reporting needs......?
  • How to build a consistent S-II IT framework, while the outcomes from QIS-5 (6,7,...) are (still) not clear and more 'Qisses' seem to come ahead?
  • Etc, etc, etc, etc^10

The Solvency Delusion
Answering the above questions is not the only challenge. A real 'Solvency Hoax' and other pitfalls seem to be on their way....

It appears that most of the actuarial work has been done by calculating the MCR and SCR in 'Pillar I'.

It's scary to observe that the 'communis opinio' now seems to be that the main part of the S-II project is completed. Project members feel relieved and the 'Solvency II Balance Sheet' seems (almost) ready!

Don't rejoice..., it's a delusion! The main work in Pillar II (ORSA) and Pillar III (Reporting, transparency) still has to come, and - at this moment - only a few project managers know how to move from Pillar I to Pillar II.

Compliancy First, a pitfall?
With the Quantitative Impact Study (QIS-5) on its way (due date: October 2010) every insurer is focusing on becoming a well capitalized Solvency-II compliant financial institution.

There is nothing wrong with this compliance goal, but 'just' becoming 'solvency compliant' is a real pitfall and unfortunately not enough to survive in the years after 2010.

Risk Optimization
Sometimes, in the fever of becoming compliant, an essential part called "Risk Optimization" seems to be left out, as most managers only have an eye for 'direct capital effects' on the balance sheet and finishing 'on time', whatever the consequences......

Risk Optimization is - as we know - one of the most efficient methods to maximize company and client value. Here's a limited (check)list of possible Risk Optimization measures:


1. Risk Avoidance
- Prevent Risk
   • Health programs
   • Health checks
   • Certification (ISO, etc)
   • Risk education programs
   • High-risk transactions
      (identify, eliminate, price)
   • Fraud detection
      (identify, eliminate, price)
   • Adverse selection
      (identify, manage, price)

- Adjust policy conditions
   • Exclude or Limit Risk  
      (type,term)
   • Restrict underwriter
      conditions
      (excess, term, etc)

- Run-off portfolios/products

2. Damage control
- Emergency Plans (tested)
- Claims Service, Repair service
- Reintegration services


3. Risk Reduction
- Diversification

- Asset Mix, ALM
- Decrease exposure term
- Risk Matching
- Decrease mismatch
   AL/Duration
- Outsourcing, Leasing

4. Risk Sharing
- Reinsurance (XL,SL,SQ)
- Securitization, Pooling
- Derivatives, Hedging
- Geographical spread
- Tax, Bonus policy

5. Risk Pricing
- Exposure rating, Experience rating
- Credibility rating, Community rating
- Risk profile rating

6. Equity financing
- IPO, Initial Public Offering
- Share sale, Share placement
- Capital injections

Solvency-II Project Overview
Just to remind you of the enormous financial impact potential of 'Risk Optimization', and to keep your eye at 'helicopter view' level with regard to Solvency-II projects and achievements, here's a (non-complete but hopefully helpful) visual overview of what has to be done before the end of 2012.....

(Download big picture JPG, PDF)

Be aware that all Key Performance Indicators (KPIs), Key Risk Indicators (KRIs) and Key Control Indicators (KCIs) must be well defined and allocated. Please also keep in mind that one person’s KRI can be another’s performance indicator (KPI) and a third person’s control-effectiveness indicator.

Value Added Actions
As actuaries, we're in the position of letting 'Risk Optimization' work.
We're the 'connecting officers' in the Solvency Army, with the potential of convincing management and other professionals to take the right value added actions in time.

Don't be bluffed as an actuary; take a stand in your Solvency II project and add real value to your company and its clients.

Related Links:

- A Comparison of Solvency Systems: US and EU
- UK Life solvency falls under qis-5
- Determine capital add-on
- Reducing r-w assets to maximize profitability and capital ratios
- Risk: Who is who?
- Balanced scorecard including KRIs (2010)
- Solvency II, Piller II & III
- Risk Adjusted Return On Risk Adjusted Capital (RARORAC)
- ERM: “Managing the Invisible" (pdf; 2010)
- Unlocking the mystery of the risk framework around ORSA
- Risk  based Performance: KPI,KRI,KCI
- Risk of risk indicators (ppt;2004)
- Defining Risk Appetite
- Risk appetite ING KPI/KRI
- Board fit for S II?
- How to compute fund vaR?
- Technical Provisions in Solvency II
- Insurers should use derivatives to manage risk under Solvency II 
- Solvency Regulation and Contract Pricing in the Insurance Industry
- Overview and comparison of risk-based capital standards 
- Solvency II IBM
- Reinsurance: Munich Re  , Reinsurance solvency II