
Sunday 26 April 2009

Risky Bankers Need Swiss Cheese Not VaR

Financial Risk Management is Too Risky

The failures by banking institutions across the world over the last couple of years would have been remarkable in almost any industry. However, when they take place in institutions that are fundamentally all about risk management, closely overseen by phalanxes of regulators, they’re quite extraordinary. Sadly the banking industry was too focused on profits to remember the basic rule of investment.

If pharmaceutical companies or airlines suffered from the same type of risk management failures as the banks we’d all be dying of aspirin overdoses and ducking for cover as airliners crashed in our back gardens. These other industries have more nuanced models of managing risk, relying on combinations of methods. It’s about time the banks learned about the Swiss Cheese Model of Risk.

What Went Wrong?

To summarise a vast range of problems in simple terms, the people running the banks, the credit rating agencies and the regulatory bodies didn’t have a clue about the limitations of the risk management models they were all using. They were all looking at the same data and using the same models. And all drawing the same conclusions. Which were wrong.

The risk management models used by the banks were, and are, highly sensitive to their input conditions. So long as the world continues to behave as the models require, they’re perfectly fine at assessing risk. As soon as the world gets bored with the models and goes off to do something else they stop working – just at the point where they’re needed most.

Unfortunately the sheer complexity of the deals being done meant that most people with oversight weren’t able to assess the risks being taken and simply relied on the models. To my regular readers my apologies, but we’re back to Man With A Hammer Syndrome. Again.

Northern Rock

One striking – if simple – example of this was the failure of the UK’s Northern Rock. This regional bank was primarily a mortgage lender with limited retail deposits from customers such as you or me. Most of the funds they lent were raised on the wholesale markets, borrowing from other commercial lenders.

The UK’s financial regulator was so comfortable with the bank’s borrowing practices that it allowed it to reduce its capital requirements – the amount of cash it must keep on tap at all times – in 2007. Within months the bank had suffered a run on its deposits, and by 2008 it had been nationalised.

NR’s problem was that its models failed to account for possible volatility in its funding costs, a problem exacerbated by its habit of lending up to 125% of the value of borrowers’ properties. Anyone with half a brain could have foreseen that falls in property prices would make wholesale lenders nervous about the bank’s loan book, leading them to widen spreads and reduce liquidity. Yet because the models generated numbers that said everything was OK, neither bank executives nor regulators saw anything to get worried about.

Value At Risk

Underpinning the financial regulation of the banks is a measure known as “Value at Risk” or VaR. This is essentially an estimate of how much a portfolio can be expected to lose, at a given level of confidence, under ordinary market conditions.

Its aim is to estimate the loss that a portfolio of investments should only exceed with a given probability. So a one-day 5% VaR of $1 million means there’s a 5% chance of the portfolio losing more than $1 million over a single day – in other words you’d expect such a loss roughly once every twenty trading days, all things being equal (which they never are, of course). A day on which the loss exceeds the VaR figure is called a VaR break.
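
To make the arithmetic concrete, here’s a minimal sketch of the two commonest ways of arriving at that number – one assuming returns are normally distributed, one simply reading off the historical distribution. The portfolio size and the return series are invented purely for illustration.

```python
import numpy as np

# Hypothetical figures, purely for illustration: a $20m portfolio and a
# year's worth of made-up daily returns (mean 0.05%, stdev 1%).
portfolio_value = 20_000_000
rng = np.random.default_rng(42)
daily_returns = rng.normal(loc=0.0005, scale=0.01, size=250)

# Parametric ("variance-covariance") VaR: assume returns are normal and
# read off the 5th percentile; 1.645 is the relevant normal quantile.
z_95 = 1.645
parametric_var = (z_95 * daily_returns.std() - daily_returns.mean()) * portfolio_value

# Historical VaR: skip the distributional assumption and take the 5th
# percentile of the returns actually observed.
historical_var = -np.percentile(daily_returns, 5) * portfolio_value

print(f"One-day 5% VaR (parametric): ${parametric_var:,.0f}")
print(f"One-day 5% VaR (historical): ${historical_var:,.0f}")
```

Both versions lean entirely on the recent past behaving like the near future – which is precisely the assumption that fails when it matters.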

VaR was developed in response to the oft-observed extreme behaviour of markets – the so-called Black Swans of Nassim Taleb. These happen more regularly than most risk models would predict and the losses they cause can easily wipe out the gains made in between extreme events. The idea was that by measuring the increase in VaR breaks it would be possible to predict the likelihood of looming Black Swans.
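
The break-counting idea is easy to sketch as a backtest: estimate VaR each day from a rolling window of past returns, then count how often the next day’s loss blows straight through it. The return series below is simulated (calm days followed by a fat-tailed, stressed stretch), so the numbers are illustrative only – the point is the mechanics, and the fact that breaks cluster when the regime changes.

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated daily returns: mostly calm, then a fat-tailed, stressed stretch.
calm = rng.normal(0, 0.01, size=950)
stressed = rng.standard_t(df=3, size=50) * 0.02
returns = np.concatenate([calm, stressed])

window, tail = 250, 5  # one-year rolling window, 5% tail
breaks = 0
for t in range(window, len(returns)):
    history = returns[t - window:t]
    var_estimate = -np.percentile(history, tail)  # historical 5% VaR, as a positive fraction
    if -returns[t] > var_estimate:                # today's loss exceeds yesterday's VaR
        breaks += 1

days = len(returns) - window
print(f"VaR breaks: {breaks} of {days} days ({breaks / days:.1%} vs. 5% expected)")
```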

Well, that seems to have worked out pretty much as predicted then. Not.

The Road To Hell

The trouble with VaR is that in the hands of people who don’t understand what it’s measuring it’s pretty much an unexploded bomb. It ticks along nearly all of the time, right up until you really need it. Then it stops working. And blows up.

The financial world already knew this – the collapse of Long-Term Capital Management in 1998 was directly down to the failure of its VaR-based risk models in an unexpected liquidity crisis. Risk management professionals knew about the problems; the trouble was that senior managers obsessed with financial targets didn’t want to hear: VaR says it’s OK, so it’s OK. Now shut up.

Meanwhile institutions started rewarding people for making money at low risk, as measured by VaR. The road to Hell is paved with good intentions and those managers who set up such compensation schemes ought to be well on their way to a good roasting right now.

If you understand the way VaR is calculated it’s easy to game it, and if you allow people to legally defraud you, they will. So with no proper crosschecks in place employees could take on enormous amounts of risk without any of it showing up in VaR. Which was fine, while the Black Swan stayed away. Once it took wing, though, the whole risk edifice collapsed.
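
To see how the gaming works, imagine a desk whose strategy amounts to selling catastrophe insurance: a small, steady premium on almost every day and a huge loss on the rare day the catastrophe arrives. The P&L figures below are invented, but they show how a 5% VaR can declare such a desk essentially riskless while its worst day is ruinous.

```python
import numpy as np

rng = np.random.default_rng(7)
n_days = 2_000

# Desk A: ordinary trading with roughly symmetric daily P&L.
desk_a = rng.normal(loc=10_000, scale=50_000, size=n_days)

# Desk B: "sell catastrophe insurance" - pocket $60k on ~99% of days,
# lose $5m on the ~1% of days the event happens.
event = rng.random(n_days) < 0.01
desk_b = np.where(event, -5_000_000, 60_000)

for name, pnl in [("Desk A", desk_a), ("Desk B", desk_b)]:
    var_5pct = -np.percentile(pnl, 5)   # one-day 5% VaR
    print(f"{name}: average ${pnl.mean():,.0f}/day, "
          f"5% VaR ${var_5pct:,.0f}, worst day ${pnl.min():,.0f}")

# Desk B's VaR comes out negative - at the 5% level the model thinks the
# desk can't lose money at all - yet its worst day dwarfs anything Desk A
# can manage. Reward people on profit per unit of VaR and Desk B wins.
```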

Alternatives to VaR

It’s not that VaR is bad; it’s just been overused and misapplied. Any averagely intelligent person could see that Northern Rock’s business model was highly risky, yet the models said it wasn’t, so everyone carried on.

In other industries the inability of any single risk model to measure total risk is well understood. Invariably failures in managed systems involve multiple interacting errors which combine to cause the fatal flaw. So Northern Rock borrowed too heavily on the wholesale markets, lent too riskily to heavily indebted borrowers, its regulator was too complacent about the risks of its processes and the risk management models were trusted too much.

The Swiss Cheese Model of Risk

The Swiss Cheese Model of risk addresses exactly this kind of problem. Originally developed to explain accidents in complex organisations, and now a staple of aviation and healthcare safety, it envisages a system’s defences as layers of holey Swiss cheese sliding backwards and forwards past one another. For a system failure to occur the holes must line up just at the point where some external event hits the system, passing through all of the layers and causing a critical malfunction.

The inventor of the model, James Reason, argued that accidents occurred due to four levels of failures – organisational influences, unsafe supervision, preconditions for unsafe acts and the unsafe acts themselves. Sound familiar?
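
A toy simulation makes the point. Treat each of Reason’s four levels as a separate slice of cheese with its own (entirely invented) probability of letting an error through; an accident only happens when a single error slips through every slice at once.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented per-layer probabilities that a given error slips through.
layers = {
    "organisational influences": 0.10,
    "unsafe supervision":        0.15,
    "preconditions":             0.20,
    "unsafe acts":               0.25,
}

# If the layers fail independently, the chance of all the holes lining up
# is just the product of the individual probabilities.
p_accident = np.prod(list(layers.values()))
print(f"Analytic chance of all holes lining up: {p_accident:.3%}")  # ~0.075%

# Simulate a million errors hitting the system to confirm.
n_errors = 1_000_000
slipped_through = np.ones(n_errors, dtype=bool)
for p in layers.values():
    slipped_through &= rng.random(n_errors) < p
print(f"Simulated accident rate: {slipped_through.mean():.3%}")
```

The catch, of course, is that the arithmetic only works if the layers are genuinely independent. When executives, rating agencies and regulators all stare at the same VaR numbers, the holes in every slice line up by construction – which is a fair description of what happened to the banks.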

In Reason’s own words:
Perhaps the most important distinguishing feature of high reliability organisations is their collective preoccupation with the possibility of failure. They expect to make errors and train their workforce to recognise and recover them. They continually rehearse familiar scenarios of failure and strive hard to imagine novel ones.
Anyone recognise banks in that description? No, me neither.

Implementing Better Risk Management

This really isn’t too hard to implement. The financial industry’s over-reliance on a single measure of risk – a measure known to be flawed – was crazy. Worse still, the industry’s regulators have made VaR a requirement under the new Basel II rules, thus enshrining the approach’s weaknesses in regulation and simultaneously blessing a method that substitutes model output for basic intelligence.

At root, though, everyone involved in this game ignored that basic rule of investment – you can't expect to make large profits without taking the risk of large losses. When those gains start to mount up, it’s usually time to take a long, hard look at where the money’s coming from and where the risk lies. All too often it ends up with us, as shareholders and taxpayers.


Related Posts: Alpha and Beta: Beware Gift Bearing Greeks, Regression to the Mean: Of Nazis and Investment Analysts
