
Wednesday 9 February 2011

When Muddled Modellers Model Muddles

Hat, In, Cat, The

If a muddled modeller models a muddle do they end up with a muddle model or a model muddle?

Sadly Dr. Seuss’s most famous creation isn’t about to appear and rescue us from this conundrum. The problem is known as model risk and it’s what we often end up with when we start trying to model financial systems: a muddle rather than a model. Which helps explain how people who don't exist can get mortgages they can't pay.

Flying Straight

Financial markets have always been subject to risk; it’s inherent in the nature of the beasts. The introduction of mathematical, quantitative models was intended to simplify the analysis of these risks and to make it easier for investors to make informed decisions. However, while the introduction of these models may have been beneficial in many respects it also created the possibility of a different problem – risk arising from the models themselves.

How to describe this? Well, it’s like the introduction of fly-by-wire computer systems to fly aircraft. This actually makes them far safer, but adds the new risk of the computer going wrong. Only this is a really bad analogy, because fly-by-wire has indeed made flying much safer whereas quantitative financial modelling has simply increased the risks through insufficient attention to the issues of model risk.

Model Choices

By definition a model is an inaccurate representation of whatever it seeks to describe. We can’t model everything – there are more things in heaven and Earth than we can even think of, let alone model – so this is inevitable. Unfortunately there are almost infinite ways in which a model can go wrong and the financial industry has tried out most of them on an unsuspecting public.

To make a model we have to start with an assumption that inputs and outputs are causally related. As we saw in Twits, Butter and the Super Bowl Effect you can’t rely on correlation to prove causality. Even if they get that right, modellers must also choose to model the things that matter most while ignoring those that don’t. And, of course, sometimes they’ll get this wrong.

Model Risk

As Emanuel Derman describes in Model Risk, you can have models which are incorrect because they fail to model what’s required; models which are correct but provide an incorrect solution due to specification errors; models which are correct but are used incorrectly; models which give the wrong answers due to the limitations of microprocessor technology; models which are implemented wrongly; and models which turn out to be wrong because the data they were based on turns out not to be predictive of the future.

And sometimes there are no models that work at all. As Derman says:
“In terms of risk control, you’re worse off thinking you have a model and relying on it than in simply realizing there isn’t one”.

Model Arms Races

In fact, when you look at the muddles that models can cause you have to wonder why they get used at all. The answer, in general, is that they do actually help manage risk, but the side-effect is that their very existence encourages an arms race, as the developers of ever more exotic and potentially toxic forms of securities can give full rein to their imaginations. This is only possible if you can predict the risks that you’re taking, and the development of such models, at least in theory, permits exactly this.

This can lead to the biggest model muddle of all: the use of the models can change the world that they’re modelling and invalidate the models. Now that’s a proper muddle.

Subprime Muddles

If we wind back to the sub-prime mortgage crisis we can trace the origin of the problem to the insurance offered to purchasers of collateralised debt obligations (CDOs). CDOs were simply packages of mortgages split into different tranches, offering different amounts of reward in return for assuming different amounts of risk. To simplify drastically, if you were prepared to buy the highest-risk tranche you got the highest interest rate, but you would be the first to lose your money if the mortgage borrowers stopped repaying.
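To make the tranche mechanics concrete, here’s a minimal sketch of a loss waterfall in Python; the tranche names, sizes and loss figures are invented purely for illustration and aren’t drawn from any real CDO:

# A toy CDO loss waterfall: losses hit the riskiest (equity) tranche first,
# then the mezzanine, and only reach the senior tranche once the others
# are wiped out. Tranche names and sizes are purely illustrative.

def allocate_losses(pool_loss, tranches):
    # tranches is a list of (name, size) pairs ordered riskiest-first;
    # returns a dict mapping each tranche to the loss it absorbs.
    losses = {}
    remaining = pool_loss
    for name, size in tranches:
        hit = min(remaining, size)
        losses[name] = hit
        remaining -= hit
    return losses

# A 100-unit mortgage pool carved into 5% equity, 15% mezzanine and 80% senior.
tranches = [("equity", 5.0), ("mezzanine", 15.0), ("senior", 80.0)]
for pool_loss in (2.0, 10.0, 30.0):
    print(pool_loss, allocate_losses(pool_loss, tranches))
# With losses of 2 only the equity tranche suffers; at 10 the mezzanine is
# hit as well; at 30 even the 'safe' senior tranche starts losing money.

The “highest risk tranche” of the paragraph above is the equity slice here: it earns the highest interest precisely because it stands first in line for losses.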

The purchasers of CDOs offset the risk of these dubious derivatives by buying insurance from insurance companies like AIG; and AIG and co were prepared to offer insurance because their quantitative models, based on historical data of mortgage defaults, told them it was safe to do so. Unfortunately this triggered a chain of behavioural consequences which invalidated the data and hence the models.

Muddled Incentives

In essence the purchasers of CDOs were happy to buy as many of these securities as they could get their hands on, because they could insure the risk. The mortgage lenders were happy to sell as many mortgages as they could, because they could package them up as CDOs and pass on the risk. To sell as many mortgages as possible the lenders heavily incentivised mortgage brokers to push them; the brokers, presumably because they weren’t carrying the risk, were happy to ignore lending basics like deposits and proof of income or, in some cases, issues like whether the borrower actually existed at all.

So the brokers earned commissions, lots and lots of commissions, by selling mortgages, lots and lots of mortgages, often on introductory teaser rates, to people who had never bought homes before because no one would lend to them largely on account of their lack of deposits, income or corporeal bodies. Hence, these people had never appeared in the data before and the risk models of the insurers didn’t take them into account. RIP, risk models.
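To see why that kills the insurers’ sums, here’s a back-of-the-envelope sketch, again in Python; the default rates, loss severity and pricing margin are entirely made-up numbers, not figures from any insurer’s actual models:

# Toy illustration of under-priced insurance: the premium is set from the
# default rate seen in historical data, but the mortgage pool has filled up
# with a new kind of borrower the data never contained. Numbers are invented.

historical_default_rate = 0.02   # default rate seen in the historical data
loss_given_default = 0.5         # fraction of the insured amount lost per default
pricing_margin = 1.5             # insurer's cushion over the expected loss

premium = historical_default_rate * loss_given_default * pricing_margin
print("premium charged per unit insured:", round(premium, 3))              # 0.015

actual_default_rate = 0.12       # no-deposit, no-income borrowers default far more often
expected_payout = actual_default_rate * loss_given_default
print("expected payout per unit insured:", round(expected_payout, 3))      # 0.06
print("expected shortfall for the insurer:", round(expected_payout - premium, 3))  # 0.045

The model isn’t wrong about the borrowers it was calibrated on; the act of offering the insurance changed who was borrowing, which is exactly the reflexive muddle described above.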

Muddle Risk

The sorry story is explicated by Austin Murphy in An Analysis of the Financial Crisis of 2008: Causes and Solutions:
“The premiums charged on the credit default swaps [by the insurance companies] do not appear to have provided sufficient compensation for the higher default rates on mortgages with lower (or no) down payments, especially where no documentation was required and no human credit analysis was undertaken.

The problem of under-pricing the insurance payments on credit default swaps may have been at least partially exacerbated by the mathematical models of the insurers not fully allowing for the rising defaults that normally occur on adjustable rate mortgages as the interest rate rises following initially low teaser rates”.

This merry muddle of a story doesn’t fall into any of the model risks described by Derman. It’s a different class of problem, where the model itself created the incentives for various parties to game the system at the expense of the organisations running the models. Other parties got caught because they didn’t understand the nature of the risks, usually because they outsourced their risk management to the Credit Rating Agencies which, as we saw in Credit Rating Agencies: A Market Failure?, turned out to be another muddle in their own right.

Setting the Boundaries

However, don't think it's as simple as that ... the evidence indicates that some of the originators of the dodgy loans knew perfectly well the risks that they were running and shifted these onto other less wary parties: their risk models, or at least the intuitions of their risk managers, were pretty good. This story is too complicated simply to blame it on complexity or muddled modellers.

Which is how you would expect it to be: no amount of mathematical modelling can eliminate the need for knowledge and experience and feet on the ground. These are complementary disciplines because you can’t replace an understanding of human nature with a model that incompletely represents a denuded subset of existence and expect to reduce your risks.

Of course, this is exactly what many financial institutions did do and, one day, what they’ll do again. Sadly they’re not risk modelling, they’re risk muddling, and in their muddling they’re creating new risks which their models can’t cope with and which inevitably cause us investors to lose out. Yet this isn’t something we should just accept: after all, we don’t expect airplanes to regularly fall out of the sky. It’s time to rethink the acceptable boundaries of model risk.


Related articles: Mandelbrot's Mad Markets, Be A Sceptical Economist, Your Financial Horoscope

7 comments:

  1. Isn't this exactly a case of "models which are correct but provide an incorrect solution due to specification errors"? The models adequately described the risks of the market before the model was used and lending increased.

    What was needed was a new model based on what was likely to happen to lending (and defaulting) if the first model was applied - and then a new model, and another, in some kind of recurrence relation. Hopefully the repeated models might stabilise around a new usable meta-model. If they don't then the models are not fit for purpose.

  2. I mostly agree with that: I think the real issue is that any system that includes humans is intrinsically affected by reflexivity: the system changes the way people behave which changes the system. So the models need to adapt as the responses to them change and it's unlikely that process will ever end.

    There are also (at least) two problems underlying implementing this adaptability. Firstly there's figuring out what the changes are quickly enough to change the models before something goes badly wrong. Secondly there is often quite strong pressure to avoid changing the models if they'll have a negative impact on business expectations. I covered this in the article on Risk, Reality and Richard Feynman.

    The brutal reality is, I think, that if any business comes to rely on models too heavily without understanding their intrinsic limitations then they're more risky than they might seem - and investors should behave accordingly.

  3. And then you have moral hazard. Is there any incentive for the businesses to care whether their model is profitable and accurate in the long term, when they can see that it will produce excellent returns for them in the short term and the downside will be shared with the whole of society?

    (Sorry for posting twice - I normally just read and move on but this one has really piqued my interest)

  4. I don't usually reply twice, but intelligent comments are hard to ignore :)

    As I wrote last week, although it's clear that UK savers were compromised by moral hazard it's not clear that business managers were: if they had been they'd have worked their share options far harder. It looks more like they didn't understand the risks they were taking, or at least they didn't want to understand them.

    In the end businesses are run by people and it's the people that make the mistakes. I don't think most of these people felt their organisations were too big to fail, I think they felt they were too clever to let them fail.

  5. Some good thoughts in here, although I do not think models are intrinsically good or bad. Models are a tool to help make reasoned decisions but obviously cannot replace common sense and other forms of judgment. Models can definitely get you in trouble but a back of the napkin approach can do the same. If the point is that we should all be suspicious of models, I think that is correct. However, I would argue models can be a useful tool to supplement other analysis. - Adrian Meli

  6. Alpha is merely proof of a flawed risk metric. There must be some universal conservation law that says risk is never created or destroyed, only rearranged. It should have been a red flag when splitting the credits into pieces netted out a value larger than what went in. I imagine the free money made it hard for anyone involved to doubt the existence of the philosopher's stone.

  7. "To make a model we have to start with an assumption..."

    That is the fatal flaw in soft-science models. It reminds me of the recipe for chicken soup in The Economist's Cookbook: "first assume a chicken."
