Imagine you’re a turkey. Every day you're approached by a man with a bucket of corn who feeds you. What kind of mental model of what happens when he appears do you think you’ll build up?
Gerd Gigerenzer, Director of the Max Planck Institute for Human Development and the Harding Center for Risk Literacy, uses this story to demonstrate the financial industry’s approach to modeling decision making under uncertainty. We pay rich financial services organizations to make predictions based on the past that are only ever correct by chance, in order to absolve ourselves of responsibility when they go wrong. It’s a waste of time, money and talent. Because, of course, Thanksgiving always comes.
Gigerenzer was lecturing last week at the Royal Institution in London. The RI is a venerable place, founded in 1799, home to Humphry Davy and Michael Faraday. No fewer than ten chemical elements were discovered there, and it has produced fifteen Nobel laureates. And Gigerenzer is a great lecturer, dedicated to communicating his story – the need for citizens to be risk savvy – in a clear and thought-provoking way, underpinned by a dry delivery and deadpan humor.
He culls anecdotes from a wide variety of his studies to make profound points about the way we as individuals should think about risk, and to make the point that risk and uncertainty are different: risk is measurable, uncertainty is not. Risk is about the probability of a mammogram saving a woman’s life – on average it won’t, by the way (to discover why you’ll need to read his new book – Risk Savvy: How to Make Good Decisions). Uncertainty is about whether the stock market will go up or down next year, which can't be predicted at all, not that that stops anyone trying.
Currency Exchange Snake Oil
As demonstrated in his lecture, and a constant theme on this blog, a lot of financial market prediction is based on the Turkey Illusion. One example: in an analysis of currency market predictions over the first decade of this century he neatly demonstrated that the predictions tracked the market with a one year delay. You might think that a child could do this armed with a crayon and some graph paper and you’d be right – but that doesn't stop corporations shelling out good money for securities snake oil.
Although the research in question comes from the great and the good of our financial institutions (I know, I know, that’s an oxymoron), and is based on algorithms protected like state secrets (perhaps not the best analogy in the age of Wikileaks), the basic trend is clear. When the average prediction one year falls short of the actual results, the next year’s prediction moves towards the previous year’s actual results – which then go and do something entirely different.
In fact, in most years the real exchange rates were outside even the most extreme guess – sorry, prediction – and in only one year did the estimates and the actual rates coincide. ‘Coincide’ being the operative term, it being clear that there’s no predictive power in the estimates whatsoever. So the question is why anyone keeps paying for this research – after all, if it has no predictive ability why do smart people and successful organizations keep shelling out for the stuff?
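The pattern is easy to reproduce. Here is a minimal sketch, using synthetic random-walk data rather than real exchange rates, of the crayon-and-graph-paper strategy the consensus forecasts resemble: predict that next year's rate will be this year's rate. By construction, such "forecasts" track the market with exactly a one-year delay and carry no predictive power.

```python
import random

random.seed(42)

# Synthetic yearly exchange rates: a random walk, standing in for a
# genuinely unpredictable series (illustrative data, not real rates).
rates = [1.0]
for _ in range(10):
    rates.append(rates[-1] * (1 + random.uniform(-0.1, 0.1)))

# A "forecast" that simply repeats last year's actual rate.
forecasts = rates[:-1]   # the prediction for year t is the rate at t-1
actuals = rates[1:]

for year, (f, a) in enumerate(zip(forecasts, actuals), start=1):
    print(f"year {year}: forecast {f:.3f}  actual {a:.3f}")
```

Plot the two series and the forecast line is just the actual line shifted right by one year – which is roughly what the study of consensus currency predictions found.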
Defensive Decision Making
The answer, Gigerenzer surmises, is down to something he calls Defensive Decision Making. This, in a nutshell, is where people make decisions that aren’t the ones they would make if they had a free hand, but are the ones they make when they’re watching their backs. He traces this tendency to our society’s increasing need to find someone to blame if things go wrong – a trend which leads to a huge amount of wasted time, energy and talent. Not to mention deforestation due to the blizzard of paperwork it engenders.
The problem appears at all levels of organizations, and can be found in all sorts of areas where we’d rather it didn’t. The medical profession, for instance, is rife with it: unsurprisingly, when you consider how litigious we’ve become. On finance, Gigerenzer goes beyond the normal criticism of economics by arguing that even behavioral economics is on the wrong track. Standard economics equates uncertainty (which isn’t measurable) with risk (which is). Behavioral economics attempts to reduce uncertainty to risk – and inevitably fails.
Wait 500 Years
But this is not dogma: you can use complex algorithms if you have enough data. In one throwaway comment he noted that with enough data you can reduce stock market uncertainty to risk – but it will take 500 years of data to do so, and you need to hope that the stocks trading today are the same ones around in 2514. Or rather, your descendants do.
Although there are situations where lots of data is enough to help make decisions, for most of us stock market investors, most of the time, we’re better off adopting simple heuristics rather than complex algorithms. The famous example is that of Harry Markowitz (see: Harry Markowitz and the Efficient Frontier), who developed the mathematical models underpinning portfolio theory and then ignored them when he came to invest his own retirement savings, in favor of a simple 1/N algorithm where N was the number of investment options available to him.
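The 1/N heuristic is about as simple as an allocation rule gets: split each contribution equally across whatever options you have. A minimal sketch, with hypothetical fund names standing in for real investment options:

```python
def one_over_n(contribution, options):
    """Allocate a contribution equally across N investment options."""
    n = len(options)
    return {fund: contribution / n for fund in options}

# Hypothetical example: three options, so each receives 1/3 of the money.
allocation = one_over_n(12000, ["stock fund", "bond fund", "intl fund"])
print(allocation)  # each fund gets 4000.0
```

No estimated means, no covariance matrix, no estimation error – which, as the study below shows, is precisely why it is so hard to beat out of sample.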
Naïve Asset Allocation
The 500 year throwaway comment comes from a study by Victor DeMiguel, Lorenzo Garlappi and Raman Uppal – in Optimal Versus Naïve Diversification: How Inefficient is the 1/N Portfolio Strategy? they show that:
“Based on parameters calibrated to the US equity market, our analytical results and simulations show that the estimation window needed for the sample-based mean-variance strategy and its extensions to outperform the 1/N benchmark is around 3000 months for a portfolio with 25 assets and about 6000 months for a portfolio with 50 assets. This suggests that there are still many “miles to go” before the gains promised by optimal portfolio choice can actually be realized out of sample.”
The researchers looked at 14 asset allocation models, but none of them could beat the simple heuristic. If we had enough data to plug into these models then we might well see a different result – but we don’t, and we never will. Without big data for the algorithms to crunch, the uncertainty can’t be reduced to measurable risk, so the models go wrong, much like the exchange rate estimates we looked at earlier.
Gigerenzer is a hero. He’s out there now trying to change the way health systems are managed and to get risk savviness onto school curricula. He believes, passionately, that without the basic skills to understand risk in the modern world we’re at the mercy of anyone who wants to manipulate us.
In the end this is each individual’s responsibility. Otherwise we’ll be nudged hither and thither and made to think fast or slow to the rhythm of other people’s priorities. Read the book. Go see him if you get the chance. Take control, because if you don’t someone else will.