
Wednesday 17 February 2016


Advisors' Suck

There’s a new class of financial intermediary in town: the robo-advisor. As I understand it, the human advisor’s financial knowledge is sucked out of their brains and uploaded into a machine. So, that shouldn’t take very long then.

But, humans being the clever creatures that we are, simply replacing the human advisor with a machine won’t save us from our own stupidity. We’re far too clever for that.


The problems with real-world advisors have been extensively documented. On one hand, the idea that advisors can cover their arses by prior disclosure of conflicts of interest has actually been shown to increase bias – the problem becomes the client’s, not the advisor's, and the advisor is now free to be as biased as they like (see: Disclosure Won’t Stop a Conflicted Advisor).

On the other hand, increased fiduciary rules have increasingly hampered the ability of good advisors to give good advice. If you’re constrained in what you can advise by Modern Portfolio Theory then you’re going to find it tricky to escape the evil clutches of the Efficient Market Hypothesis (see: Minding the Chastity Belts: Fiduciary Duties and 900 Pound Lemmings). And it’s expensive for clients.

Where Are the Robots' Yachts?

In the middle of this mayhem it’s not surprising that innovators are seeing the opportunity to introduce low-cost automated advisory services, the so-called robo-advisors. These are algorithm-based software programs which can create portfolio allocations for clients without any of those rule-constrained, biased human advisors getting their pesky fingers in the pie. And they’re low cost as well: robots don’t (yet) need yachts.

As we saw in Can Software Beat Penny-Flippers? the idea that automated advisors can outperform humans can be traced back to the research of Paul Meehl, who showed in Clinical vs. Statistical Prediction that statistical analysis was at least as good at clinical diagnosis as a human being, and usually better. The idea of putting ourselves in the hands of a bunch of dumb algorithms may be a bit scary, but we do it every time we step on a plane: it’s all context dependent.


Now, most robo-advisors are going to be using a variation of the aforementioned Modern Portfolio Theory to construct portfolios based on the client’s risk tolerance. And there are two problems with that ...

Modern Portfolio Theory, the demon-spawn child of the Efficient Markets Hypothesis, was Harry Markowitz’s solution to the problem of how to allocate assets to minimize risk – which in this case specifically means volatility. As it happens a low volatility portfolio appears to offer a way of outperforming the market, so this is no bad thing. But if the whole world starts investing using robo-advisors we’ll doubtless run into the law of unintended consequences.
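To make the mechanics concrete, here’s a minimal sketch of the mean-variance step at the heart of such a robo-advisor – the asset classes, volatilities and correlations below are invented for illustration, and real services use far more elaborate inputs:

```python
import numpy as np

# Hypothetical annualized volatilities and correlations for three asset
# classes (stocks, bonds, gold) -- illustrative numbers, not market data.
vols = np.array([0.18, 0.06, 0.15])
corr = np.array([[1.0, 0.2, 0.1],
                 [0.2, 1.0, 0.0],
                 [0.1, 0.0, 1.0]])
cov = np.outer(vols, vols) * corr  # covariance matrix

# Closed-form minimum-variance weights: w = (Cov^-1 * 1) / (1' * Cov^-1 * 1)
ones = np.ones(len(vols))
inv = np.linalg.solve(cov, ones)
weights = inv / inv.sum()

# Portfolio volatility -- "risk" in the MPT sense
port_vol = np.sqrt(weights @ cov @ weights)
print("weights:", weights.round(3), "portfolio vol:", round(port_vol, 4))
```

The diversified portfolio’s volatility comes out below that of the least volatile single asset, which is the whole point of the exercise – and all a client has to supply is their risk tolerance.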

The other problem is more interesting. If I, as a client, want to get my robo-advisor to create a portfolio that matches my risk tolerance I have to tell it what my risk tolerance is. And, unfortunately, people are generally pretty bad at knowing anything about themselves. Even worse, they refuse to accept this.

Decoy Effects

Perhaps the most spectacular way of demonstrating this is through anchoring, and specifically something called the decoy effect. Dan Ariely has written about and demonstrated this several times. For instance, consider the following magazine subscription options:
  • Web edition only for $59
  • Print edition only for $125
  • Web and Print edition for $125
So what’s the point of the print-only option? It’s a decoy – when presented with all three options the majority of people go for the Web and Print edition, but when the print-only option is removed the majority go for the Web edition only.

What’s happening is anchoring – we don’t compare things to what we need, but to other easily accessed reference points. So what do you think we anchor to when we’re thinking about our tolerance for risk in our robo-advisor portfolio?

Risky Preferences

Well, generally, it’s current market conditions and recent investing experience. If our recent experiences have been positive, with market and portfolio gains, we’ll be more bullish and less risk averse. If the opposite is true then we’ll be more cautious and more risk averse.

Typically a robo-advisor creating a portfolio for a bullish investor will create a riskier one and, because under Modern Portfolio Theory risk is volatility, this means the investor will see more fluctuation in the value of their portfolio than their risk-averse equivalent. So when markets get jittery they’ll probably see more variation in capital value – and the client is more likely to ditch their portfolio just at the point of maximum fear.
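As a toy illustration of that point (the scoring rule and the volatility figure are assumptions, not any real robo-advisor’s method), a higher risk-tolerance score maps to a bigger equity weight and hence, under MPT's definition of risk, to proportionally more portfolio volatility:

```python
# Hypothetical mapping from a 1-10 risk-tolerance score to an equity/cash
# split, treating cash as riskless -- purely illustrative numbers.
EQUITY_VOL = 0.18  # assumed annualized volatility of the equity fund

def allocate(risk_score):
    equity_weight = risk_score / 10              # score 1 -> 10% equities, 10 -> 100%
    portfolio_vol = equity_weight * EQUITY_VOL   # vol scales with equity weight
    return equity_weight, portfolio_vol

for score in (2, 5, 9):
    w, v = allocate(score)
    print(f"score {score}: {w:.0%} equities, {v:.1%} portfolio volatility")
```

The bullish client who answered the questionnaire with a 9 ends up with roughly four times the volatility of the cautious client who answered with a 2 – and so four times the stomach-churn when markets wobble.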

The reverse is true for cautious investors – they’ll see lagging returns in bull markets and will doubtless decide they want a piece of the action, just at the point where markets are about to tumble.  The problem is not the robo-advisor – it’s the human client.

Prospect Theories

The underlying issue is that risk tolerance is not a fixed thing. This is a major problem for standard economics, which assumes that if I go to a restaurant and order the Chicken Royale today then when I go back next week I’ll do the same. Unfortunately, not only are the preferences of real people not fixed, they’re also subject to biases: as Tversky and Kahneman showed, we tend to underweight high-probability events (e.g. that frothy markets will eventually correct) and overweight low-probability ones (e.g. that this month’s 20% drop will be repeated next month).

In fact, characterizing robo-advisors as all the same is unfair. There is a wide variety of algorithms in use, resulting in an equally wide variety of performance. The question then, of course, is how the client chooses between a bunch of robots that look superficially the same but are actually very different when you get down to the bits and bytes.

Ethical Robots

The answer, presumably, is the same way that clients always choose financial products – by looking at past performance. No doubt someone will shortly come up with a product that automates the choice of robo-advisor based on the client’s risk tolerance. There is no end to the possibilities for new financial services: presumably one day the client itself may be a robot.

Ultimately, for robo-advisors to really work you'd have to take the human out of the equation. Work on this is already underway. It's possible, for instance, to analyze people's preferences using their social media profiles - and in future we could feed these preferences into an algorithm that adjusts itself to the specific circumstances of the client:
"This is how, perhaps, the robo-advisor should consider transforming itself - from the initial set of rules that respond to a limited number of questions, to actually interacting with human handlers to alert a subset of clients that the market has fallen 5 percent and that they may want to move more to cash, or that a particular sector has hit a 52 week low, or other variables."
(The Ethics of Intelligent Machines, Vincent Paglioni)

Frankly I'm not sure that sounds very wise. Letting robots loose with our money is one thing, getting them to highlight moments of fear and uncertainty to investors is entirely another.

Fish or Steak?

But all things being equal robo-advisors are a good thing: low-cost access to the type of portfolio allocation that even Harry Markowitz, the inventor of Portfolio Theory, eschewed because it was too difficult to implement. It’s still early days for the industry; the monies under robo-management are still tiny compared to global funds.

But thinking that this is going to solve the underlying issue is stupid, because the underlying issue is that people will do strange things when placed under emotional load.  All things being equal low-costs are good, but the idea that robo-advisors will eliminate behavioral bias from investing is a non-starter. Somewhere you still have a human being dithering over whether to have the fish or the steak.



1 comment:

  1. Sounds a bit harsh. If robo-advised accounts outperform comparable artisan-advised accounts in net terms, that will be progress, and that objective is hard to cock up (as you greatly reduce fees and remove the artisan's own behavioural issues). Won't cure cancer, sure, but progress is incremental.