It’s been a rough start to the decade for quantitative investing shops. High-profile firms like AQR and WorldQuant have made significant layoffs this month.

I suspect the problem with quant performance is that AUM has scaled up faster than the machines can deploy it, which means marginal dollars are invested in low-return trades. In fact, a recent academic study showed that a significant portion of backtested (aka hypothetical) returns from quant models was the result of strategies that don’t scale (e.g. microcaps).

However, the topic of the day isn’t why quant models are underperforming. It’s how they can go horribly wrong!

How AI Learns

I’m going to do the really basic, 50,000-foot view version here. Everyone loves cat videos. How does YouTube know, when you watch a cat video, that it was a video about a cat, so that it can show you another cat video next?

This is what machine learning does. It essentially looks at millions of pictures, some with cats and some without. It eventually becomes pretty good at recognizing whether a new picture looks more like the ones marked “cat” or the ones marked “not cat” in its database. It can then be programmed to show you all the cat pictures and videos you demand.
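
If you want to see what that looks like in code, here is a bare-bones sketch (this is obviously not how YouTube actually does it; real systems use deep neural networks trained on real images, and the “pictures” below are random noise just to keep the example self-contained and runnable):

```python
# Toy sketch of supervised classification: learn "cat" vs. "not cat" from
# labeled examples, then score a brand-new picture.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Pretend each picture is a flattened 32x32 grayscale image (1,024 pixels).
n_pictures = 1_000
pixels = rng.random((n_pictures, 32 * 32))
labels = rng.integers(0, 2, size=n_pictures)  # 1 = "cat", 0 = "not cat"

# Fit a classifier on the labeled pictures.
model = LogisticRegression(max_iter=1_000)
model.fit(pixels, labels)

# Score a new picture: the probability it "looks like" the cat examples.
new_picture = rng.random((1, 32 * 32))
print("P(cat) =", model.predict_proba(new_picture)[0, 1])
```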

What Does This Have To Do With Investing?

Machine learning is essentially what quant investors use, but in a much more rigorous way. Rather than look for cats, machines are shown investments that went up and the characteristics of those investments (fundamentals, technicals, factors, etc.).

Then the model looks for similarities in those characteristics and creates a strategy. Once it finds a successful strategy, it backtests it against historical data, and, if it continues to produce profits, it gets deployed with real money.
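
To make that pipeline concrete, here is a toy sketch of the “find a pattern, then backtest it” loop. Everything in it is made up for illustration: the factor names, the model choice, and the stock-month data are synthetic, and a real shop’s process is far more elaborate.

```python
# Toy sketch: let a model hunt for patterns in stock characteristics, then
# paper-trade ("backtest") the resulting signal on a later, held-out period.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
n = 5_000  # synthetic stock-month observations

data = pd.DataFrame({
    "value":    rng.normal(size=n),  # fundamentals
    "momentum": rng.normal(size=n),  # technicals
    "quality":  rng.normal(size=n),  # factors
})
# Label: did the stock beat the market the following month? (random here)
data["beat_market"] = (rng.random(n) < 0.5).astype(int)

# "Training" period: the machine looks for similarities among the winners.
train, test = data.iloc[: n // 2], data.iloc[n // 2:]
features = ["value", "momentum", "quality"]
model = GradientBoostingClassifier().fit(train[features], train["beat_market"])

# "Backtest": apply the learned strategy to the later period and score it.
signal = model.predict(test[features])
hit_rate = (signal == test["beat_market"]).mean()
print(f"Backtest hit rate: {hit_rate:.1%}")  # ~50% on pure noise, i.e. no edge
```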

As an aside, not all quant strategies work this way. Certainly, there are firms where humans come up with ideas for market-beating strategies and then have the machines run through terabytes of data to test whether they actually work. That approach can work too, but it is going to miss strategies that fallible humans may never have considered.

Spirit vs. Letter of the Law

The problem with blindly trusting machine learning is…you’re blindly trusting machine learning. Some quant shops don’t even know what strategies the machines are using. All they care about is if the machine is generating trading profits. Who cares why!!!

There is no value judgment being made about whether the strategy is “rational” or “irrational”. It is only judged by its efficacy in meeting the objective. Do you remember the Microsoft AI that learned to be racist?

If a quant program finds that stocks tend to go up the day before an earnings miss, it will buy stocks the day before the miss. It may or may not sell the stock at the end of the day, which means it could then suffer the larger losses on the poor result the next day. Oops!

It may also “learn” that stocks outperform if the CEO is precisely 48 years old, even though that seems like an irrelevant reason to invest. Or that they underperform if the company’s logo is yellow or the company’s name has seven letters. It doesn’t care about the why.
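
That is less far-fetched than it sounds. Search enough irrelevant characteristics and some of them will look predictive in the historical data purely by chance, as this toy sketch shows (every “trait” below is random noise):

```python
# Toy sketch of data mining: test 1,000 junk characteristics against past
# returns and the best one will look like a "signal" by luck alone.
import numpy as np

rng = np.random.default_rng(3)

returns = rng.normal(size=500)          # 500 stocks' past returns (pure noise)
traits = rng.normal(size=(500, 1_000))  # 1,000 totally irrelevant characteristics

# Correlation of each junk trait with returns.
corrs = np.array([np.corrcoef(traits[:, j], returns)[0, 1] for j in range(1_000)])
print(f"Best-looking 'signal' among 1,000 junk traits: |corr| = {abs(corrs).max():.2f}")
# Typically around 0.15 here, which a naive pattern search would happily
# call a strategy, even though it cannot possibly repeat out of sample.
```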

If you think I am being flip here, there are plenty of examples of machine learning doing really counterintuitive things because the objective is not precise enough.

For example, in a game where the objective is “don’t lose on the 5th level”, machines may learn to kill themselves on level four or even turn off the game rather than risk a loss at level five. They technically satisfy their mandate, even if not in spirit.
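
Written down as code, the loophole is easy to see. The scoring function below is made up, but it shows how literally a machine reads its mandate:

```python
# Toy sketch of a sloppy objective: the only thing penalized is "losing on
# level 5", so quitting at level 4 scores just as well as beating the game.
def score(playthrough):
    """Return -1 only if the player lost on level 5; otherwise 0."""
    return -1 if "lose_level_5" in playthrough else 0

honest_run = ["beat_level_1", "beat_level_2", "beat_level_3",
              "beat_level_4", "lose_level_5"]
loophole_run = ["beat_level_1", "beat_level_2", "beat_level_3",
                "quit_on_level_4"]

print(score(honest_run))    # -1: played it straight, risked level 5, lost
print(score(loophole_run))  #  0: never reached level 5, mandate "satisfied"
```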

Because machine learning finds patterns without our help, we have no idea if the “successful” strategy it found is a sound one we want to emulate or a spurious one that should be discarded. It is too complex for us to ascertain. This leads to the scary part.

The Quandary

Machines are amoral by definition. They don’t have feelings. They don’t care whether they make money in a way people view as “smart” or “insane”. They don’t care if they do it legally or illegally (hopefully the coders put in safeguards to prevent illegal tactics). They just “care” about meeting the objective.

If machines conclude the best way to make money is to create bubbles they can ride up, that is what they will do. Every day that propagating a bubble is rewarded with higher prices, it reinforces to the AI that buying into a bubble is good and should be repeated.

Unless the machine can teach itself that perpetuating bubbles only makes sense up to a point and can also teach itself when the bubble is about to end – which is highly unlikely – it will continue to repeat the same bubble-promoting investments.
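
For the mechanically minded, here is a toy sketch of that feedback loop: a simple incremental value update on made-up returns, not anyone’s actual trading system.

```python
# Toy sketch of bubble reinforcement: each day that "buy what just went up"
# pays off, the model nudges its learned value of the trade higher.
import numpy as np

rng = np.random.default_rng(2)

value_estimate = 0.0  # the model's learned value of chasing the bubble
learning_rate = 0.1

for day in range(250):                      # one trading year of a bubbly market
    daily_return = rng.normal(0.004, 0.01)  # drifting upward... for now
    reward = daily_return                   # the trade is scored purely on P&L
    # Standard incremental update: the estimate drifts toward recent rewards.
    value_estimate += learning_rate * (reward - value_estimate)

print(f"Learned value of bubble-chasing after a year: {value_estimate:+.4f}")
# While the market keeps drifting up, the estimate settles near that drift and
# the model "concludes" the trade should be repeated. Nothing in the update
# rule knows that the drift will eventually flip.
```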

Yes, this is the part where I remind you how strongly tech stocks have performed in the last year and that Apple now has a bigger market cap than the entire S&P energy sector. What do you think the computers are learning is a winning approach???

The End Game: Calamity?

So how does this play out? Something will change exogenously (a recession, a new technology, the election, etc.) that will make buying winners unappealing to the model (this isn’t just about tech; the market as a whole has divided into haves and have-nots).

You might say, “What’s the big deal? We’ve seen bubbles blow up before.” True, but at least we know how humans respond to the end of a bubble. We have no idea what the machines will do.

First, will they inflate the bubble beyond past tops even as active investors start to sell out of it? Once it pops, will they fail to change behavior fast enough and try to blow it back up?

Or worse, when they see the humans begin to sell, do they correctly see how bad the downside will be and accelerate the path downwards (typical bubbles tend to deflate in phases rather than all at once, which at least gives policymakers some room to try to address them)? In this scenario, we could see an immediate Black Monday-like drop, or even worse.

Remember, after Black Monday, there were humans ready to step back into the market and bargain hunt, which is part of why that crash proved so fleeting. The machines will develop new strategies once the old ones break. They may decide to bargain hunt, or they may decide to sell all their inventory and then short everything until it’s down 90%.

A “winning” approach may be to lever up the short side and keep rolling the profits into bigger bets to force longs to capitulate due to unfavorable marks and margin calls. This isn’t terribly different from what happened on a smaller scale in structured credit during the crisis. The machines could apply this strategy, destroy all other market participants, and win the whole pot.

The country would be in chaos, but remember, the machines don’t care about that. That’s not in their objective function. Their objective is to maximize returns. Collateral damage to the greater good is not factored in.

How Do We Prevent This?

Quant funds are essentially unregulated. Sure, they have the same obligations as other investment firms, e.g. don’t insider trade, don’t run a Ponzi scheme, etc. However, there is no regulation aimed at the systemic risk they could pose.

The SEC should require quantitative investing products to report the objective functions they are using so they can be vetted for potential unintended consequences.

While the investment strategies themselves are considered IP and shouldn’t be disclosed, a routine audit should be done annually by an outside firm that can examine a sample of strategies and verify that they do not pose market risk in a tail event.

My prediction is these regulations will all become a reality someday. However, that day unfortunately will be after a market panic, not before, because #humannature.