TL;DR version = this will be a lot of words about garbage in, garbage out!

Key Highlights
– The fundamental flaw in quant models is that they are backward looking and assume the past informs the future.
– Models that only work in “normal” times but fail in the tails are dangerous.
– A better model would try to assess when the past is reflective and when it’s a poor judge of the future.
– Fundamental investors end up taken hostage by flawed risk models and can’t buy good companies at distressed prices.

How to Think

One of the seminal events in my life was taking Gene Fama’s class on efficient markets in business school. It was a classic swerve, right out of a movie plot. You thought you were signing up to learn about how markets worked, but that was really the sideline. The class taught you how to think.

Most of your time was spent dissecting academic papers to figure out whether the methodology made sense, whether the data used was legit or cherry-picked, whether the statistical results were really significant or not, etc.

It taught you not to believe everything you read or hear, just because it sounds impressive or comes from someone accomplished. If you want to know why I challenge management teams so much, that’s where the seed was planted. Blame Professor Fama! 🙂

Unfortunately, investment analysts love to perform simple analyses, extrapolate meaningless results, and tout them as if they are breathtaking discoveries. For all of you who have been on the other side of one of my complaints that your conclusions are spurious and unlikely to repeat because the data isn’t robust, blame Professor Fama for that too. And yes, I’m particularly thinking of all of you who tout the so-called mythical “hurricane trade” (maybe I’ll have more to say about that this summer).

Rise of the Quants

But now we live in a world of genius computer scientists who can find amazing relationships amongst seemingly random data and divine the future of stocks. Surely, they must be doing things the right way! Well, I’m sure some of them are, but I am doubtful that most do. There seems to be an awful lot of what we now call p-hacking or just plain bad methodology design.

In some sense, quants create their own success because if a strategy becomes popular and they throw a lot of money behind it, it is going to work. It’s not really honest poker to say you’ve found this great predictor of stock returns when you are manipulating the future success of the strategy by trading it in size and thus disturbing the natural supply-demand balance that previously existed.

The Importance of Stationary Data

Which brings us to today’s main topic: how do quants fool themselves?

The biggest reason is that they assume what statisticians call “stationarity”. In simple terms, stationarity means the statistical characteristics of the data set do not change over time. For example, if you have a normal distribution with a mean of 5 and a standard deviation of 1, and ten years from now the mean and SD of your data set are still 5 and 1, then your data is stationary over time. This is good: it makes the data easy to do statistical analysis on.

If your data set has a mean of 5 and SD of 1 today, but next month it has a mean of 8 and SD of 2, it is no longer stationary. If you then look for relationships across the full history of that data set, your answers become meaningless. Clearly, stock market returns are not stationary!
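To make that concrete, here is a minimal Python sketch (the series lengths, seed, and variable names are mine, purely for illustration) of the two cases. Notice how the full-sample statistics of the shifting series describe neither half of its own history:

```python
import numpy as np

rng = np.random.default_rng(42)

# Stationary series: mean 5, SD 1 the whole time.
stationary = rng.normal(loc=5, scale=1, size=240)

# Non-stationary series: the distribution shifts halfway through,
# from mean 5 / SD 1 to mean 8 / SD 2.
nonstationary = np.concatenate([
    rng.normal(loc=5, scale=1, size=120),
    rng.normal(loc=8, scale=2, size=120),
])

for name, series in [("stationary", stationary), ("shifting", nonstationary)]:
    first, second = series[:120], series[120:]
    print(f"{name:10s} full mean {series.mean():.2f} | "
          f"first half {first.mean():.2f} | second half {second.mean():.2f}")

# The shifting series' full-sample mean (~6.5) describes neither regime,
# so anything fit to the whole history is fit to a data set that never existed.
```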

Too much changes to say “of the last 20 times this happened, 14 times that happened next, so buy the stock the 21st time this happens”. Now, there are times when the relationship holds and you can rely on history, but there are many other times it does not and you lose a lot of money.
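As a toy illustration (the hit rates and seed below are invented), here is that “14 of the last 20” logic meeting a quiet regime shift:

```python
import numpy as np

rng = np.random.default_rng(0)

# Entirely made-up setup: a signal that paid off ~70% of the time in the
# old regime, after which the world changes and it pays off ~30% of the time.
old_regime = rng.random(20) < 0.70   # the 20 historical occurrences
new_regime = rng.random(20) < 0.30   # the next 20, after the shift

print(f"historical hit rate: {old_regime.mean():.0%}")  # looks like a great signal
print(f"realized hit rate:   {new_regime.mean():.0%}")  # the 21st time disappoints
```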

The Kraft Debacle

OK, enough textbook stuff, let’s get to the real world. How do historical relationships get abused by quant models (and, by extension, by the fundamental investors who use these models to shape their portfolio risk tolerance)?

Bloomberg made an interesting observation about the Kraft debacle (the cheese maker 🧀, not the football owner).

Smart-beta strategies have exploded over the last few years, with asset managers including BlackRock Inc., State Street Corp. and Invesco Ltd. all starting indexed funds that try to beat the market by leveraging quantitative analysis to construct their benchmarks. However, these tools tend to be backward looking — evaluating the dividends that a company has already paid, for example — rather than projecting the future. That leaves them vulnerable to an abrupt turn in a company’s fortunes.

Source: Bloomberg

There it is! I couldn’t have said it any better! The fundamental flaw in quant models is that they are backward looking and assume the past informs the future.

A Better Model

A better model would try to assess when the past is reflective and when it’s a poor judge of the future. Again, maybe some quant models do that but my impression is most do not.

How would one do that? One approach is to break the data set into periods where it is stationary and use a different model for each period. This is called asymmetric modeling, and it is actually what I used long ago for my college honors thesis (yes, it’s finally useful)!
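Here is a stripped-down sketch of that idea (simulated data, with the breakpoint assumed known, which a real model would have to detect for itself): fit one relationship per locally stationary period and compare it to the single pooled estimate.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated daily market and stock returns where the stock's true market
# sensitivity jumps partway through the sample.
market = rng.normal(0, 0.01, 500)
regime = (np.arange(500) > 350).astype(int)            # 0 = calm, 1 = sell-off
true_beta = np.where(regime == 0, 1.0, 2.5)
stock = true_beta * market + rng.normal(0, 0.005, 500)

# One beta fit to everything vs. one beta per locally stationary period.
pooled = np.polyfit(market, stock, 1)[0]
by_regime = {r: np.polyfit(market[regime == r], stock[regime == r], 1)[0]
             for r in (0, 1)}

print(f"pooled beta: {pooled:.2f}")  # a blend that is right in neither period
print("per-regime betas:", {r: round(b, 2) for r, b in by_regime.items()})
```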

Let’s do a simple example. Many models look at a stock’s beta to predict future movement in the stock relative to the market. In fact, there are so many model-driven traders that it has almost become a tautology that “high beta” stocks will sell off double the market on down days.

The fundamentals are almost irrelevant on those days and it has become very hard for high beta stocks to shake that tag even long after their fundamentals have changed dramatically. Life insurers are still high beta stocks. QED. It’s not 2008 anymore but it doesn’t matter.
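For reference, the beta these models lean on is typically just a trailing-window calculation; a minimal version looks something like the sketch below (the function is my own illustration, and window=252 is the common one-year convention, not gospel). Whatever the window saw is what the stock “is”:

```python
import numpy as np

def trailing_beta(stock_returns, market_returns, window=252):
    """Beta from only the last `window` observations:
    cov(stock, market) / var(market). Whatever happened inside that
    window defines the stock, no matter how much the business
    (or the world) has changed since."""
    s = np.asarray(stock_returns)[-window:]
    m = np.asarray(market_returns)[-window:]
    return np.cov(s, m)[0, 1] / np.var(m, ddof=1)
```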

Model Failure At Extremes

Models that only work in “normal” times but fail in the tails are dangerous. They create a false sense of comfort right before the fall. Returning to the beta example, look at a period like last December, when high beta insurance stocks were so oversold that they had discounted something like the 2011 Euro crisis, if not a 2008-level recession. Yet they still sold off every day because the quant models said the market is down, go sell high beta stocks.

Furthermore, the risk models that fundamental firms use would discourage you from buying beaten-down stocks, since you would set off alarms because your beta (and probably momentum and volatility) exposures would end up too high. Essentially, fundamental investors end up taken hostage by flawed risk models and can’t buy good companies at distressed prices.
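In caricature, the hostage-taking is as mundane as the hypothetical exposure check below (the limit and names are mine, not any firm’s actual system): the model vetoes the buy precisely when the trailing beta is most likely to be stale.

```python
BETA_LIMIT = 1.3  # hypothetical firm-wide portfolio beta cap

def can_add(current_portfolio_beta, trade_beta_contribution):
    """Reject any trade that pushes portfolio beta over the cap,
    including buying a washed-out "high beta" stock at the lows,
    which is exactly when the trailing beta is most likely stale."""
    return current_portfolio_beta + trade_beta_contribution <= BETA_LIMIT

print(can_add(1.2, 0.25))  # False: the model says no, regardless of price
```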

Stationarity as the Solution

The way to solve this is to realize that the beta at market lows is probably not reflective of the future! It is stale. Instead, we should switch to a model that looks at what the beta has been in other market slump environments in which the stocks were at lows on all fundamental measures. In other words, we should use a model that is stationary within stress environments.

One might say that a 2 beta stock is really an amalgam: 2 beta most of the time, 3 beta at the beginning of a market fall (or rise), and 0.5 beta after an extended market fall. In other words, beta is not a single constant; it is asymmetric.
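Here is that amalgam sketched as a state-dependent beta (the 2 / 3 / 0.5 values come from the example above; the drawdown thresholds are placeholders I made up, which a real model would fit to past stress environments):

```python
def asymmetric_beta(drawdown_from_high, days_in_drawdown,
                    normal_beta=2.0, panic_beta=3.0, washed_out_beta=0.5):
    """State-dependent beta per the 2 / 3 / 0.5 amalgam above. The
    drawdown thresholds are placeholders; a real model would fit them
    to past stress environments."""
    if drawdown_from_high < 0.05:
        return normal_beta        # calm markets: the familiar beta
    if days_in_drawdown < 20:
        return panic_beta         # start of a fall: selling begets selling
    return washed_out_beta        # extended fall: sellers are exhausted

print(asymmetric_beta(0.02, 0))    # 2.0
print(asymmetric_beta(0.15, 5))    # 3.0
print(asymmetric_beta(0.25, 60))   # 0.5
```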

This would lead you to buy “high beta” stocks aggressively after they have reached fundamental support levels and your firm’s risk model would be supportive of this approach because it would realize the beta in the commercial risk model is stale. Again, the past doesn’t necessarily inform the future.

A good model should predict the future rather than just report the past. If you notice what the market has done since Christmas, and particularly high beta cheap stocks, I think you would agree my construct is a pretty good basis for a model.

Stating the Obvious?

Hopefully, after reading this, your reaction is “yeah, that kinda makes sense” or “that’s obvious” or even “wait, doesn’t every investment firm do that already?”

The answer is no, investment firms do not do this. When I have suggested these modifications in the past, the general response is “yeah, that makes sense, but…”. It’s too much work, or it’s not practical, or it’s too big a change, or how would we come up with the different betas, etc.

My response is that producing excess returns isn’t easy. Doing things the same way as everyone else doesn’t tend to produce outperformance. You need to evolve. One way to evolve is to use asymmetric factor loadings in your risk models. If anyone is interested in trying this approach and wants to talk about it more, go to the contact us page and send me an e-mail.

Or tell me why this is ridiculous in the comments below…

3 thoughts on “The Fundamental Flaw in Quant Models”

  1. One of the most popular sorts of articles targeting budding data scientists is “Do I really need to know math to be a data scientist?” There is this strong desire for obviously mathematical problems to somehow be rendered “intuitive.” It’s a problem. So yes, most of Wall Street (or indeed most forecasters) don’t use appropriate models.

    Also, using appropriate models is hard! Measuring nonstationarity (how the parameters of the assumed underlying probability distribution are changing over time) is difficult, because you somehow have to differentiate that parameter drift from noise. The non-cat accident year loss ratio is up 1 point over last year: noise or drift? In the equity world, downward corrections are usually modeled as jump processes, which means the jump itself is almost impossible to forecast; you can only estimate the probability that a jump will occur. But that’s the classic growth-investor trap: deciding when to sell, which recent research suggests everyone is really horrible at doing.

    So you’re absolutely correct, and I’m not holding my breath for much to change.

    1. Very insightful as always, Todd! And yes, the issue of whether an observed loss ratio change is drift or noise is a great example of the challenge. Unfortunately, it seems often the solution is “pick the next year the same as last year” so maybe you don’t have to be a mathematician to be an actuary??? 🙂

  2. “so maybe you don’t have to be a mathematician to be an actuary?”

    This used to be the case. Or at least, this was the emphasis. Actuaries were encouraged to be bridges between more and less technical groups in management. Over the last 10 years, the actuarial syllabus has greatly expanded its modelling requirements toward skills comparable to what data scientists are doing, but many fear it is too little, too late. And of course, we all know about re/insurers who simply hire math/stat people in lieu of actuaries.

    “it seems often the solution is ‘pick the next year the same as last year’”

    No joke: one of the best US hurricane forecasts is the last 5-year average. Regression to the mean is a thing. And of course, this is one of the simplest ways to balance between drift and noise.
