(Editor’s Note: The timing of this note is not ideal for reasons I explain in the postscript. Also, this is much longer than usual given the depth of the topic, which is part of the reason for the delay.)

Let me start with a disclaimer. This is not a prediction that AI is going to be a dud. Rather, it is a counterpoint to all the bullish arguments more commonly presented.

Do I believe this is a bubble and will end with pain? Yes, but that doesn’t mean AI won’t ultimately justify the hype after the crash. However, I suspect the path to getting there will be tougher than people anticipate.

Have some of my arguments been addressed by others? Yes, many of them actually, but I think it is helpful to put them together in one place. To some extent, this is intended as a reference document to look back on down the road.

What Is The Use Case?

I’ll start with perhaps the most controversial one. Personally, I find the common AI offerings (ChatGPT, Gemini search answers) underwhelming. I have learned never to trust the first answer, as it’s often wrong.

If you don’t believe me, ask it something you know the answer to and see if it tells you what you would expect. Most of us trust the answers we get because a) they sound credible and b) we don’t know the answer ourselves, so we don’t question it. That doesn’t mean the answer is correct.

My current practice is to ask three times on three different devices. If the answer is consistent, then I’ll buy it, but you’d be surprised how often you get three conflicting answers.

Thus, I don’t see meaningful value to the average consumer (emphasis on average). No, making fake videos or cheating on your homework isn’t a compelling use case. While the accuracy of information provided should improve over time, all it ends up being is a better search engine. Yawn.

I know people will say they outsource tasks to AI, but until you can show me it will be 100% accurate, there is no way I would pay to outsource something that may not be done correctly.

Would you trust a ten-year-old to handle an important financial transaction? Then why are you trusting AI? The ten-year-old would probably be more competent.

Now, do I believe there is a big use case in the corporate market? Yes. AI can clearly be a productivity driver when it is asked to collate information rather than “think,” and the commercial offerings are maybe more like a 15-year-old than a 10-year-old: useful, but unpredictable.

But how big a market is that? It’s large, I get it, potentially even Industrial Revolution large, but so was Microsoft Office in its day. It’s not novel to introduce a new business productivity tool.

But if you think AI agents are going to replace human agents at selling insurance, good luck with that! It is too mistake-prone to be trusted with something that complex (to be fair, many trained human agents are mistake-prone as well). We have already seen examples of companies getting sued after using AI for “mundane” tasks it couldn’t get right. So sure, it will take your order at the drive-thru, but that isn’t life changing.

I have experimented with using it for stock research and, while the output seems impressive if you didn’t know the first thing about analyzing companies, the quality is at the level of the most pedestrian Wall Street analyst. In other words, it is equivalent to the analysts I would never pay because I thought their work added no value.

Who Will Pay?

Expanding on the point above, what is the revenue model? It’s unclear to me why consumers will pay, and it’s harder to put ads into a chat interface than into a search engine.

There is, as noted, a large opportunity to charge corporates for productivity software, but companies typically pay more for things that grow sales than for things that cut costs.

A big part of the bull case is an assumption that corporates will spend enormous amounts of money on AI products. But will they? It’s one thing to say companies will spend more on AI than they do on CRM software, but the size of the AI buildout suggests companies will spend unprecedented amounts of their tech budget on AI.

What if that doesn’t happen? What if it just displaces spending on other productivity tools like Salesforce or Microsoft?

Also, if the willingness to pay is tied to the idea of replacing people with machines, this overlooks the fact that those cost savings will, as always, get competed away and passed along to the customer.

Industrial revolutions are massively deflationary, so companies may want to be careful what they wish for. Their revenues may fall faster than their costs!

If AI really could produce things at near zero cost, do you know what happens to prices? They also approach zero. In fact, a sentient AI CFO would surely choose to compete for market share on price rather than try to maintain high margins knowing competitors would cut price to take share.

To give a simple example, if AI could make all LAE (loss adjustment expense) costs go to 0, what would happen? Insurance prices would fall by roughly the amount currently spent on LAE.
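For illustration only (the exact figure varies by line of business, and the 10% is my assumption, not a quoted statistic): if LAE runs on the order of 10% of premium, a $1,000 policy would eventually reprice toward roughly $900 once competition passes the savings through, leaving the carrier’s margin about where it started.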

So how much are companies going to be willing to pay for that innovation? Probably not as much as needed to justify the buildout costs.

I’d also offer a reminder that just because a lot of picks and shovels are being sold doesn’t mean anyone will find gold. Nvidia is selling picks and shovels and is making a lot of money at it.

But don’t confuse the idea that a Gold Rush is taking place with the notion that miners will get rich. That is still a huge unknown. Usually, the miners go home with empty pockets.

All we know at the moment is a lot of people are headed to the mines. The market has assumed they will hit the mother lode. What if they don’t?

The Human Capital Challenge

Let’s say the bulls are correct and millions of jobs can be replaced by AI. OK, that can support the corporate spending needed to justify the AI revenue models, but it brings its own problems.

First, let me state I am not one of those who believes technology lowers living standards. Every major technology innovation has eliminated some jobs but replaced them with better jobs. So will AI.

However, the transition periods can be ugly with some groups ending up left behind. If AI is going to displace large segments of the white collar economy (and that’s a big if, as similar predictions were made about mainframes, Excel, etc.), that will have severe economic repercussions.

It won’t be small factory towns that suffer this time. It will be metro areas and their suburbs. It has negative implications for housing prices, for commercial real estate, for all sorts of retail brands who depend on those consumers, for the large tech companies who make their money selling ads for products people can no longer afford, etc.

So even if the bulls are correct about AI’s benefits, they may find the labor market disruption causes a financial panic as people and businesses default on loans, stock valuations collapse on lower retail sales, etc.

The Natural Resource Challenge

The big difference between the AI buildout and the internet buildout is that the internet glut mainly required glass (for fiber), while AI requires immense amounts of electricity and water (not to mention copper and other raw materials).

Power and water are scarce resources. Our grid is already unreliable and large parts of the western US don’t have enough water as is.

Scarce resources should be allocated to their most productive use. Is me getting a more robust answer to a search more productive than keeping the lights on? Is a business replacing a worker with a bot more productive than using water to grow food?

There will be a huge backlash as AI consumes ever more of these resources. It will only get worse when people see their electric and water bills skyrocket because AI users are driving up prices.

This will become a huge public policy issue. It’s an easy political ad to show the poor family getting their electricity cut because they couldn’t afford the bill after AI drove prices up so much.

And I didn’t even touch on the environmental lobby. It is inevitable they will mobilize against AI accelerating the warming of the planet.

Political Risk

Which brings me to the broader political impact. There are so many reasons both parties will be anti-AI. I already hit on job loss, the environment, electric bills, economic fallout, etc. If that’s not enough, you can add increased cyber crime, a further explosion of fake news, massive deflation (just as bad as rampant inflation), anticapitalist movements in response to increased tech profits, etc.

While there are few issues that can bring bipartisan unity, I suspect politicians of both sides will come together to try to limit AI’s growth. There will be plenty of fear stoking about the coming Terminator era destroying humanity.

It seems like a fairly easy prediction that there will be a new “trustbuster” era trying to limit corporate power and provide more benefits to workers.

The CAPX Conundrum

Even if the revenue projections do come through, can AI companies make money at the rate they’re spending money? Investors have, rightfully, developed angst about the unprecedented levels of spending to build out data centers and buy chips. Can the hyperscalers ever earn an acceptable return on this investment?

Let’s do some very rough math. To earn a 15% IRR on an earnings stream growing 10%/yr, you need year one earnings of about 7.5% of the investment. Thus, $1T of AI CAPX needs to earn (normalized) year one profits of $75B/yr. That puts it on the same level as tech giants like Apple, Google, and NVIDIA.
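For those who want to check the arithmetic, here is a minimal sketch. The specific assumptions (a roughly 25-year earning life and no terminal value) are mine, chosen because they land near the ~7.5% figure above; a different model could be parameterized differently.

```python
# Back-of-envelope check of the CAPX math above (illustrative assumptions only).
capex = 1_000            # $B of AI CAPX
growth = 0.10            # earnings growth per year
target_irr = 0.15        # required IRR
years = 25               # assumed earning life, no terminal value

# Present value of $1 of year-one earnings growing at `growth`, discounted at `target_irr`
pv_per_dollar = sum((1 + growth) ** (t - 1) / (1 + target_irr) ** t
                    for t in range(1, years + 1))

required_year_one = capex / pv_per_dollar
print(f"Required year-one profit: ~${required_year_one:.0f}B "
      f"(~{required_year_one / capex:.1%} of the $1T spend)")
# -> roughly $75B/yr, i.e. ~7.5%
```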

Of course, the expectation is that it will take several trillion of CAPX for the full AI buildout, so now we’re looking for unprecedented levels of future profitability.

But, even if you think that’s possible, it overlooks some serious issues. All this CAPX is only to produce the current generation of AI models on current generation chips.

Why is this a problem? All new technology hardware has steeply declining cost curves due to the high obsolescence risk from future generations. This means not only will NVIDIA’s existing chips go down in price as new chips are designed, but the cost of producing AI tokens will decline as the models improve.

Thus, the recoverability of the current CAPX spend is highly suspect. A data center underwritten on today’s expectations of future token prices is likely resting on wildly optimistic assumptions. Given the debt funding of much of the data center buildout, this could be a big problem (see below).

Similarly, buyers of NVIDIA chips are assuming their useful life will extend out for 5+ years but, in the real world, as soon as there is a meaningfully better new NVIDIA chip, most of the current inventory will be junked (or make its way to China perhaps).

The paradox is that the more AI improves, the more likely it is that the current spending is wasteful.

In other words, the mistakes of the internet bubble are being repeated. Buyers are assuming unrealistic useful lives and future prices because they underestimate the future innovation that will make their current inventory obsolete and require a whole new round of mega spending.

There’s a reason hardly anyone makes semiconductors in the US anymore. It’s because they realized a foundry is a bad business model. The same will prove true of data centers.

From Asset Light to Asset Heavy

There is a reason all the big tech companies are so valuable – they have high margins with low capital requirements. Even those that make hardware find a way to lay the costs off on others (Apple’s phones are financed by the cellphone carriers, Nvidia only designs chips while letting others manufacture them, etc.).

AI is blowing up that business model. It has massive capital requirements, and all the big tech companies are using the preponderance of their cash flow to develop it. It’s like if every MGA (managing general agent) decided to go full balance sheet and retain all its risk.

Even if they achieve the revenues they hope for, their valuations will decline over time as their free cash flow declines.

The way to win is to develop low cost software like search or social media, not to be the cable company tearing up neighborhoods to lay pipes.

OpenAI itself should be a low multiple business once investors realize free cash flow is so far down the road as to be unforecastable.

The AI winners likely don’t even exist yet. They will be those who wait for token generation prices to fall and develop consumer products based on AI that people are willing to pay up for.

Financing Chicanery

This has been covered a lot by the press so I won’t write as much on it as I normally would, but the circular financing arrangements (e.g. NVIDIA invests in an AI developer who takes the proceeds to buy NVIDIA chips) are highly reminiscent of the things Cisco was doing in the 90s.

Let’s do a quick primer on why these are so devious. My company has $1B of cash lying around. That is worth, of course, $1B. Or is it?

What if my company trades at 10X sales? Now, I can approach your company and say I’ll lend you $1B to buy $1B of my product. You agree because the financing cost is low, so why not?

For me, my company gains $10B in market value so I have transformed $1B into $10B! Pure alchemy!

Of course, nothing stops you from taking $1B of your cash and lending it to a third company to buy your product. Now we both have increased our valuation by $10B!
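To make the arithmetic explicit, here is a toy sketch of the dynamic described above, using the same made-up numbers (a $1B loan and a 10x price-to-sales multiple); it is illustrative only and not a model of any particular real transaction.

```python
# Toy model of circular financing: each "hop" lends $1B to a customer who spends
# it on the lender's product, and the market capitalizes that revenue at 10x sales.
cash = 1.0          # $B of cash circulating through the chain
ps_multiple = 10    # price-to-sales multiple each company trades at

for hops in (1, 2, 3):
    paper_value = hops * cash * ps_multiple
    print(f"{hops} hop(s): the same ${cash:.0f}B of cash has added "
          f"${paper_value:.0f}B of market cap")
# 1 hop  -> $10B (my company alone)
# 2 hops -> $20B (both of us, as in the example above)
# 3 hops -> $30B, and so on for as long as the chain keeps growing
```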

This is the stuff bubbles are made of. It’s basically acting like a bank using the money multiplier effect to make loans with grandma’s savings account – but without any regulation or capital requirements.

But that’s not the only dicey finance going on. Some of the stories around the data center financing are eye-popping. It’s very aggressive structured finance on par with the CDO structures that got everyone in trouble during the housing crisis.

These are arguably more concerning than the circular deals as there is little disclosure, a ton of leverage in these data center operating models (see above about risk of obsolescence and lower prices), and the debt is often held by highly levered private credit firms who have their own risks.

Timing

The most important thing I have learned from investing through prior bubbles is there is a big difference between identifying a bubble and making money betting against it. It is still far too early to bet on the bubble collapsing.

The bubble doesn’t end when prominent voices start screaming bubble. That is a necessary, but not sufficient condition. It doesn’t end when your neighbors start telling you it’s a bubble, but that means we’re closer.

It tends to end when those who bet against the bubble have been sufficiently punished. In the tech bubble, famous investors were forced into retirement for not owning enough tech. In the mortgage bubble, bearish analysts were all out of work before the peak for calling it two or three years too early.

I think the one thing that has to happen before any top is an OpenAI IPO, for a few reasons. First and foremost, it’s an obvious sentiment indicator. It will be a bigger hype cycle than the Netscape IPO.

However, there are some less obvious reasons as well. One thing inflating this bubble is the inability of public investors to buy any of the AI companies since they’re all private. That is a new phenomenon. This leads to buying AI proxies that are public instead, which leads to them becoming overpriced (e.g. buying power generators because I can’t buy Anthropic).

When OpenAI and others go public, other stocks need to be sold to fund the IPO purchases. Normally, this isn’t a big deal, but OpenAI is suggesting it will come public at a $1T valuation! While not all of this will freely float initially, even at a 10% float, investors need to sell $100B of AI proxies to buy OpenAI (or they can choose not to sell and lever up instead, which is even more dangerous in the long run).

Thus, AI companies coming public, in and of itself, could pop the bubble. There is unlikely to be that much demand waiting on the sidelines to digest these ginormous offerings. We go from scarcity of ways to publicly invest in AI to abundance. That’s bearish.

Also, venture capital investors will want to cash out. They know the clock is ticking and their paper gains don’t count until they’re monetized (well, actually they do, but that’s a fundamental flaw in VC to discuss another day). They will likely be aggressive sellers on lock up expirations.

Does this mean I think the day of an OpenAI IPO is the top? No, but it means the countdown clock begins in earnest and if you’ve been playing the “I know it’s a bubble but I’ll sell before the top” game then the IPO is probably the smart time to exit. Until then, speculators will probably do better than the bears.

Postscript

Let me conclude by expounding on my initial observation about the timing of this note. The timing is poor for two reasons – one, because the note was 80% done a few weeks ago and I set it aside to address AIG and other current events, and two, because it’s going to seem like I’m writing this in response to AI stocks selling off recently over some of these concerns.

The latter is not the motivation. I’d much rather have published it at the market highs. Rather, given the note was done, it made no sense to scrap it just because it might appear less compelling than if I got it out sooner.

I doubt the bubble is ending as we speak. As noted, we haven’t reached the OpenAI IPO phase yet. So this is likely a near term selloff and this note may coincide with the short term bottom.

Regardless, this note is not a market call for the next month. Rather, it’s an outline for the next year or two or three, which is the lens through which I will be evaluating its prescience, or lack thereof.

Finally, there are many other things I would have liked to address or expound upon but I’m already at double or more the length of an average post, so some things were left on the cutting room floor. This is by no means meant to be the exhaustive or definitive bear case on AI, but the fact that I could go on so long and not get to everything is reason enough to be skeptical.

2 thoughts on “The Bear Case on AI”

  1. From Guillaume Bonnissent’s Insurance Technology Diary, in The Insurer:

    The ChatGPT desktop app admits, at the bottom of every screen, that “ChatGPT can make mistakes.”
    When I asked, ChatGPT explained this to me. It told me it is “tuned to say what seems true.” I asked for an explanation, and was told:
    “ • I’m optimized to approximate truthful, helpful answers,
    • but I do not have guaranteed correctness,
    • and I do sometimes produce false statements that sound right.”

    1. This sounds about right! Of course, humans make mistakes too but people tend to be inherently more trustful of answers from machines.
