The 2028 Global Intelligence Crisis: A Different Perspective

A brief note: I wrote this as a comment on the original piece, then discovered that commenting requires a $125/month subscription. The irony of that particular friction tax was too good not to mention.

I first came across this post in the Wall Street Journal, but within a day all the major news outlets, and some I don't read, had picked it up, calling it a "doomsday scenario," which it kind of is. For those who took that warning seriously: bravo. For the many who just gulped and moved on: don't do that.

Here is the comment I was originally going to post, and below it, my further thoughts.


Fascinating piece, and one of the more honest attempts I've seen to model left-tail AI risk without sliding into doomerism or denial. A few thoughts from someone who has spent the last year studying AI closely enough to write a book about it for general audiences.

The timeline is doing a lot of heavy lifting, and I think it's probably too compressed by a factor of three to five. Not because the mechanisms are wrong — they're mostly right — but because organizational inertia is real. Enterprise software contracts have multi-year lock-ins. Procurement committees are slow. Regulatory friction exists. The 2026-to-2028 cascade assumes near-perfect propagation speed across an economy that moves in half-decades, not quarters. The dynamics are sound; the clock is probably off.

On the core analysis: agreed. The Ghost GDP concept is useful and underappreciated — AI output that doesn't circulate through the economy as wages or spending creates phantom growth that looks good on paper while hollowing out real demand. The reflexivity point — that threatened companies become AI's most aggressive adopters — is sharp and underexplored elsewhere. And the white-collar concentration risk is genuinely different from prior recessions in ways most frameworks aren't equipped for.

Here's what I'd add, as a different perspective rather than a correction: this piece is written from the viewpoint of people who built and invested in the intermediation layer. That's a legitimate vantage point. But it's worth sitting with the fact that many of the business models it mourns weren't serving ordinary people particularly well. The crisis it describes is real for certain stakeholders. It reads quite differently if you're not one of them.

The policy question — who captures the gains from abundant intelligence — matters more than whether SaaS multiples stabilize. The canary is still alive, as you say. But not everyone in the mine is rooting for the same outcome.


What I Really Feel

Before I go further, let me note something Shah buries at the very end of his piece: "my portfolios and companies are positioned for it... my firms will benefit financially." He puts it there for transparency, and I respect that. But it belongs at the top, because it explains the lens. This is a crisis narrative written by someone who stands to profit from the crisis. That doesn't make him wrong — but it shapes what he sees and what he doesn't.

Here's what he doesn't see. Shah writes that AI "eliminates friction," and specifically that "we have built trillions of dollars of enterprise value on top of human limitations: things take time, patience runs out, and most people accept a bad price to avoid more clicks." He means this as a neutral description. It isn't. That "friction" was engineered, defended with lobbying dollars, and optimized over decades specifically to extract value from people who lacked time, information, or leverage.

Insurance companies collecting 15-20% more in premiums from customers who forget to shop around. DoorDash taking as much as 30% of every order. SaaS contracts costing $500,000 a year that require a lawyer to escape. Real estate commissions of 5-6% built on an information asymmetry the internet destroyed years ago; the industry just found ways to keep collecting anyway. Credit card interchange of 2-3% baked invisibly into every purchase. Subscription pricing designed so that cancellation requires a phone call made during business hours.

The crisis Shah describes is real for the people who built those models. It reads very differently if you were on the other side of them.

To his credit, Shah includes a "where I could be wrong" section that is genuinely honest. He acknowledges that gradual job losses could allow AI-driven productivity to soften the blow, that past technological revolutions did eventually create new work, and that decisive policy action could change the outcome. He's not a doomer. He's a smart, credible analyst with twenty years of real experience — and a legitimate blind spot about whose interests he has spent those twenty years defending.

That blind spot matters. Not because it makes him wrong about the mechanisms, but because it determines what questions get asked. Shah asks whether SaaS multiples stabilize and whether private credit survives. Those are reasonable questions if you're running a fund. The more important questions — who captures the gains from abundant intelligence, and how do we make sure it isn't just the people who were already capturing everything else — get one paragraph and a promise of a Part Three.

That's the conversation worth having. If you want to be part of it, start with understanding what AI actually is and what it isn't — which is exactly what My Adventures With Claude is for. You can find it here.