Skip to content

The AI bubble: hype, hope, and hard limits

Artificial Intelligence, Technology, Economics7 min read

AI bubble

New announcements about AI startups appear every week. Stock market valuations are at an all-time high, driven almost entirely by a few big tech companies that investors believe will cash in on the AI trend.

So what gives? Are we in an AI bubble? Is it all going to come crashing down?

It's magic, baby!

It's easy to see why people are excited. The first time you use ChatGPT or one of those apps that can generate an entire song based on your lyrics and mood, it feels like magic.

Many will recall Arthur C. Clarke's proverb: "Any sufficiently advanced technology is indistinguishable from magic." And yes, that's exactly what these tools feel like at first. You ask ChatGPT a question - any question - and it replies with a competent-sounding answer. How is that not magical?

People have fun with it too: AI-generated country songs with filthy lyrics are suddenly a thing. It's hilarious hearing computer singers croon about certain body parts and adult activities with a perfect Southern drawl. The guitar riffs sound right, the tone is spot-on, and for a moment, you might think you're listening to a human. Amazing!

The illusion of understanding

The truth is, 99.9% of people using these tools have no idea how they actually work. And that's fine - your grandma doesn't need to understand molecular physics to get a good banana bread recipe from ChatGPT.

But it does matter for the smaller group of people shaping the future of this technology: executives, investors, journalists, advisors, and policymakers. Many of them have little understanding of what they're dealing with, but they're surrounded by hype merchants and money flows. Nobody wants to be the one who "doesn't get it." So they nod along, invest, and assume the crowd must know something they don't.

Monkey see - monkey do. The vibe is strong, and the fear of missing out is even stronger. "Our competitors have an AI feature, so we need one too!" "Microsoft just signed a billion-dollar deal; we can't fall behind!"

What does AI even mean anymore?

The first clue that most people are lost is the term itself. "AI" has become an umbrella label for anything a computer does that seems a bit complicated.

To engineers, the overuse of the term is borderline cringe. It's as if people started calling every motorized vehicle an "auto-mobile" - dump trucks, buses, street sweepers, all the same. Technically not wrong, but painfully imprecise.

  • Ordinary computer vision algorithms, originally from the 1960s? AI.
  • Drones recognizing tanks in footage? AI.
  • Your phone tagging faces in photos? AI.
  • Your smart speaker reading your calendar? AI!

Everything's AI now. Automobile! AI!

So what is artificial intelligence?

If you ask the Oxford Dictionary, you'll get this:

"The capacity of computers or other machines to exhibit or simulate intelligent behaviour."

Notice the key part: simulate intelligent behaviour. That's right - pretending counts.

Early computer vision systems didn't get called AI, even though they tried to "see." Probably because no serious scientist wanted to attach the word intelligence to something that confused a cloud for an elephant.

Today's systems are pattern recognizers, not thinkers. Feed them enough mammograms, and they'll detect anomalies better than some radiologists - but they don't understand what they're looking at. They're useful, but not intelligent. And while they can improve efficiency or accuracy, they don't transform the economy.

The limits of the possible

Clarke's lesser known second law says:

"The only way of discovering the limits of the possible is to venture a little way past them into the impossible."

Most people talking about AI today really mean LLMs - large language models. They deal with words, not thoughts. People assume that because they can chat with a computer in plain English, it must be intelligent.

Sorry to disappoint, but LLMs are basically the language version of old-school image classifiers. Instead of comparing pixel soup, they predict the next word based on probability. They don't "know" what they're saying. They're excellent mimics, nothing more.

Ask them about economics, and they'll remix a thousand textbooks. It looks like thinking, but it's just statistics in a nice suit. Great for summaries and drafts - not so great for reasoning or invention. Certainly not ideal for fields where every word matters, like legal documents and littigation.

Reality check: coding assistants and real work

Developers were among the first to see the limits. LLM-based coding tools can save time on boilerplate or help you remember a forgotten syntax. But throw them a real-world problem, and things get messy fast.

Programming languages are simpler than human languages, yet real engineering involves context, compatibility, and intent - things LLMs don't grasp. The result? Assistants that mix API versions, generate fragile code, and pass tests for all the wrong reasons.

Senior engineers quickly learn where these tools help and where they hurt. Juniors often don't. They get stuck debugging AI hallucinations or chasing intricate problems. In many teams, productivity drops because seniors have to clean up after the "AI helper."

Just because code looks right doesn't mean it is right.

Heck, I'm using LLMs myself to turn my drafts into something I'm not completely embarrassed to publish - and there's nothing wrong with that. The thoughts are mine; statistics just helped smooth out the rough edges. English isn't my first language, after all.

What's ironic is that I still struggle to make these systems stop doing things I don't want - like sprinkling in unnecessary hyphens or capitalizing every word in a headline. It's absurd, because they don't actually think. Giving them such instructions is a bit like telling your houseplant to evolve so it can survive without water - futile, but entertaining to try.

Hard limits: still useful

The point isn't that LLM technology or AI functionality in general isn't useful - quite the opposite. They definitely are! Several fields are already benefiting, and many more will.

Take customer support, for example. LLMs can handle low-complexity queries without human intervention, freeing up support agents to focus on the more difficult cases - ideally improving service for everyone.

Software can be complex, and integrated LLMs help users get more out of the tools they rely on. Instead of navigating a long series of steps, LLMs can guide users straight to the final action, letting them review and approve or reject the results.

On a personal or professional level, LLMs give people a boost when facing the dreaded blank page - whether starting a PowerPoint slide, drafting an email, or writing a complaint. Think of it as a friendly nudge to get the ideas flowing, without judgment if your first draft is terrible.

The future (and the bubble)

Given the limits of this technology, why are valuations so high? Partly because people believe AGI - real intelligence - is just around the corner. "We're almost there! Just need more data, more compute, more vibes!"

Except there's zero evidence that current LLMs are anywhere near that breakthrough. We're burning money chasing an illusion.

Another bet is that productivity gains will justify the hype. Sure, translators and copywriters benefit, but at the macro level? Hardly a dent in GDP in sight.

Then there's the infrastructure play. Investors are pouring billions into inference hardware and cloud capacity, assuming demand will explode. But models are becoming more efficient, and soon they might run on your phone - turning all those fancy data centers into expensive block heaters.

Meanwhile, Big Tech props up the illusion through circular deals that make revenue numbers look better than they are, i.e. like AMD and Nvidia recently. And smaller investors follow along, FOMO first, logic later.

Waiting for the black swan

People don't really get AI and wildly overestimate what it can do. That's particularly dangerous for investors, C-level executives, and policymakers.

This house of cards is fragile. All it might take is one shock - geopolitical, regulatory, or financial - for the bubble to pop.

Ironically, the trigger might come from the U.S. government itself, perhaps through policies that unintentionally accelerate innovation elsewhere. Restricting access to AI chips, for instance, is spurring Chinese researchers to build better alternatives, perhaps causing the next DeepSeek event. Meanwhile, the U.S. gets distracted by culture wars and political theater.

When the correction comes, it won't just hit AI startups. It'll ripple through tech indices like the S&P 500, leaving many retail investors badly burned.

AI isn't magic. It's clever math wearing a cape. And just like every other hype cycle before it, this one will eventually collide with reality.

The only question is how loud the pop will be.

© 2025 Mat Hansen. All rights reserved.