Apple Just Pulled the Plug on the AI Hype. Here’s What Their Shocking Study Found

New research reveals that today’s “reasoning” models aren’t thinking at all. They’re just sophisticated pattern-matchers that completely break down when things get tough

6 min read · Jun 8, 2025


We’re living in an era of incredible AI hype. Every week, a new model is announced that promises to “reason,” “think,” and “plan” better than the last. We hear about OpenAI’s o1, o3, and o4, Anthropic’s “thinking” Claude models, and Google’s Gemini frontier systems, all pushing us closer to the holy grail of Artificial General Intelligence (AGI). The narrative is clear: AI is learning to think.

But what if it’s all just an illusion?

What if these multi-billion dollar models, promoted as the next step in cognitive evolution, are actually just running a more advanced version of autocomplete?

That’s the bombshell conclusion from a quiet, systematic study published by a team of researchers at Apple. They didn’t rely on hype or flashy demos. Instead, they put these so-called Large Reasoning Models (LRMs) to the test in a controlled environment, and what they found shatters the entire narrative.


Written by Rohit Kumar Thakur

I write about AI, Tech, Startups, and Code
