JEFF CHILDERS THINKS WE CAN CROSS AI OF THE END OF THE WORLD DOOM LOOP SCENARIO :

……………

Finally, let’s unpack some more black-pilled AI doom-and-gloom. Yesterday, Psychology Today ran a story headlined, “Did Complexity Just Break AI’s Brain?” The sub-headline added, “A new report from Apple shows AI fails hardest where it should excel.”

This week, Apple released an AI research paper titled, “The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity.” It rocked the AI world, though so far the corporate media has shown all the alertness and volume of sleepy garden crickets.

https://machinelearning.apple.com/research/illusion-of-thinking

Apple researchers tested the “smartest” AI models —called Large Reasoning Models (LRMs)— with middle-school logic puzzles. For instance, they used the Tower of Hanoi and the old chestnut about the river-crossing farmer with a fox, a chicken, and a bag of grain. They found a point where all the models failed —solutions “fell to zero”— when the puzzles got sufficiently complicated.
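The Tower of Hanoi shows why “sufficiently complicated” arrives quickly. Here is a minimal sketch of the classic recursive solver (illustrative only, not the study’s actual test harness): solving n disks takes exactly 2**n - 1 moves, so the solution length roughly doubles with every disk added.

```python
def hanoi(n, src="A", aux="B", dst="C", moves=None):
    """Return the list of (disk, from_peg, to_peg) moves for n disks."""
    if moves is None:
        moves = []
    if n == 0:
        return moves
    hanoi(n - 1, src, dst, aux, moves)   # park the n-1 smaller disks on the spare peg
    moves.append((n, src, dst))          # move the largest disk to the target peg
    hanoi(n - 1, aux, src, dst, moves)   # restack the n-1 smaller disks on top of it
    return moves

for n in (3, 7, 10):
    print(n, "disks:", len(hanoi(n)), "moves")  # 7, 127, and 1023 moves
```

The recursion itself is trivial, which is the point: a human (or three lines of code) can follow the pattern indefinitely, while the study found that the models’ accuracy collapsed once the required move sequence got long enough.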

I’ve been reporting for ages that nobody really understands how AI does its thing. That admission appeared right in the study’s opening paragraph. “While these models demonstrate improved performance on reasoning benchmarks,” the researchers wrote, “their fundamental capabilities, scaling properties, and limitations remain insufficiently understood.”

The study concluded that these models only pretend to reason like humans. The truth is, the computers are just matching patterns. For instance, they may be able to solve a complex math equation or beat a master chess player, but only because their training data includes millions of similar examples and game permutations. If you give them a novel problem that can only be solved with logic —with pure reasoning— instead of by brute force, the AI suddenly falls off a cliff.

AI defenders raced out in force, illogically arguing that Apple is just jealous because it lags behind in AI development, and unpersuasively arguing that, in an AI age, we must re-define what “thinking” means.

What would we do without the flexibility of redefining basic vocabulary?

For upwards of two years now, we’ve been breathlessly assured that AI “super-intelligence” is mere moments away, just like how Iran’s nuclear weapons program is always two weeks from readiness but never actually arrives. Newsweek, for example, ran a breathless op-ed to exactly that effect just last month.

Yesterday, I listened to an apocalyptic podcast conversation between a New York Times science reporter and a former OpenAI engineer. The engineer gloomily predicted that the technology would achieve “artificial general intelligence” (AGI) within 18 months, and the pair spent the rest of the talk discussing a parade of horrors, ranging from the need for universal basic income to support displaced human workers to the violent collapse of civilization itself.

Don’t get me wrong. There’s much we don’t know about AI and its capabilities. We don’t even understand how it works. It continues to surprise developers who, like Spanish explorers of old, keep finding new, unexpected behaviors in the digital wilderness. But Apple’s researchers just found a hard upper limit on something humans can do fairly easily— but that AI apparently cannot.

It’s not that AI isn’t powerful. It is. It can crush repetitive tasks, mimic human syntax, and win debates it doesn’t even understand. But it’s not thinking. Not really. It’s just guessing. Guessing very well, and with flair. It is going to change everything. But this is the first evidence that it can’t replace people. Not this type of AI, anyway.

You’d think the AI community would be relieved—maybe even humbled—by Apple’s revelation. After all, if AI has a ceiling, it means we might not be facing an extinction-level event in the next update cycle. But instead of sighs of relief, we see snarls and snark. Why on Earth?

There’s a simple explanation why AI developers aren’t celebrating Apple’s study, and it is as old as humanity itself: the money train.

The AI gold rush is fueled not by truth, but by fear and fantasy. The more AI sounds like magic, the bigger the checks get— from venture capitalists, government contractors, and defense departments. Doomsday sells. Apocalypse, with a UI, sells even better. Because after all, we can’t let the Chinese get there first!

But if you were looking for a flicker of hope in the maelstrom of AI hype, Apple just snapped the cigar lighter’s wheel. The inescapable conclusion —assuming their research holds— is that this type of AI will never achieve AGI, certainly not anytime soon. There’s a structural limit.

Which is very good news.

If Apple is right—if this branch of AI, this architecture, has a built-in structural ceiling— then we’re not watching the rise of Skynet. We’re watching the rise of a powerful, disruptive tool. Compare it instead to things like the steam engine, the printing press, or the internet.

As regular readers know, my undergrad degree is in economics. Disruptive technology is fairly well understood. Mass layoffs are always predicted as disruptive tech emerges, but they have never actually materialized, except perhaps as short-term dislocations.

The explanation is simple and logical. Let’s use AI as an example. As AI replaces call-center operators, basic bookkeepers, truck drivers, and school teachers, the companies that employed them enjoy higher productivity and lower costs. That means more money for other things they couldn’t afford before.

If a business saves $10 million on payroll because AI answers the phones, it doesn’t bury that money in the backyard— it hires more developers, expands product lines, invests in marketing, builds something new. That’s how markets have always worked.

Some people screamed that ATMs would kill the banking industry. They didn’t. The industry changed, and brands consolidated, but banks are still building branches. The same thing happened with spreadsheets, email, and automated manufacturing.

With this much epistemological fog, technological hubris, and money in the wind, any confident forecast belongs in the fiction section. But if you’ve been waiting for a silver lining —something to pierce AI’s apocalyptic cloud cover— Apple may have just delivered.

Like Cortez’s hardy band climbing the final ridge and beholding the Pacific Ocean with a wild surmise, AI researchers may have finally crested the summit of their limitless ambitions— only to discover that the horizon is not infinite after all.

So hang in there. As somebody once said (I can’t remember who): weeping may endure for a night, but joy cometh in the morning.