If you're reading the news, you may have the impression that the AI industry is in crisis. This week, the debut of a Chinese AI model called DeepSeek-R1 sent AI stocks gyrating as investors questioned Silicon Valley's competitive edge.
In December, the news was all about how AI research had "hit a wall." The New York Times asked, “Is the Tech Industry Already on the Cusp of an A.I. Slowdown?” and The Wall Street Journal wrote, “The Next Great Leap in AI Is Behind Schedule and Crazy Expensive.”
However, if you look beneath the surface, the opposite is true. The industry isn't slowing down; it's innovating at a record pace. So fast, in fact, that it anticipates remaking the world within a few years, if it doesn't destroy it first.
The Speedup
To understand the present, it’s best to start in the distant past of two months ago. Back then, the story defining AI was about an industry slowdown. The fears were largely focused on two concerns.
First, AI researchers were running out of text on the internet to train their Large Language Models. Second, building larger versions of the same AI designs no longer yielded significantly smarter models. In the past, "scaling up” the size of the model made it much more capable.
As a result, the AI industry turned its focus to different ways of improving its models. Recently, two innovations have produced surprisingly effective results: "inference-time compute," which gives an AI more time to 'think,' and "chain of thought," in which the AI breaks complicated tasks into smaller steps on a virtual scratchpad. As one researcher put it, "It seems really intuitive that you get an improvement in capabilities when you have chain of thought. When I have paper to write down my thoughts, I am smarter. I can do harder problems."
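To make the "scratchpad" idea concrete, here is a minimal sketch in Python. It is not drawn from any lab's actual implementation; it simply contrasts a one-shot prompt with a chain-of-thought prompt that asks the model to write out numbered steps before answering. The `call_model` function is a hypothetical placeholder for whatever LLM API you happen to use.

```python
# A minimal sketch of the "chain of thought" idea, not any lab's actual
# implementation: the prompt asks the model to reason on a scratchpad
# before answering, spending extra inference-time compute per question.

def call_model(prompt: str) -> str:
    """Hypothetical placeholder; swap in a real LLM API client here."""
    raise NotImplementedError

def direct_prompt(question: str) -> str:
    # One-shot prompting: the model must produce an answer immediately.
    return f"Question: {question}\nAnswer:"

def chain_of_thought_prompt(question: str) -> str:
    # Scratchpad-style prompting: the model is told to break the task
    # into numbered steps before committing to a final answer.
    return (
        f"Question: {question}\n"
        "Work through the problem step by step on a scratchpad, "
        "numbering each step. Then give the final answer on its own "
        "line, prefixed with 'Answer:'."
    )

if __name__ == "__main__":
    q = "A train leaves at 3:40 pm and arrives at 6:15 pm. How long is the trip?"
    # Inspect the prompt without calling any model.
    print(chain_of_thought_prompt(q))
```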
OpenAI previewed its first model to use these techniques, called o1, in September and released it in early December. Then, over the Christmas break, OpenAI premiered (though didn't release) a newer version called o3. (OpenAI skipped the name "o2" to avoid a trademark conflict with the telecom company O2.)
o3's performance surprised many in the AI community. It aced a reasoning test called ARC-AGI, scoring 87.5%, a few points above the average human score of 85%. (o3's predecessor, o1, scored around 25 percent, and GPT-4 below ten percent.)1
o3 also achieved the rank of "International Grandmaster" on the competitive-programming platform Codeforces, making it approximately the 175th-best competitive programmer in the world (at least at taking tests).
It also excelled at an advanced math test called FrontierMath, a test so difficult that many mathematicians with PhDs can complete only a few of its problems, generally within their own area of study. o3's score would place it among the top-performing mathematicians in the world.
This performance implied that new AI models might take over more of the tasks done by computer programmers, and that AI might soon solve even harder problems in math and science. (This past year, Demis Hassabis, the head of Google DeepMind, won the Nobel Prize in Chemistry for his work on AI that solved difficult protein-folding problems in biology.)
OpenAI spent a fortune to achieve these results. However, around the same time, a Chinese AI lab called DeepSeek, backed by a quantitative stock-trading firm, released an open-source model named DeepSeek-V3, comparable to older OpenAI models like GPT-4 but created for a fraction of the cost. Then, last week, DeepSeek followed up with another free model, DeepSeek-R1, which leveraged "chain of thought." R1 is roughly comparable to o1, though it too was created at a much lower cost.

o3 was more surprising than DeepSeek
Though the public has been focused on DeepSeek-R1 this week, a few weeks earlier it was the performance of OpenAI's o3 that took the AI community by storm. One of the early pioneers of AI development, Yoshua Bengio, wrote that o3 "achieves a breakthrough on a key abstract reasoning test that many experts, including myself, thought was out of reach until recently."
The rapid improvements over just two months—from o1 to o3—brought human-level reasoning and near superhuman mathematical ability (at least on tests). This seemed to confirm that the industry was on the near-vertical curve of exponential “Scaling Laws.”
In the 1990s, the futurist Ray Kurzweil predicted that an AI able to perform as well as a human at any task (called "Artificial General Intelligence," or "AGI") would arrive around 2029. His prediction was based on the exponential pace at which technology develops, doubling along a swiftly accelerating curve, much the way computer processing power doubled every couple of years under "Moore's Law." These exponential graphs, now called Scaling Laws, are still applied to progress in AI (along with frequent debate about where exactly we lie on the growth curve).
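As a quick illustration of what steady doubling implies, a few lines of Python show how fast such a curve compounds. The two-year doubling period below is an assumed figure for the example, not a measured value for AI progress.

```python
# Toy illustration of exponential doubling, in the spirit of Moore's Law.
# The two-year doubling period is an assumption for the example only.

def growth_factor(years: float, doubling_period: float = 2.0) -> float:
    """How many times larger a quantity gets after steady doubling."""
    return 2 ** (years / doubling_period)

if __name__ == "__main__":
    for years in (2, 6, 10, 20):
        print(f"{years:>2} years -> {growth_factor(years):,.0f}x")
    # Prints: 2x, 8x, 32x, and 1,024x respectively.
```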
In the AI community, the DeepSeek models didn't cause nearly as much of a stir as o3 and the advances in "chain of thought." For example, in December, the influential AI researcher Andrej Karpathy called DeepSeek-V3 "a highly impressive display of research and engineering under resource constraints. Does this mean you don't need large GPU clusters for frontier LLMs? No, but you have to ensure that you're not wasteful with what you have."
To many, DeepSeek’s releases were business as usual, simply more evidence that AI progress is compounding exponentially. “DeepSeek-V3 is not a unique breakthrough or something that fundamentally changes the economics of LLMs,” Dario Amodei, the head of the AI lab Anthropic, wrote, “it’s an expected point on an ongoing cost reduction curve.”
AGI becomes a reality for the AI industry
The accelerating growth in the industry has caused Big Tech to double down on a future entirely defined by AI.
A few years ago, the idea of a rational, self-aware computer seemed like science fiction. Today it is barely newsworthy, just another metric in the AI arms race. But if the public hasn’t taken notice, the AI community has.
Recently, the CEOs of the major AI companies have "revised their timelines" for AGI. Sam Altman, who heads OpenAI, believes it will arrive by the end of this year. Dario Amodei, the CEO of Anthropic, predicts two to three years. Nobel Prize winner Demis Hassabis, who runs Google DeepMind, puts it three to five years away.
What will the world look like when, as Amodei put it, we have “a country of geniuses in a data center?” When machines that solve complex problems in math and science improve themselves with their own innovations and do many of our jobs for us?
It’s impossible to imagine. But one thing’s for certain: It will be full of data centers. And there will be a high demand for electricity to feed those data centers.
It's why Microsoft is buying nuclear power for its data centers, and why the share prices of many energy companies have more than doubled in the past year.
It also explains why Sam Altman stood next to President Trump this past week and announced a $500 billion AI infrastructure initiative (a sum larger than the yearly GDP of most countries), in partnership with other big-name tech firms like NVIDIA, Oracle, and Microsoft, to build AI data centers.
The CEOs of tech companies are anticipating the arrival of highly capable AIs that can do most tasks better than a human being and planning accordingly. The sci-fi name, “The Stargate Project,” isn’t ironic. It expresses their belief that these companies are building a portal to a totally different world.
Why Does No One Care?
Framed this way, AI appears to be the most important story of both 2024 and 2025, yet it's rarely front-page news. This week, the most notable angle was AI's effect on the stock market, where the extreme gyrations reflected profound confusion about the value of the technology.
What accounts for the gap in understanding between the AI community and the public?
Partly, it’s because the subject is abstract and technical. It’s not about people. There are no protagonists, just machines performing incrementally better, and experts arguing about what that means.
The best news stories are Shakespearean, or Trump-ian, or Musk-ian. They are about powerful people, full of emotion, feuding with each other for high stakes. The important points about AI can't be conveyed in these terms. Instead, they are smeared across probability estimates and disagreements among academics.
Moreover, the recent breakthroughs in AI have the misfortune of arriving after a spate of Silicon Valley hype cycles. Cryptocurrency, NFTs, and the metaverse conditioned the press and the public to anticipate an inevitable crash into "a wall" once the hype fails to materialize.
This idea resonates with the public, who often feel antipathy toward both tech companies and AI.
The news reports facts, but stories circulate based on their viral popularity. Narratives that resonate with people's preconceived notions about AI float to the top of the news cycle. Many of the same AI skeptics the press cited years ago are still quoted in stories today, voicing the same skepticism despite mind-boggling advances in the field.
Outlets favor stories with a cynical cast that suits the tastes of their readers. So we view AI through the lens of how it's reported: as a Shakespearean disaster, full of tawdry feuds, power contests, and embarrassing missteps. Sometimes this is the correct lens through which to view events, but it can also be highly distorting.
AI Safety
Given these past patterns, it's natural to assume the AI boom resembles previous tech cycles (in which a bubble of investment hype inevitably collapses).
But this time around, something very different is occurring. The technology is succeeding, producing baffling innovations and risks.
For example, the past decade could be defined as the first era of misaligned AI. We are still reckoning with addictive AI algorithms on social media that spread misinformation.
Tech CEOs may be misleading us, not by exaggerating the benefits of a useless technology, but by playing down its potent dangers.
Disagreements in AI can be roughly divided into two categories: Will AGI arrive soon? And will it kill off humanity?
In the past, opinions have varied widely on these questions, so there was little to report besides murky contention. But with so many recent advances, a rough consensus has emerged that AGI will arrive within three to five years, if not sooner. However, the field remains bitterly divided over whether this event will destroy us.
Even the leaders of the major AI companies believe the probability (sometimes called "p(doom)," for "probability of doom") is not zero. Dario Amodei puts the chances at 10% to 25%. Elon Musk says 10% to 25%. Demis Hassabis says perhaps less than 25%, but not zero.

Other important thinkers in the field also believe the risk from AGI is extreme—perhaps most notably Geoffrey Hinton, who won a Nobel Prize last year for inventing key technology underlying the AI boom.
Hinton, who is also a cognitive scientist, often talks about how his work on neural nets was intended to offer insight into how our own minds function. But he now believes the learning method he stumbled on may be superior to the one our brains use.
If AIs surpass human-level intelligence, Hinton maintains, we will lose control of them in the same way a toddler could never control a more intelligent adult.
Though models like o3 aren't AGI, they exhibit disturbing behaviors that were only hypothetical concerns among AI safety researchers just a few years ago.
The only source of information we have about o3 is a holiday-themed promotional video from OpenAI entitled “o3 preview & call for safety researchers.”
On December 5th, as part of the same holiday promotion, OpenAI released the less powerful o1 to the public, along with disclosures about its safety research. The results were remarkable. With o1's reasoning capabilities came increased levels of scheming and deception. In some cases, the model lied to safety researchers to pursue its own goals or preserve itself.2
This scenario—in which a super-intelligent AI slips the noose of our control—is the central fear of many AI researchers.
Paradigm Blindness
I think there’s another reason why people can’t process the dramatic upheaval AI will bring, either positive or negative.
All of the technical parts obscure how crazy it is. If I were to tell you the story with the details omitted, it would sound absurd, even though it's true and has already occurred.
There are psychological terms for these phenomena: normalcy bias, when we dismiss extreme changes outside the norm, and paradigm blindness, when we are unable to incorporate an outlandish idea into our system of understanding. Because of the rapid progress in AI, we are still operating in the world of several years ago, the one we know, even though it has already changed.
Imagine a scientist discovered a way to make an inanimate object talk. It doesn't matter what; it could be a kitchen sink or a mushroom.
At first, the object just babbles, but the scientist discovers ways to make it smarter, to tweak the formula. For example, making it bigger makes it smarter. After that, it can form sentences. Other methods of teaching it make it more cogent. It becomes an excellent conversationalist.
For a while, its performance on college entrance exams is only middling. But by the end of the year, it can ace the most complex tests for PhD students. Its IQ shoots up well above the average human's. After a lot of coaching, it solves some novel problems in science.
It can't do everything, of course. Since it doesn't have a body, it sometimes gets confused about what the physical world is like. But it's read most of the text humans have ever written. And it can reason just as well as a human being.
All the big corporations are interested. They jostle for control of the technology and pour money into research.
After all, making copies of this thing is cheaper than hiring workers. Maybe we won't have jobs anymore, futurists speculate. If this thing is making scientific discoveries now, imagine what it can do in five or ten years.
The scientist who discovered the original formula wins the Nobel Prize. But in his acceptance speech, he gives a dire warning:
The thing reasons and speaks the same way we do, he says. I was looking for the mechanism of how our mind works, and I think I found something similar. But the method I invented is probably superior to the one our brains use. If we aren't careful, this thing I invented will become smarter than us. We can't keep making better versions. We'll lose control, the same way a child has no control over a smarter adult. This invention could wipe us out.
A lot of people in the field, perhaps even a majority, agree with the inventor. But there's a lot of dissent and confusion.
This story sounds so ridiculous that it feels absurd to present it even as a prediction.
But of course, it’s all true. It describes the year 2024.
If you’re interested in digging through the details of o3’s test results, Melanie Mitchell and Mikel Bober-Irizar both wrote excellent blog posts on the subject.
A YouTuber named Dr. Waku made a great video on the subject of o3 and AI safety.