
Published: March 11, 2025

# AGI is suddenly a dinner table topic

By James O'Donnell

The term is everywhere this week. Pinning down what it actually means matters.

The concept of artificial general intelligence (AGI) — an ultra-powerful AI system we don't have yet — can be thought of as a balloon, repeatedly inflated with hype during peaks of optimism (or fear) about its potential impact and then deflated as reality fails to meet expectations. This week, lots of news went into that AGI balloon. I'm going to tell you what it means (and probably stretch my analogy a little too far along the way).

First, let's get the pesky business of defining AGI out of the way. In practice, it's a deeply hazy and changeable term shaped by the researchers or companies set on building the technology. But it usually refers to a future AI that outperforms humans on cognitive tasks. Which humans and which tasks we're talking about makes all the difference in assessing AGI's achievability, safety, and impact on labor markets, war, and society. That's why defining AGI, though an unglamorous pursuit, is not pedantic; it is actually quite important, as illustrated in a new paper published this week by authors from Hugging Face and Google, among others. In the absence of that definition, my advice when you hear AGI is to ask yourself what version of the nebulous term the speaker means. (Don't be afraid to ask for clarification!)

Okay, on to the news. First, a new AI model from China called Manus launched last week. A promotional video for the model, which is built to handle "agentic" tasks like creating websites or performing analysis, describes it as "potentially, a glimpse into AGI." The model is doing real-world tasks on crowdsourcing platforms like Fiverr and Upwork, and the head of product at Hugging Face, an AI platform, called it "the most impressive AI tool I've ever tried."

It's not clear just how impressive Manus actually is yet, but against this backdrop—the idea of agentic AI as a stepping stone toward AGI—it was fitting that New York Times columnist Ezra Klein dedicated his podcast on Tuesday to AGI. It's also a sign that the concept has moved quickly beyond AI circles and into the realm of dinner table conversation. Klein was joined by Ben Buchanan, a Georgetown professor and former special advisor for artificial intelligence in the Biden White House.

They discussed lots of things—what AGI would mean for law enforcement and national security, and why the US government finds it essential to develop AGI before China—but the most contentious segments were about the technology's potential impact on labor markets. If AI is on the cusp of excelling at lots of cognitive tasks, Klein said, then lawmakers better start wrapping their heads around what a large-scale transition of labor from human minds to algorithms will mean for workers. He criticized Democrats for largely not having a plan.

We could consider this to be inflating the fear balloon, suggesting that AGI's impact is imminent and sweeping. Following close behind and puncturing that balloon with a giant safety pin, then, is Gary Marcus, a professor of neural science at New York University and an AGI critic who wrote a rebuttal to the points made on Klein's show.

Marcus points out that recent news, including the underwhelming performance of OpenAI's new GPT-4.5, suggests that AGI is much more than three years away. He says core technical problems persist despite decades of research, and efforts to scale training and computing capacity have reached diminishing returns. Large language models, dominant today, may not even be the thing that unlocks AGI. He says the political domain does not need more people raising the alarm about AGI, arguing that such talk actually benefits the companies spending money to build it more than it helps the public good. Instead, we need more people questioning claims that AGI is imminent. That said, Marcus is not doubting that AGI is possible. He's merely doubting the timeline.

Just after Marcus tried to deflate it, the AGI balloon got blown up again. Three influential people—Google's former CEO Eric Schmidt, Scale AI's CEO Alexandr Wang, and the Center for AI Safety's director, Dan Hendrycks—published a paper called "Superintelligence Strategy."

By "superintelligence," they mean AI that "would decisively surpass the world's best individual experts in nearly every intellectual domain," Hendrycks told me in an email. "The cognitive tasks most pertinent to safety are hacking, virology, and autonomous-AI research and development—areas where exceeding human expertise could give rise to severe risks."

In the paper, they outline a plan to mitigate such risks: "mutual assured AI malfunction," inspired by the concept of mutual assured destruction in nuclear weapons policy. "Any state that pursues a strategic monopoly on power can expect a retaliatory response from rivals," they write. The authors suggest that chips—as well as open-source AI models with advanced virology or cyberattack capabilities—should be controlled like uranium. In this view, AGI, whenever it arrives, will bring with it levels of risk not seen since the advent of the atomic bomb.

The last piece of news I'll mention deflates this balloon a bit. Researchers from Tsinghua University and Renmin University of China came out with an AGI paper of their own last week. They devised a survival game for evaluating AI models that limits the number of attempts they get to produce the right answers on a host of different benchmark tests. This measures their ability to adapt and learn.

It's a really hard test. The team speculates that an AGI capable of acing it would be so large that its parameter count—the number of "knobs" in an AI model that can be tweaked to provide better answers—would be "five orders of magnitude higher than the total number of neurons in all of humanity's brains combined." Using today's chips, that would cost 400 million times the market value of Apple.

The specific numbers behind the speculation, in all honesty, don't matter much. But the paper does highlight something that is not easy to dismiss in conversations about AGI: Building such an ultra-powerful system may require a truly unfathomable amount of resources—money, chips, precious metals, water, electricity, and human labor. But if AGI (however nebulously defined) is as powerful as it sounds, then it's worth any expense.

So what should all this news leave us thinking? It's fair to say that the AGI balloon got a little bigger this week, and that the increasingly dominant inclination among companies and policymakers is to treat artificial intelligence as an incredibly powerful thing with implications for national security and labor markets.

That assumes a relentless pace of development in which every milestone in large language models, and every new model release, can count as a stepping stone toward something like AGI. If you believe this, AGI is inevitable. But it's a belief that doesn't really address the many bumps in the road AI research and deployment have faced, or explain how application-specific AI will transition into general intelligence. Still, if you keep extending the timeline of AGI far enough into the future, it seems those hiccups cease to matter.