The Quote That Shouldn’t Have Surprised Me

On March 22, 2026, on the Lex Fridman podcast, Jensen Huang (NVIDIA CEO) said: “I think it’s now. I think we’ve achieved AGI.”

When I saw the headline, my first reaction was a mix of skepticism and curiosity. AGI stands for “Artificial General Intelligence,” and for decades it has been treated as the Holy Grail of AI: a machine that matches or exceeds humans at any cognitive task.

But then I went to read the full context of the interview. And I discovered something that bothered me more than the declaration itself.

AGI has no official scientific definition. It means whatever each CEO wants it to mean at the moment.

The Chameleonic Term

The more I researched this topic, the more I noticed a pattern that made me mildly furious. Let me show you how this game works.

Lex Fridman asked Jensen Huang if AGI had arrived, using as a definition “an AI capable of starting, growing, and running a tech company worth more than $1 billion.” Huang answered yes. But wait — since when is “generating a $1 billion company” the definition of general intelligence? The traditional definition has always involved a broad range of human cognitive capabilities, not just business viability.

Sam Altman, in turn, has oscillated impressively. In January 2025, he wrote on his personal blog: “We are now confident we know how to build AGI.” By August 2025, he was admitting to CNBC that AGI “isn’t a super useful term” and has “become a very sloppy term.” Then, in February 2026, he told Forbes: “We basically have built AGI, or very close to it,” before walking it back: it was a “spiritual” statement, not a literal one.

Satya Nadella (Microsoft CEO, who invested $13 billion in OpenAI) disagrees with both. He publicly declared that the industry is “nowhere near achieving AGI,” adding that “it’s not about Sam or me declaring it.”

Three of the biggest AI leaders on the planet. Three different answers. To the same question.

Why the Definition Matters (A Lot)

Here’s the detail that made me understand the game.

OpenAI has a clause in its contract with Microsoft that changes everything when AGI is “achieved.” Specifically: if OpenAI’s nonprofit board declares it has achieved AGI, Microsoft’s access rights to future technology become restricted. Microsoft — which invested $13 billion — is trying to remove that clause. It has even considered walking away from the deal.

Pause and let that sink in. The definition of a hypothetical revolutionary technology is being litigated between two companies worth trillions. Not for scientific reasons. For contractual reasons.

This helps me understand why Altman oscillates so much. In January, when he needed more investment, AGI was “close.” In August, when he needed to negotiate with Microsoft, the term became “sloppy.” In February 2026, “we’ve built it” — but only spiritually.

As Max Tegmark, president of the Future of Life Institute, wrote: “It’s smarter for them to just talk about AGI in private with their investors.” He compared the public hedging to “a cocaine salesman saying it’s unclear whether cocaine is really a drug” because the question is supposedly too complex to decipher.

The “Pickaxe” of the Gold Rush

NVIDIA is, without a doubt, the best-positioned company of the AI era. Its chips power approximately 80% of all AI training on the planet. Virtually every token processed by an LLM runs on hardware NVIDIA sells. And Jensen Huang isn’t naive — he knows exactly the impact of his statements.

A quote from him that made me laugh (nervously):

“If a $500,000 engineer hasn’t consumed at least $250,000 in tokens by year-end, I’d be deeply alarmed.”

It’s like Starbucks saying you need 5 coffees a day to be a functional human. Reminder: NVIDIA projects at least $1 trillion in cumulative chip sales across its Blackwell and upcoming Vera Rubin architectures.

After Huang made the AGI declaration on the podcast, NVIDIA shares rose 1.5% in a single day. The coincidence is revealing.

The Narrative for Each Audience

One of the things that impresses me most about Sam Altman is how he’s a master at adapting the narrative depending on who’s listening.

For Congress, AGI will cure cancer, solve climate change, and end poverty. It’s a civilizational transformation narrative that justifies favorable regulation and subsidies.

For users, AGI is a superpowered assistant ready to take over junior-level tasks. It’s a practical utility narrative that justifies premium subscriptions.

For Microsoft, AGI is a system that will print billions in profit. It’s a commercial narrative that justifies maintaining the $13 billion investment flow.

I’m not saying Altman lies — he probably believes parts of each of these narratives. But he is, above all, an exceptional salesman who knows exactly which button to push in each situation.

What I Really Think About This

Here’s my honest opinion, after weeks diving into this:

The technology is real. The advances LLMs have made in the last 3 years are genuinely impressive. AI is passing bar exams, writing production-quality code, discovering new drugs. Tellingly, though, at GTC 2026 Huang mentioned “inference” nearly 40 times and “training” only 10 times — the industry’s focus is shifting from building smarter AI to making existing AI execute tasks more efficiently. That is engineering progress, not an intelligence breakthrough.

AGI as a concept is being hijacked. As one analyst noted: “No major AI CEO now uses AGI to mean what the term meant when researchers invented it.” The definition is now entirely controlled by people who benefit financially from declaring it achieved as soon as possible. That’s a structural problem.

The race has real consequences. The market reacts violently to each declaration. Investments flow or dry up based on these narratives. Public policies are formulated. Jobs are or aren’t created. And all of this based on a term nobody can scientifically define.

Skeptics aren’t a fringe minority. Yann LeCun calls AGI “complete BS.” Gary Marcus publicly disagrees with Altman. Even industry insiders like Mustafa Suleyman (Microsoft AI CEO) say “any categorical declaration feels ungrounded.”

How to Navigate This Without Falling Into Traps

After everything I’ve read, I’ve arrived at some practical rules I use to filter AGI statements:

Pay attention to qualifiers, not headlines. Huang said “I think,” not “we’ve proven.” Altman said “spiritual,” not “literal.” These qualifiers aren’t modesty — they’re calibrated legal and PR strategy. When tens of billions in contracts are at stake, every word is carefully chosen.

Look at actions, not declarations. What companies are doing is more revealing than what they say. GTC 2026 launched new chips, agent platforms, and inference optimizations — real engineering advances. Packaging this as “AGI achieved” is narrative strategy, not scientific conclusion.

Separate technological advance from commercial hype. This is the key. AI remains powerful and useful even if AGI is, for now, just a sophisticated sales argument. Use the tool for what it actually delivers — not for the promise of what it might deliver.

Be suspicious of those who profit from urgency. The loudest voices asking you to “bet everything now” are usually those who profit most from your bet. That doesn’t mean they’re wrong — it means their incentive doesn’t always align with your interest.

Conclusion: The Map Isn’t the Territory

AGI may arrive one day. Current models are genuinely impressive. But for now, it is the best sales argument Silicon Valley has ever created — precisely because it’s vague enough to mean anything and specific enough to sound technical.

The challenge for us, enthusiasts and tech professionals, is twofold. First, harness AI’s real power to solve real problems. Second, ignore utopian (or dystopian) promises that only serve to inflate valuations.

The AI of 2026 is a transformative tool. It doesn’t need to be AGI to be important. And you don’t need to buy inflated narratives to use it well.

What about you? Is AGI a real technical goal, or a carrot dangled in front of investors?

I’ve come to think it’s more the second than the first — but that doesn’t diminish the value of the tools we already have. Healthy skepticism and practical use can (and should) coexist.


