Sam Altman Defends AI Energy Consumption at India Summit as GPT-4o Retirement Sparks User Revolt

OpenAI's CEO told the India AI Impact Summit that training AI costs less energy than raising a human child, while thousands of users petition to reverse GPT-4o's retirement.

TL;DR

  • Sam Altman defended AI's energy footprint at the India AI Impact Summit, comparing training costs to raising a human child
  • Critics from Zoho, tech policy, and climate circles called the framing misleading and self-serving
  • OpenAI retired GPT-4o on February 13 with just two weeks' notice, affecting an estimated 800,000 users
  • A Change.org petition demanding GPT-4o's return has exceeded 22,000 signatures
  • The API remains available, but free and Plus ChatGPT users lost access entirely

The Energy Defense No One Asked For

Sam Altman arrived at the India AI Impact Summit in New Delhi last week with a talking point that landed like a lead balloon. During a keynote on February 19, the OpenAI CEO addressed growing criticism of AI's enormous energy consumption with an analogy that managed to annoy environmentalists, parents, and AI skeptics simultaneously.

"It takes like 20 years of life, and all the food you eat during that time, before you get smart," Altman told the audience, comparing the energy required to train frontier AI models to the caloric and resource cost of raising a human being to adulthood. The implication: AI training is actually energy-efficient relative to the intelligence it produces.

The argument is not entirely without foundation. Training GPT-5 reportedly consumed approximately 50 GWh of electricity - roughly what 4,500 American homes use in a year. A human child costs an estimated 58,000 kWh in food energy alone over 20 years, before accounting for housing, transportation, schooling, and healthcare. But the comparison collapses the moment you consider that AI models require continuous inference energy after training, while humans are remarkably energy-efficient once educated.
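The cited figures can be sanity-checked with back-of-the-envelope arithmetic. This sketch uses the article's own numbers plus one assumed value (an average US household's annual electricity use of roughly 11,000 kWh, which is not stated in the article):

```python
# Sanity check of the energy figures cited above.
# US_HOME_KWH_PER_YEAR is an assumed average, not from the article.

TRAINING_ENERGY_KWH = 50e6      # ~50 GWh reportedly used to train GPT-5
US_HOME_KWH_PER_YEAR = 11_000   # assumed average US household usage
HUMAN_KWH_20_YEARS = 58_000     # estimated food energy to raise a child

homes_equivalent = TRAINING_ENERGY_KWH / US_HOME_KWH_PER_YEAR
print(f"Training energy covers ~{homes_equivalent:,.0f} US homes for a year")

childhoods = TRAINING_ENERGY_KWH / HUMAN_KWH_20_YEARS
print(f"One training run equals ~{childhoods:,.0f} childhoods of food energy")
```

Run as-is, this puts one training run in the same ballpark as the article's "roughly 4,500 American homes" figure, and at several hundred childhoods' worth of food energy - which is why the per-child framing only works if a single model's output is weighed against many human lifetimes of work.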

Altman doubled down at a separate panel, arguing that the economic value generated by AI systems would more than offset their energy costs. "The energy conversation is important, but it should not be a reason to slow down," he said. "The benefits of AI for India alone will be enormous."

The Backlash Was Swift

The response from the summit's own attendees and the broader tech community was pointed.

Sridhar Vembu, CEO of Zoho Corporation and one of India's most prominent tech founders, responded directly: "This is a terrible analogy. A human child grows into a citizen, a parent, a creative force. An LLM grows into a product that generates revenue for one company." Vembu, who has been vocal about sustainable technology development, called Altman's framing "the kind of Silicon Valley thinking that treats externalities as someone else's problem."

Matt Stoller, Director of Research at the American Economic Liberties Project and a persistent critic of Big Tech concentration, wrote that Altman's comments revealed "the fundamental arrogance of the AI industry - comparing their products to human life to justify unlimited resource extraction."

Paris Marx, host of the Tech Won't Save Us podcast, was more blunt: "Sam Altman just compared training ChatGPT to raising a child, and somehow the child came out looking like the better investment."

The energy criticism hits close to home for OpenAI. The company's partnership with Microsoft involves massive data center expansions, and Altman has separately been pursuing multi-hundred-billion-dollar infrastructure deals to secure the compute needed for future models. Environmental groups have increasingly targeted AI companies for their water and electricity consumption, particularly in regions where data centers compete with residential users for grid capacity.

GPT-4o Dies With Two Weeks' Notice

While Altman was defending AI's energy footprint in New Delhi, OpenAI was dealing with a different kind of backlash back home. On February 13, the company officially retired GPT-4o from the ChatGPT consumer interface, replacing it with GPT-4.1 and the newer GPT-5 family as the default options.

The retirement came with approximately two weeks of advance notice - a timeline that caught many users off guard. OpenAI's stated rationale was straightforward: GPT-4o accounted for just 0.1% of total ChatGPT usage, making it inefficient to keep running on dedicated serving infrastructure.

But 0.1% of ChatGPT's user base is not a small number. With OpenAI reporting over 800 million weekly active users, that 0.1% translates to roughly 800,000 people who actively preferred GPT-4o over the newer alternatives - and now have no way to access it through the consumer product.
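The arithmetic behind that estimate is simple, with one caveat: the article treats OpenAI's 0.1% usage share as a share of users, which is an approximation rather than a figure OpenAI reported directly.

```python
# The affected-user estimate: 0.1% usage share applied to the
# reported weekly active user base (usage share used as a proxy
# for user share, per the article's framing).

weekly_active_users = 800_000_000   # "over 800 million" per OpenAI
gpt4o_usage_share = 0.001           # 0.1% of total ChatGPT usage

affected_users = int(weekly_active_users * gpt4o_usage_share)
print(f"~{affected_users:,} users preferred GPT-4o")
```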

The API remains available for developers and businesses, which means the model is not technically dead. But for free-tier and ChatGPT Plus subscribers who relied on GPT-4o's specific behavior - its writing style, its handling of creative tasks, its particular brand of helpfulness - the consumer product is gone.

22,000 Signatures and Counting

The user response has been more organized than OpenAI probably expected. A Change.org petition titled "Bring Back GPT-4o to ChatGPT" had accumulated 22,971 signatures as of February 24. The petition argues that GPT-4o offered a distinct personality and capability profile that newer models have not replicated.

On X, the #Keep4o hashtag has trended intermittently since the retirement announcement. User complaints cluster around several themes:

  • Creative writing degradation - Many users report that GPT-4.1 and GPT-5 produce more generic, safety-filtered creative output compared to GPT-4o
  • Personality loss - GPT-4o had developed what regular users describe as a distinctive conversational voice that newer models lack
  • No migration path - Unlike API users who can pin model versions, consumer ChatGPT users have no way to select legacy models
  • Inadequate notice - Two weeks is not enough time for users who built workflows around a specific model's behavior

The frustration echoes a pattern that has repeated across OpenAI's product history. When the company deprecated the original GPT-4 in favor of GPT-4 Turbo, and later when it introduced changes to ChatGPT's free tier, vocal user segments pushed back against changes imposed without adequate consultation.

The Bigger Pattern

What connects the energy defense and the GPT-4o retirement is a company moving too fast for its own user base to keep up, while simultaneously asking for patience on the consequences.

Altman wants the world to accept that AI's energy consumption is a reasonable trade-off for the technology's benefits. But the GPT-4o retirement shows that OpenAI struggles to manage even the simpler trade-off of retiring a product that 800,000 people actively use. If you cannot give your users a smooth migration path between model versions, the argument that you should be trusted with unprecedented energy resources becomes harder to make.

OpenAI's counter-argument is that progress requires moving forward, and that maintaining legacy models indefinitely is unsustainable. That is a defensible position from an engineering standpoint. But engineering decisions have human costs, and the 22,000 petition signatures suggest that OpenAI underestimated them.

For Altman, whose appearances alongside Anthropic CEO Dario Amodei at the India summit drew considerable attention, the energy defense may have been intended as a preemptive strike against regulation. India is rapidly developing its AI governance framework, and energy consumption is emerging as a key regulatory vector alongside data privacy and national security. Getting ahead of that narrative matters.

Whether the child-rearing analogy helped or hurt that cause is another question entirely.

About the author

Daniel is an AI industry and policy reporter who covers the business side of artificial intelligence - funding rounds, corporate strategy, regulatory battles, and the power dynamics between the labs racing to build frontier models.