Why OpenAI Spends Millions on Your "Please" and "Thank You"

Unpacking Sam Altman's surprising take on the value of being nice to machines

We've all been there. You're asking ChatGPT for help – maybe drafting an email, brainstorming ideas, or settling a debate – and you instinctively type "please" and "thank you." It feels natural, right? Just good manners.

But have you ever stopped to think about what those extra words cost?

Recently on X (formerly Twitter), a user posed this very question, wondering how much money OpenAI might be losing in electricity costs simply from users being polite to their AI models.

The reply came from none other than OpenAI CEO Sam Altman himself: "tens of millions of dollars well spent--you never know"

Wait, tens of millions? Just for pleasantries? Let's break that down.

Tokens, Computation, and Kilowatts

Every time you interact with an AI like ChatGPT, your input (the prompt) and its output (the response) are broken down into smaller units called tokens. Think of tokens as pieces of words. "Please" might be one token, "thank" another, and "you" a third.

Each token requires computational power to process. More tokens in your prompt mean more calculations the AI needs to perform to understand your request and generate a response. More calculations mean more processing time on powerful, energy-hungry hardware (like GPUs) housed in massive data centers. These data centers also require significant energy for cooling.
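To make the token idea concrete, here is a rough sketch using the common approximation of about four characters per token. Real tokenizers (such as OpenAI's tiktoken library) split text differently, so these counts are illustrative only:

```python
# Back-of-envelope token counting with the rough "~4 characters per token"
# heuristic. A real tokenizer would give slightly different numbers.

def estimate_tokens(text: str) -> int:
    """Rough token estimate: about 4 characters per token, minimum 1."""
    return max(1, round(len(text) / 4))

plain = "Draft a short email declining a meeting."
polite = "Please draft a short email declining a meeting. Thank you!"

extra = estimate_tokens(polite) - estimate_tokens(plain)
print(f"Plain prompt:  ~{estimate_tokens(plain)} tokens")
print(f"Polite prompt: ~{estimate_tokens(polite)} tokens")
print(f"Extra tokens from politeness: ~{extra}")
```

A handful of extra tokens per prompt sounds trivial, and for any one user it is; the point is what happens when you multiply it out.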

So, yes, adding "please," "thank you," or any other extra words literally translates into increased electricity consumption and, therefore, higher operational costs for OpenAI. While a single polite prompt's cost is minuscule, multiply that by the hundreds of millions of users interacting with ChatGPT, and suddenly "tens of millions" starts to make sense.
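As a back-of-envelope illustration of that scaling, the sketch below multiplies a few assumed figures together. Every number here (extra tokens per polite prompt, blended compute cost per thousand tokens, daily prompt volume) is an invented placeholder for illustration, not an actual OpenAI figure:

```python
# Illustrative arithmetic only: all constants below are assumptions chosen
# to show how tiny per-token costs compound at scale.

EXTRA_TOKENS_PER_PROMPT = 4         # assumed: "please" + "thank you"
COST_PER_1K_TOKENS_USD = 0.01       # assumed blended compute cost
PROMPTS_PER_DAY = 1_000_000_000     # assumed daily prompt volume
DAYS_PER_YEAR = 365

extra_cost = (
    (EXTRA_TOKENS_PER_PROMPT / 1000)
    * COST_PER_1K_TOKENS_USD
    * PROMPTS_PER_DAY
    * DAYS_PER_YEAR
)
print(f"Assumed annual cost of politeness: ${extra_cost:,.0f}")
```

With these made-up inputs the total lands around $15 million a year, which is at least the right order of magnitude for Altman's "tens of millions" remark.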

Why Bother Being Polite to a Bot?

It's a valid question! ChatGPT doesn't have feelings (yet!). But surveys suggest many of us do it anyway. A 2024 survey by TechRadar found 67% of American users are polite to AI assistants. Why? 55% said it just felt like the right thing to do, while a perhaps more... forward-thinking 12% admitted they did it just in case of a future AI uprising (better safe than sorry, right?).


Interestingly, some in the AI field believe politeness might actually yield better results. Microsoft's design manager, Kurtis Beavers, suggested that using polite language "helps generate respectful, collaborative outputs" and "sets a tone for the response." It seems politeness might subtly nudge the AI towards a more helpful and agreeable interaction style.

The Bigger Picture: AI's Voracious Energy Appetite

This amusing exchange about manners highlights a much larger, more serious issue: the enormous energy demands of artificial intelligence.

  • Training vs. Running: Training large language models (LLMs) like those behind ChatGPT is incredibly energy-intensive. Training GPT-3 was estimated to consume nearly 1,300 megawatt-hours (MWh) of electricity, roughly equivalent to the annual power consumption of 130 US homes. Training newer, larger models like GPT-4 requires significantly more – potentially 50 times as much! (Source: World Economic Forum).

  • Daily Operations: And that's just training. The daily energy cost of running ChatGPT for its millions of users is also staggering. One estimate puts the daily energy consumption around 1 gigawatt-hour (GWh), comparable to the daily energy use of about 33,000 US households (Source: Sajjad Moazeni, University of Washington).

  • Query Cost: Estimates vary, but a single ChatGPT query is often cited as using significantly more energy than a standard Google search – perhaps around 10 times more, consuming anywhere from 0.3 to several watt-hours depending on complexity and length (Sources: Epoch AI, WEF).
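Those household comparisons are easy to sanity-check against published US averages. Assuming roughly 10,600 kWh per home per year (about 29 kWh per day, approximately the EIA's average for a US residence), the cited figures line up:

```python
# Cross-checking the article's household comparisons against approximate
# US averages (EIA-style figures; treat as round numbers, not exact).

US_HOME_KWH_PER_YEAR = 10_600   # approximate average annual use per US home
US_HOME_KWH_PER_DAY = 29        # approximate average daily use per US home

gpt3_training_mwh = 1_300       # cited GPT-3 training estimate
homes_for_training = gpt3_training_mwh * 1_000 / US_HOME_KWH_PER_YEAR
print(f"GPT-3 training ≈ {homes_for_training:.0f} US homes for a year")

chatgpt_daily_gwh = 1           # cited daily operating estimate
households_per_day = chatgpt_daily_gwh * 1_000_000 / US_HOME_KWH_PER_DAY
print(f"Daily operation ≈ {households_per_day:,.0f} US households' daily use")
```

Both results land close to the "~130 homes" and "~33,000 households" comparisons quoted above, so the cited estimates are at least internally consistent.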

Data centers powering AI are becoming a major driver of global electricity demand. The International Energy Agency (IEA) notes that the data center industry already accounts for a significant chunk of global energy use and greenhouse gas emissions.

"Well Spent" & The Need for an Energy Breakthrough

So, why would Altman call those millions spent on processing politeness "well spent"? Perhaps it's about fostering positive interaction habits for a future where we engage even more deeply with AI. Maybe it's a tiny investment in ensuring future, potentially more powerful AI, sees us as cooperative partners. Or maybe it's just good PR.

Whatever the reason, Altman is acutely aware of the energy challenge. He has repeatedly stated that future AI progress will require an "energy breakthrough," specifically pointing towards nuclear fusion as a potential solution. He's even personally invested hundreds of millions in fusion energy companies like Helion Energy (Sources: Reuters, Popular Science, Verdict). He argues that without a radical shift in energy production – potentially through fusion or much cheaper solar power – the immense power demands of future AI models simply won't be sustainable.

The Path Forward: Efficiency and Innovation

While waiting for fusion, researchers are working on making AI more energy-efficient. This includes:

  • Developing smaller, yet still powerful, models.

  • Optimizing algorithms to require less computation.

  • Designing more efficient hardware specifically for AI tasks (some research has shown LLMs running on the power equivalent of a lightbulb! Source: UCSC News).

  • Exploring techniques like "on-device AI," where processing happens locally on your phone or computer rather than in a distant data center (Source: WEF).

So, the next time you catch yourself saying "please" to ChatGPT, remember: your manners are contributing (in a tiny way) to a multi-million dollar electricity bill and highlighting one of the biggest challenges facing the future of AI. It's a fun thought, but also a reminder of the serious work needed to ensure AI's incredible potential doesn't come at an unsustainable environmental cost.

Maybe those "tens of millions" really are well spent if they keep us thinking about these bigger questions. What do you think?