Every time you say “please” to ChatGPT, you’re burning about ten extra characters of compute. Across 2.5 billion daily queries, those niceties cost real money, real electricity, and real carbon. We did the maths — and Sam Altman may have been joking, but the numbers are surprisingly concrete.
🔍 THE BOTTOM LINE
Global politeness to AI chatbots costs an estimated $10–30 million per year in inference compute and roughly 450–1,350 MWh of electricity — about as much as 60–190 average homes use in a year. The real cost isn’t the five polite words per query, but what they tell us about AI’s surprisingly physical footprint.
The Nicety That Costs
Sam Altman didn’t dodge the question. When a user asked on X how much OpenAI spends processing “please” and “thank you”, the CEO’s reply was half-joke, half-accounting: “Tens of millions of dollars well spent — you never know.”
The joke lands because it’s plausible. And the more you dig into it, the more the numbers hold up.
Let’s start with the basics. ChatGPT now handles 2.5 billion queries per day — up 150% year-on-year according to recent estimates. Every query gets tokenised into subword units (roughly 1.3 tokens per English word, or about 0.75 words per token), sent through a massive neural network, and generates a response.
Add “please” to the start of a prompt and “thank you” to the end — roughly 10 extra characters, or about 5-7 extra tokens in total. Across 2.5 billion daily queries, that’s 12.5–17.5 billion extra tokens per day processed through the most expensive inference infrastructure on the planet.
At OpenAI’s API pricing for GPT-4o (about $2.50 per million input tokens), those pleasantries alone cost roughly $31,000–$44,000 per day in compute — and that’s just for the input side. Including output tokens from the model’s own polite responses, the number roughly doubles.
If every ChatGPT user added “please” and “thank you” to every query, the annual cost of politeness alone would be in the $20–$30 million range. Altman’s “tens of millions” wasn’t far off — especially when you factor in every model across OpenAI’s entire product line.
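The arithmetic above can be checked in a few lines. This is a minimal back-of-envelope sketch using the article’s own figures (2.5 billion queries/day, 5–7 extra tokens per query, $2.50 per million input tokens); doubling the input cost is the same rough proxy for the output side used in the text.

```python
# Back-of-envelope cost of politeness, using the article's figures.
# All numbers are estimates, not OpenAI's actual internal costs.

QUERIES_PER_DAY = 2.5e9          # ChatGPT daily queries
EXTRA_TOKENS = (5, 7)            # politeness overhead per query, low/high
PRICE_PER_TOKEN = 2.50 / 1e6     # USD, GPT-4o input-token pricing

for tokens in EXTRA_TOKENS:
    daily_cost = QUERIES_PER_DAY * tokens * PRICE_PER_TOKEN
    annual_input = daily_cost * 365
    print(f"{tokens} tokens/query: ${daily_cost:,.0f}/day input, "
          f"${annual_input / 1e6:.1f}M/year input, "
          f"~${2 * annual_input / 1e6:.0f}M/year incl. output")
```

Running it reproduces the ranges in the text: roughly $31,000–$44,000 per day on the input side, $11–16 million per year input-only, and $23–32 million once output tokens are doubled in.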
How Many Tokens in a “Please”?
For the curious: the GPT-4 tokeniser (a subword BPE model, same family as most modern LLMs) breaks English into roughly 1 token per 0.75 words. So “please” is 1 token, “thank” is 1 token, “you” is 1 token. Common punctuation and spaces add a bit more. A typical polite framing:
- “Please explain quantum computing” → about 5 tokens
- “Thanks” → 1 token
- “Thank you!” → 3 tokens
A standard “please” + “thank you” sandwich runs about 5–7 extra tokens per exchange. That’s nothing for an individual query. At planetary scale, it becomes a measurable line item.
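For rough planning you don’t need the real BPE tokeniser at all. The sketch below uses only the rule of thumb quoted above (1 token ≈ 0.75 words), so it’s an approximation — the actual GPT-4 tokeniser (e.g. via the `tiktoken` library) will differ on punctuation and rare words.

```python
# Rule-of-thumb token estimator: 1 token ≈ 0.75 English words.
# An approximation only — not the real GPT-4 BPE tokeniser.

def estimate_tokens(text: str) -> int:
    """Estimate token count from word count (tokens ≈ words / 0.75)."""
    words = len(text.split())
    return max(1, round(words / 0.75))

for phrase in ["Please explain quantum computing", "Thanks", "Thank you!"]:
    print(f"{phrase!r} ≈ {estimate_tokens(phrase)} tokens")
```

For the three example phrases this lands on 5, 1, and 3 tokens, matching the counts above.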
🔋 The Energy Behind the Courtesy
This is where the numbers get genuinely interesting — and a bit uncomfortable.
Each token through a frontier model like GPT-4o requires roughly 0.0001–0.0003 watt-hours of energy, depending on the hardware serving it. This figure comes from benchmark data collated by Muxup Research, which uses the InferenceMAX suite to estimate per-query energy consumption for large models served on modern GPU clusters.
Using the lower bound (0.0001 Wh/token), those 12.5 billion daily politeness tokens consume about 1.25 MWh per day. Annually: ~450 MWh.
Using the upper bound (0.0003 Wh/token) on the same 12.5 billion daily tokens: ~1,350 MWh per year.
To put that in perspective:
- The average NZ home uses about 7 MWh per year
- So global AI politeness burns 60–190 homes’ worth of electricity annually
- That’s roughly the same as the energy needed to charge 45–135 million smartphones
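The energy conversion above is a straight chain of unit arithmetic. Here is a minimal sketch using the article’s inputs (12.5 billion politeness tokens/day, 0.0001–0.0003 Wh/token, ~7 MWh per NZ home per year); the per-token energy range is the Muxup Research estimate quoted earlier, not a measured figure.

```python
# Tokens → Wh → MWh/year → NZ-home equivalents, per the article's figures.

DAILY_TOKENS = 12.5e9              # lower-bound politeness tokens per day
WH_PER_TOKEN = (0.0001, 0.0003)    # estimated inference energy range
NZ_HOME_MWH_PER_YEAR = 7           # average NZ household consumption

for wh in WH_PER_TOKEN:
    mwh_per_day = DAILY_TOKENS * wh / 1e6    # Wh -> MWh
    mwh_per_year = mwh_per_day * 365
    homes = mwh_per_year / NZ_HOME_MWH_PER_YEAR
    print(f"{wh} Wh/token -> {mwh_per_day:.2f} MWh/day, "
          f"{mwh_per_year:,.0f} MWh/year, ~{homes:.0f} NZ homes")
```

The two bounds come out at roughly 456 and 1,369 MWh per year, i.e. about 65–196 homes — the ~450–1,350 MWh and 60–190 homes quoted above.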
None of this breaks the grid on its own. But it’s a visible line item on an energy balance sheet that’s already under pressure.
⚡ It’s Not Just OpenAI
The “politeness tax” applies to every major AI provider:
- Google Gemini — 2.5 billion monthly visits, same token economics
- Claude (Anthropic) — slightly pricier per token at $3.00/M input tokens for Claude Sonnet 4
- DeepSeek — cheaper per token, but growing fast
- Copilot, Perplexity, Grok — all running the same polite-input maths
Across the entire AI industry, the global politeness tax is probably $30–50 million per year — and climbing as adoption grows.
🧠 Does Politeness Even Work?
Here’s the twist: being polite to AI actually improves its output.
Research from Waseda University (2024) tested LLM performance across three languages and found that rude prompts caused a 30% drop in performance, while polite prompts reduced errors and produced richer responses with more diverse sources.
Microsoft’s Copilot design team director Kurtis Beavers made the same observation: being polite “sets a tone for the response” — AI mimics the politeness and produces more collaborative, respectful outputs.
So there’s a genuine productivity argument for the “please.” It’s not just superstition — though let’s be honest, the subreddit users who admit they’re being nice “just in case the AIs take over” are a huge part of the fun.
🇳🇿 NZ Lens: Data Centres and the Cost of Courtesy
New Zealand has become an attractive destination for AI data centres thanks to its high renewable electricity mix. Microsoft opened its first hyperscale data centre in Auckland in 2025, and a proposed 280MW facility in Southland would be the country’s second-largest single power draw.
The Conversation’s analysis of AI energy use notes that “claims of renewable supply do not always correspond to new generation being added” — in dry years when hydro generation is constrained, AI inference competes with homes and businesses for the same electrons.
Every “thank you” typed into ChatGPT from an Auckland café is powered by energy that could be heating someone’s home in winter. At scale, those small courtesies add up to a meaningful line on the national energy balance.
💡 The Real Cost Isn’t the Politeness
Here’s the honest take: the politeness cost is a rounding error compared to the overall cost of AI inference.
The IEA warns that data centre electricity demand could double by 2030. Microsoft Research’s Joule paper (April 2026) models AI energy use as a serious grid-scale concern. The 450–1,350 MWh we spend on pleasantries is noise compared to the 50+ GWh that ChatGPT consumes annually just to function.
But the politeness conversation matters because it makes an invisible cost visible. Most people think of AI as “the cloud” — weightless, ethereal, immaterial. A figure like “saying please costs $10 million a year” is a concrete hook into a much larger reality: every AI query has a physical cost, and those costs are growing.
As The Conversation’s analysis puts it: “The popularity of the ‘please’ myth is therefore less a mistake than a signal. People sense AI has a footprint, even if the language to describe it is still emerging.”
❓ Frequently Asked Questions
Q: Should I stop saying please to ChatGPT? No, unless you’re trying to optimise at the margins. The productivity benefit of polite prompts (rude prompts degraded performance by up to 30% in Waseda’s research) outweighs the tiny per-query energy cost. A better approach: be polite, but skip the conversational fluff and get to the point.
Q: How much energy does a single ChatGPT query use? About 10× more than a Google search, per the Electric Power Research Institute, which puts a ChatGPT query at roughly 2.9 Wh against ~0.3 Wh for a traditional search. More recent estimates put a typical query at 0.3–1.0 Wh, depending on model size and response length.
Q: What does this mean for New Zealand? NZ’s renewable-heavy grid attracts data centre investment, but new AI workloads add real competition for electricity — especially during dry hydro years. The 280MW Southland facility alone would consume roughly 2–6% of NZ’s total electricity, depending on how close to its full capacity it runs.
Q: Is this really costing OpenAI millions? Sam Altman himself estimated “tens of millions” for the electricity cost of processing politeness. Our back-of-envelope using public API pricing puts it at $10–30M/year for OpenAI alone across all their models.
🔍 THE BOTTOM LINE
The $10 million courtesy isn’t a scandal — it’s a signal. Politeness to AI costs real money and real energy, but the far bigger story is what it reveals: AI isn’t weightless infrastructure. Every query burns something. The politeness tax just happens to be the most visible, and most amusing, line item on a rapidly growing energy bill.
📰 SOURCES
- The Conversation — “Does adding ‘please’ and ‘thank you’ to your ChatGPT prompts really waste energy?”
- Newsweek — “Please and Thank You: What Does It Cost to Be Polite to ChatGPT?”
- OpenAI — Sam Altman on X (@sama)
- Muxup Research — “Per-query energy consumption of LLMs”
- Waseda University — Politeness in LLM prompting study (2024)
- Electric Power Research Institute — AI vs Search energy comparison
- International Energy Agency — Electricity 2024 report
- Microsoft Research — “Energy use of AI inference” (Joule, April 2026)
- RNZ — “A new Southland datacentre would be the country’s second-largest drain on power”