
The US Senate Wants Your Government ID Before You Can Talk to ChatGPT

Unanimous Senate committee vote would require government ID to use any AI chatbot. No parental consent option. No appeals. Just a national identity system wrapped in child-safety language.

AI Regulation · GUARD Act · Privacy · Age Verification · Government ID

The US Senate Judiciary Committee just voted 22-0 to advance a bill that would require every American to upload government identification before using any AI chatbot. The bill is called the GUARD Act, and if it becomes law, it creates something the United States has never had: a national identity verification system for digital services.

Senator Josh Hawley, the bill’s sponsor, celebrated the unanimous vote: “My bill to stop AI from telling kids to kill themselves just passed out of committee UNANIMOUSLY.” It’s hard to argue with that framing. It’s also hard to notice what the bill actually does when the framing is that tight.


What the GUARD Act Actually Requires

The bill doesn’t say “verify age for companion apps.” It says verify age for any service that “produces new expressive content or responses not fully predetermined by the developer or operator” and “accepts open-ended natural-language or multimodal user input.”

That’s every chatbot. ChatGPT. Claude. Gemini. Your bank’s customer service bot. The homework helper your kid uses. The search assistant built into your phone.

A “reasonable age verification measure” cannot be a checkbox. It cannot be a self-entered birth date. It cannot rely on shared IP addresses. What it can be: a government ID upload, a facial scan, or a financial record tied to your legal name.

Every user of every covered chatbot would need to hand one of those over before being allowed in.
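The coverage and verification rules above can be read as a two-prong statutory test plus an allow-list of verification methods. Here is a minimal sketch of that logic; all names and the boolean structure are illustrative assumptions, not language from the bill itself:

```python
# Hypothetical sketch of the GUARD Act's coverage and verification tests.
# Function names, sets, and structure are illustrative, not from the bill.

def is_covered_chatbot(generates_novel_responses: bool,
                       accepts_open_ended_input: bool) -> bool:
    """A service is covered when both statutory prongs are met:
    it produces responses not fully predetermined by the operator,
    and it accepts open-ended natural-language or multimodal input."""
    return generates_novel_responses and accepts_open_ended_input

# Methods the bill treats as "reasonable age verification":
ACCEPTED_METHODS = {"government_id", "facial_scan", "financial_record"}
# Methods it explicitly rules out:
REJECTED_METHODS = {"checkbox", "self_entered_birthdate", "shared_ip"}

def verification_is_reasonable(method: str) -> bool:
    return method in ACCEPTED_METHODS

# A general-purpose assistant meets both prongs, so it is covered:
print(is_covered_chatbot(True, True))                        # True
# A self-entered birth date does not satisfy the bill:
print(verification_is_reasonable("self_entered_birthdate"))  # False
```

The point of the sketch is how broad the first function is: nothing in either prong distinguishes a companion app from a bank's support bot, which is why the coverage sweeps in every service listed above.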


Here’s where the child-safety framing starts to fray. The bill contains no parental consent mechanism. A parent cannot decide their fifteen-year-old is allowed to use a homework chatbot. The government decides, and under this bill the answer is no.

There’s also no appeals process. If an age-verification algorithm decides you’re under 18 — wrong or not — you’re locked out. Period. Regardless of what your parents think. Regardless of what your actual birth certificate says.

This isn’t protecting parental authority. It’s replacing it.


The Honeypot Problem

Trade group NetChoice warned the committee that the bill would “force AI companies to collect and store highly sensitive personal data into honeypots ripe for cybercriminals to exploit through breaches, identity theft and fraud.”

They’re not wrong. Age-verification vendors have been breached repeatedly, exposing government IDs and biometric scans of millions of users. The GUARD Act would multiply those targets by routing every AI interaction in the country through similar collection systems.

The bill requires periodic re-verification, which means your sensitive identity documents either sit in a company database waiting for a breach, or you re-upload them on a schedule. Both options are surveillance infrastructure.


The Market Consolidation Effect

Smaller developers face $100,000 per-offense penalties for non-compliance. Building age-verification infrastructure is expensive. The companies that can absorb that cost — OpenAI, Google, Anthropic — end up consolidating the market. The small open-source alternatives? They’ll strip features until they no longer trigger the bill’s definitions, or block US users entirely.

This is how you get a two-tier AI economy: big companies with identity checkpoints, and everyone else locked out.


The Preemption Trojan Horse

Senator Marsha Blackburn intends to fold the GUARD Act into her TRUMP AI Act, which would preempt conflicting state AI laws. The GUARD Act itself contains a similar preemption clause, displacing state laws that conflict with it while carving out room for states to legislate separately for children under 13.

Federal preemption of state AI rules has been controversial. The GUARD Act offers a narrower vehicle for the same outcome, packaged inside child-safety language that makes opposition politically expensive. It’s hard to vote against a bill called GUARD that passed committee 22-0.


The Real Question

The criminal provisions — fines of up to $100,000 per offense for companies that knowingly design chatbots that solicit sexual content from minors or encourage self-harm — respond to real tragedies. Parents of children who harmed themselves after extended interactions with AI companions sat in the committee room during the markup. Their pain is real. Their cause is just.

The question is whether a national ID-verification regime is the right response. Whether building identity infrastructure that reaches every chatbot — including the ones nobody alleges caused harm — actually addresses the problem, or simply uses the worst cases as leverage for something much broader.

The infrastructure being authorized here will not check whether a user is a child before it asks for their ID. It will ask everyone. That’s what the bill requires. It’s also worth asking whether that’s what the bill is for.


🇳🇿 The NZ Angle

New Zealand has no equivalent legislation. There’s no serious proposal to require government ID for AI access. The Privacy Act 2020 and the Privacy Commissioner’s existing guidance on AI would treat mandatory biometric collection for chatbot access as a significant privacy intrusion requiring clear justification and consent.

NZ’s approach to AI regulation has been principles-based rather than prescriptive — guidance from MBIE for responsible AI use, the Privacy Commissioner’s framework, sector-specific rules. It’s less dramatic than a 22-0 Senate vote, but it also doesn’t build national identity infrastructure by accident.

If the GUARD Act passes and US companies start requiring ID verification, the question for NZ is whether those requirements get exported. Global AI platforms tend to apply US compliance requirements worldwide rather than building country-specific flows. New Zealanders could end up uploading drivers’ licences to use ChatGPT — not because our parliament voted for it, but because the US Senate did.


🔍 The Bottom Line

The GUARD Act is the most significant AI regulation you’ve never heard of, because it’s packaged as child safety and it passed unanimously. But read the text: it creates a national identity verification system for AI access, eliminates parental consent, provides no appeals, and hands the resulting biometric and identity data to private companies under the weakest data-minimization language you can imagine.

The EU banned emotion AI in workplaces entirely. The US Senate’s response to AI safety concerns is to build the infrastructure to track everyone who uses it. Different planets, different approaches.

This bill has momentum. The full Senate vote comes next, then the House. The pattern of recent age-verification legislation suggests the substantive privacy questions will keep being asked, and keep being answered with the argument that any cost is acceptable if children are invoked.


Sources

Reclaim The Net, Senate Judiciary Committee, NetChoice