
Three States Pass AI Laws in 48 Hours While White House Pushes Federal Preemption

States are regulating AI faster than Congress can agree on a framework. The White House's solution? Preempt them.

AI regulation · State legislation · Federal preemption · Nebraska · Maryland

Over a 48-hour span ending on April 12, 2026, three US states passed AI legislation — each measure targeting a different high-risk application of artificial intelligence. Nebraska cracked down on deceptive chatbots. Maryland forced AI pricing into the open. Maine banned AI-only mental health therapy.

The burst of state-level action underscores a growing reality: states are not waiting for Congress. And that has the White House worried about a patchwork of conflicting rules.


Nebraska: Conversational AI Safety Act

Nebraska’s Conversational AI Safety Act targets two specific harms. First, it requires AI chatbots to disclose their artificial nature when interacting with minors. Second, it bans chatbots from making false claims about providing mental health services.

The law responds to growing concerns about minors forming emotional attachments to AI companions and being unable to distinguish them from human counselors. Nebraska is the first state to explicitly require chatbot disclosure for underage users.


Maryland: AI Pricing Transparency

Maryland’s new law takes aim at a different problem: algorithmic pricing that consumers can’t see. The legislation requires businesses to disclose when AI systems set or influence pricing decisions.

The rule addresses growing use of dynamic pricing algorithms — from surge pricing on ride-shares to individualized product pricing based on browsing behavior. Maryland’s approach gives consumers a right to know when a machine, not a human, decided what they pay.


Maine: Healthcare AI Restrictions

Maine’s HB 2082 is the most restrictive of the three bills. It bars AI from providing mental health therapy without a licensed human professional involved in the process. The law effectively bans AI-only therapy chatbots from operating in the state’s healthcare system.

Maine’s legislators cited cases where vulnerable patients relied on AI chatbots for crisis counseling — without any clinical oversight. The law does not ban AI as a supplement to therapy, but it draws a hard line: no AI-only mental health treatment.


The White House Response: Federal Preemption

These three state laws landed at a particularly awkward moment for the White House. In early April 2026, the administration released a National AI Policy Framework with seven legislative recommendations — including a call for federal preemption over conflicting state AI laws.

The framework outlines priorities covering AI safety, innovation, free speech protections, workforce development, IP safeguards, and small business AI access. But the preemption clause is the contentious one. If enacted, it could invalidate state-level AI rules like those just passed in Nebraska, Maryland, and Maine.

The FTC is already enforcing AI-related consumer protections without a federal AI law in place. Meanwhile, the DOJ’s AI Litigation Task Force may challenge state laws — including Colorado’s AI Act — on federal preemption grounds.


The Patchwork Problem

The core tension is real. AI companies operating across all 50 states face a growing compliance maze. Nebraska says chatbots must identify themselves to minors. Maryland says businesses must disclose algorithmic pricing. Maine says no AI-only therapy. Other states have their own rules — and more are coming.

But the states argue that Congress has had years to act and hasn’t. In the absence of federal legislation, they’re filling the gap with targeted rules that address specific harms in their communities.

The question isn’t whether AI needs regulation. It’s who gets to decide what that regulation looks like — and whether federal preemption protects consumers or just protects companies from having to comply.


Why This Matters Beyond the US

New Zealand is watching. As the NZ government debates its own AI regulatory approach, the US state experiment offers a live case study in what targeted AI governance looks like in practice — and what happens when federal and state priorities collide.

The specific harms these three states addressed — deceptive chatbots, opaque pricing, unqualified AI therapy — are not unique to America. They exist everywhere AI is deployed. The difference is that some jurisdictions are already acting on them.


SOURCES

  • Nebraska Legislature — Conversational AI Safety Act
  • Maryland General Assembly — AI Pricing Transparency Bill
  • Maine Legislature — HB 2082
  • White House Office of Science and Technology Policy — National AI Policy Framework