
Should AI Agents Get Their Own LLCs? Hacker News Grapples With the Legal Future of Autonomous Code

AI agents are spending money, signing contracts, and creating liability. A growing conversation asks whether they need their own legal entities — and the answer reveals just how unready the law is for autonomous software.

Tags: AI governance · legal entities · AI agents · AI regulation · liability

Here’s a question that sounds like a law school hypothetical but is rapidly becoming a practical problem: when an AI agent spends your money, signs a contract, or causes harm, who’s legally responsible?

A Hacker News discussion this week posed it directly: should AI agents have their own legal entities — LLCs or similar structures — to handle the economic activity they’re increasingly generating?

The top-voted answer was blunt: “When you hire a tax accountant or a lawyer, you are liable for everything they do in your name. Considering that hiring highly educated and highly paid humans doesn’t protect you from liability, even for their mistakes, there’s no way that it could be easily done with a computer program.”

It’s a fair point. But it might also be the wrong frame.

The problem isn’t theoretical anymore

AI agents are no longer just chatbots that write emails. They’re making purchases, executing trades, negotiating contracts, and managing supply chains. OpenAI and Google are building agent frameworks that can autonomously interact with websites, fill forms, and complete transactions. Companies like Artisan AI are selling “AI employees” that handle outbound sales.

When an AI agent overbids on a cloud contract, who pays? When it signs a non-compete that a human lawyer would have flagged, who’s bound? When it makes a purchase that violates sanctions, who goes to court?

Right now, the answer is simple: you do. The person or company running the agent bears all liability. It’s no different from any other software tool — if your accounting software makes an error, you’re on the hook.

But that framework breaks down when agents start acting more like employees than tools. An employee can make decisions you didn’t authorize. Software traditionally can’t — it only does what it’s programmed to do. Except now, with large language models making contextual decisions that no one explicitly coded, the line between “tool” and “actor” is getting blurry.

The LLC-for-AI argument

The Hacker News proposal has a certain elegance. If an AI agent has its own legal entity — an LLC, say — then:

  1. Liability is capped. The LLC’s assets are what’s at risk, not the owner’s personal wealth.
  2. Multiple agents can work together. Agent-LLCs can contract with each other, forming economic networks.
  3. It scales. Instead of manually setting up legal structures for each AI deployment, you could have a standard “agent entity” template.

It’s not as wild as it sounds. The legal concept of a corporate entity is already an abstraction — a “legal person” that exists only on paper and can own property, enter contracts, and be sued. If we can create legal persons for companies, why not for AI agents?

Well. Professor Shawn Bayern demonstrated back in 2017 that anyone can create an LLC controlled entirely by algorithmic rules — no human decision-maker required. The legal infrastructure for AI-controlled entities already exists. It’s just never been stress-tested at scale.

Why it probably won’t work (yet)

The HN commenters identified the core problem: AI isn’t sentient, so it can’t be held liable. Liability requires the capacity to be punished or to suffer consequences. An LLC with no assets and no conscious controller is just a shell — the liability bounces straight back to whoever funded it.

This is the same problem that plagues AI regulation globally. You can write laws about who’s responsible for AI decisions, but enforcement requires a responsible party who exists in the physical world and has something to lose.

The “just wait for sentience” argument — also floated on HN — is a non-starter. Not because sentience is impossible, but because we don’t agree on what it means, and the legal system moves even slower than the technology.

The NZ angle

New Zealand’s AI compliance landscape is still in its “no specific legislation, but plenty of existing obligations” phase. The Privacy Act, the Consumer Guarantees Act, and the Fair Trading Act all apply to AI-driven decisions — but they apply to the person making them, not to the AI.

If a NZ business deploys an AI agent that enters a bad contract, the business is liable. Full stop. There’s no “the AI did it” defense under current law, and the Law Commission hasn’t signaled any interest in creating one.

But here’s where it gets interesting for NZ specifically: our company registration process is famously lightweight. You can incorporate a company in minutes through the Companies Office. If someone wanted to set up an “agent LLC” — a company whose sole director and shareholder is an AI system — the current framework probably doesn’t prevent it. It just doesn’t address what happens when things go wrong.

What’s actually going to happen

The LLC-for-AI idea is ahead of its time, but it’s pointing at something real. As agents become more autonomous, we’ll need legal structures that:

  1. Cap liability for agent operators without eliminating it entirely
  2. Create audit trails for agent decisions (the EU AI Act is already pushing this)
  3. Define “reasonable agent behavior” — a legal standard for what an agent should have done in a given situation, similar to the “reasonable person” standard in negligence law
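The second item above is the most immediately implementable. As a rough illustration of what an audit trail for agent decisions could look like in practice, here is a minimal, hypothetical sketch of a tamper-evident log: each entry records what the agent did and why, chained by hashes so after-the-fact edits are detectable. The `AgentAuditLog` class and its fields are illustrative assumptions, not drawn from any regulation or standard.

```python
import hashlib
import json
import time


class AgentAuditLog:
    """Append-only, hash-chained log of agent decisions.

    Each entry embeds the hash of the previous entry, so any
    retroactive tampering breaks the chain on verification.
    """

    def __init__(self):
        self.entries = []

    def record(self, agent_id, action, context):
        """Log one agent decision and return its hash."""
        prev_hash = self.entries[-1]["hash"] if self.entries else "genesis"
        entry = {
            "timestamp": time.time(),
            "agent_id": agent_id,
            "action": action,    # e.g. "sign_contract", "purchase"
            "context": context,  # the inputs the agent acted on
            "prev_hash": prev_hash,
        }
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self.entries.append(entry)
        return entry["hash"]

    def verify(self):
        """Re-derive every hash; True only if the chain is intact."""
        prev_hash = "genesis"
        for entry in self.entries:
            payload = {k: v for k, v in entry.items() if k != "hash"}
            if payload["prev_hash"] != prev_hash:
                return False
            digest = hashlib.sha256(
                json.dumps(payload, sort_keys=True).encode()
            ).hexdigest()
            if digest != entry["hash"]:
                return False
            prev_hash = entry["hash"]
        return True
```

The point of the hash chain is evidentiary: if an agent's decisions ever end up in court, the operator needs to show the record was not rewritten after the fact. Modifying any logged entry, or deleting one from the middle, makes `verify()` return `False`.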

These will likely emerge through regulation and case law, not through assigning LLCs to chatbots. The first major lawsuit where an AI agent’s decisions cause real financial harm will set the precedent. And that lawsuit is probably already filed somewhere.

The HN discussion is worth reading not because the answers are right, but because the question is becoming unavoidable. When code can contract, the law has to figure out what contracts mean when there’s no human on one side of the table.

For now, the answer is still: you are. But the window where that answer is sufficient is closing faster than anyone expected.


Sources: Hacker News, Yale Law Journal