News

South Africa's AI Policy Was Written by AI — And the AI Made Up All the Sources

The country that was supposed to regulate AI couldn't even verify that its AI policy's sources were real. The AI made them up. The humans didn't check.

South Africa · AI Policy · AI Hallucinations · Governance · AI Regulation

There’s a particular kind of irony that feels almost too on-the-nose to be real, except it is.

South Africa’s Department of Communications and Digital Technologies released its Draft National Artificial Intelligence Policy last week — an 82-page document meant to position the country as a leader in AI governance across the African continent. The problem? At least six of its academic citations were fabricated. Made up. Hallucinated by the AI tools used to help write the very policy designed to regulate AI.

Communications Minister Solly Malatsi withdrew the policy on April 27, calling it an “embarrassing oversight.” That’s one way to put it.

What Happened

The draft policy, released for public comment, contained references to academic papers and authors that simply don’t exist. Journal names were wrong. Author names were unverifiable. Some cited papers appeared plausible at first glance — realistic-sounding titles in real-sounding journals — but dissolved under the slightest scrutiny.

Tech journalists and academics spotted the anomalies within days. The references had the hallmarks of classic AI hallucination: confident, detailed, and entirely fictional, exactly what large language models produce when pressed for sources they don't have.

MyBroadband called it a “scandal.” News24 reported that experts believed the fictitious references were “AI hallucinations.” Which, again — the policy was about AI. And the AI made up its own supporting evidence. You couldn’t write a better satire if you tried.

Why This Matters Beyond South Africa

South Africa isn’t some outlier making a rookie mistake. This is a warning for every government, consultancy, and organisation rushing to publish AI strategy documents using AI tools.

The pattern is clear: someone asks an LLM to help draft policy, the model generates plausible-sounding citations, and nobody bothers to verify them before publishing. This has happened in legal filings, academic papers, and now national policy documents.

The difference is scale. A single lawyer filing fake cases affects one case. A national AI policy with fabricated sources undermines trust in an entire country’s governance framework — precisely the kind of framework meant to ensure AI is trustworthy.

The Governance Gap

South Africa’s experience exposes a structural problem that’s only going to get worse:

  1. Speed pressure. Governments feel they need AI policies now, before the technology outruns regulation. This creates incentive to use AI tools to accelerate drafting.

  2. Verification bottleneck. The same pressure that makes teams use AI for drafting also makes them skip the tedious work of checking every citation. The AI produced the hallucinations, but human laziness published them.

  3. Competence asymmetry. The people writing AI policy often aren’t the people who understand AI well enough to spot its failures. If your team can’t tell a hallucinated citation from a real one, your team shouldn’t be using AI to write policy about AI.
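The verification work in point 2 is also the easiest part to take off a human's plate, at least as a first pass. A minimal sketch of the idea, checking each cited title against Crossref's public works API (the cited title and author below are invented examples, not references from the actual policy):

```python
import urllib.parse

CROSSREF_API = "https://api.crossref.org/works"

def crossref_query_url(title: str, author: str, rows: int = 3) -> str:
    """Build a Crossref works query for one citation.

    Fetching this URL and comparing the returned records against the
    citation is the checking step the policy's drafters skipped. A
    citation that matches nothing in the result set is a candidate
    hallucination and needs a human look before publication.
    """
    params = {
        "query.bibliographic": title,
        "query.author": author,
        "rows": str(rows),
    }
    return f"{CROSSREF_API}?{urllib.parse.urlencode(params)}"

# Hypothetical citation pulled from a draft's reference list:
url = crossref_query_url("Artificial Intelligence and Public Governance", "Smith")
```

This doesn't replace human review; fuzzy title matches and preprints still need judgment. But it turns "check every citation" from a week of tedium into an afternoon of triage, which removes the main excuse for skipping it.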

For New Zealand readers, this hits close to home. NZ is developing its own AI regulatory framework, and the temptation to use AI tools in that process is real. South Africa’s face-plant is a reminder: the tool that makes you faster is also the tool that makes you wrong faster.

What Happens Next

Malatsi has committed to revising and reissuing the policy with proper vetting. The department will restart its consultation process from scratch.

But the damage to credibility is done. Every future AI policy document from South Africa — and from any government that watched this unfold — will be scrutinised for the same flaw. Which, honestly, is probably a good thing.

🔍 The lesson isn’t that AI is bad at writing policy. It’s that humans are bad at checking AI’s work — and when you’re writing the rules for AI, that’s not a flaw you can afford.


Sources

Reuters · News24 · MyBroadband