Imagine discovering that your university took your lectures, had an AI chop them into snippets, mispronounced the literary critic you were discussing, slapped a $5/month price tag on the result, and never told you. Welcome to Arizona State University, 2026.
ASU’s “Atomic AI Course Builder” — a beta web app launched in April 2026 — does exactly that. Powered by Anthropic’s Claude, it draws from faculty-uploaded Canvas materials (video lectures, slide decks, assignments), clips them into short segments, and repurposes them into customised courses with quizzes and readings. Topics range from project management to investing to history.
The faculty found out through word of mouth. Nobody asked them. Nobody warned them. And the content being produced with their faces and voices? Not always accurate.
The “Client Brooks” Problem
Humanities professor Chris Hanlon discovered that a one-minute clip from his 12-minute lecture on literary critic Cleanth Brooks had been spliced into an AI-generated module. Claude transcribed the name as “Client Brooks.” His face and voice were altered in the output. The module bore no resemblance to his actual course or teaching intent.
He called it “Frankensteinian.” That word does a lot of heavy lifting.
Michael Ostling, another faculty member, raised alarms about inaccurate content misleading students and the risk of bad actors fabricating “evidence” from sensitive topics — race, gender, Middle East conflicts. When your AI course builder can remix a professor’s lecture on colonial history into something that misrepresents their position, you haven’t created an educational tool. You’ve created a liability.
The IP Trap
Here’s where it gets particularly sticky. The Arizona Board of Regents claims ownership of “any intellectual property created by a university or Board employee in the course and scope of employment.” That includes most instructional materials on Canvas. Scholarly works might be exempt — unless they use “significant” university resources.
So ASU’s position is essentially: we own your lectures, so we can feed them to our AI, and you don’t get a say. It’s legally plausible and ethically astonishing.
Faculty argue this enables unauthorised scraping, modification, and commercialisation of their labour without control over their likeness or licensing. Worse, because the content has been scattered across servers and YouTube, removing it is somewhere between difficult and impossible.
Arizona is a non-union state, so faculty have limited avenues for pushback. Reddit threads show envy of unionised faculty elsewhere who’ve successfully pursued grievances and won compensation. At ASU, there are contractual obligations and genuine fears of retaliation.
ASU’s Damage Control
President Michael Crow called it an “early stage experiment” at a faculty Q&A, acknowledged that curriculum concerns were “legitimate,” and noted that no formal evaluation had yet been done. Which raises the question: why launch a paid product with no evaluation?
A spokesperson described it as a pilot to repurpose “existing digital content” for non-degree learners, not yet promoted widely. At $5/month for access. That’s a pilot with a revenue model.
Why This Matters Beyond Tempe
This isn’t just an ASU problem. It’s a preview of every university’s AI dilemma:
- Who owns teaching? If universities can claim faculty IP and feed it to AI, the professor’s role shifts from knowledge creator to data source. That’s not augmentation. That’s replacement with extra steps.
- Consent is non-negotiable. The backlash here isn’t about AI itself. It’s about being blindsided. Faculty who might have engaged with the tool constructively were alienated by the process.
- Accuracy matters. “Client Brooks” isn’t a typo. It’s a symptom. AI course generation at scale will produce errors at scale. When those errors carry a professor’s face and voice, they carry their reputation too.
- The commercialisation question. ASU is charging students for AI-generated content built on unpaid faculty labour. Even if the IP ownership is technically clear, the optics are terrible — and the ethics worse.
The NZ Connection
New Zealand universities are watching. Our IP frameworks differ — NZ’s Copyright Act and employment law don’t give universities the same blanket ownership claims that Arizona’s Regents enjoy. But the pressure to “innovate” with AI in education is universal. An NZ vice-chancellor could attempt something similar, and our academic staff agreements might not be robust enough to prevent it.
The lesson from ASU is simple: the technology to do this already exists. The policy framework to govern it doesn’t. Every university needs clear rules about faculty consent, likeness rights, AI accuracy standards, and revenue sharing before — not after — someone launches the product.
The Bottom Line
ASU built a tool that proves AI can repurpose educational content at scale. That’s genuinely interesting. They also proved that doing it without consent, without accuracy, and without faculty input is a recipe for institutional crisis. That’s the part everyone should remember.
The professors aren’t anti-AI. They’re anti-being-harvested. There’s a difference, and universities ignore it at their peril.
SOURCES
- Inside Higher Ed — “Faculty Concerned About ASU’s New AI Course Builder”
- 404 Media — “ASU Atomic AI Modules”
- Reddit r/Professors — “University Professors Disturbed to Find Their Lectures Repackaged by AI”