On April 11, a controversy that educators had been dreading for months erupted across Los Angeles schools. Teachers reported dramatic score spikes on exams, driven not by better teaching but by students using Google Lens to get instant AI-generated answers during tests.
The pattern was unmistakable. Students who had struggled all semester were suddenly acing assessments. The tool responsible was sitting on every LA Unified school Chromebook: Google Lens, with its AI-powered visual search that can photograph a test question and return a complete answer in seconds.
What Happened
Multiple teachers across the LA Unified School District reported the same phenomenon: exam scores spiking in ways that did not correlate with classroom performance. When they investigated, they found students pointing their Chromebook cameras at test papers and getting AI-generated answers in real time.
Google Lens, integrated into ChromeOS as a standard feature, doesn’t require any special installation or technical skill. Students don’t need to know how to prompt an AI or navigate a separate app. They just point, click, and read.
The speed and simplicity of the cheating method are what make it so hard to detect. Unlike traditional cheating — notes hidden in pockets, answers written on desks — Google Lens leaves no physical trace. The AI answer appears on screen and vanishes when the student closes it. Unless a teacher is watching a student's screen at the exact moment of the query, there's nothing to catch.
The Chromebook Problem
At the centre of this scandal is a system-wide decision that many school districts will recognise. LA Unified deployed Chromebooks running ChromeOS to hundreds of thousands of students. Google Lens comes bundled with ChromeOS as a built-in feature — it's not a separate app that administrators can simply uninstall.
LA Unified’s position is that the tool has legitimate educational uses. Students can use it to look up vocabulary, identify objects, translate text, and access visual learning resources. Removing it would also remove those capabilities.
The district maintains that the answer isn’t to ban the tool but to teach students to use it ethically — through digital literacy modules and academic honesty codes. Teachers on the ground say that approach isn’t working.
The Assessment Integrity Gap
The LA scandal exposes a fundamental mismatch that most education systems haven’t addressed: AI tools have outpaced assessment design.
Traditional exams — written tests, multiple-choice quizzes, short-answer questions — were designed for an era when the only way to get an answer was to know it or look it up in a textbook. When a student can photograph a question and receive a complete, accurate answer in under five seconds, those assessment formats become unreliable measures of learning.
The solutions being discussed include:
- Pen-and-paper assessments — Return to handwritten exams where phones and cameras are removed
- Oral examinations — Test understanding through conversation rather than written output
- In-class supervised work — Shift assessment weight toward work done under direct observation
- AI-proof assignment design — Create tasks that require synthesis, creativity, or physical demonstration rather than recall
Each of these has trade-offs. Pen-and-paper is slower and harder to scale. Oral exams are labour-intensive. AI-proof assignments require fundamentally rethinking curriculum design, not just tweaking test formats.
California’s Policy Response
The California Department of Education has issued recommendations for AI-proof assignments and assessment redesign, signalling that state-level policy is starting to engage with the problem. But recommendations aren’t mandates, and the gap between state guidance and classroom reality remains wide.
A recent study found that 74% of US schools now have AI policies in place, but 76% of teachers say they’ve received no training on how to implement them. LA Unified’s experience is a case study in that disconnect: the district has digital literacy modules and honesty codes on paper, but teachers report minimal actual preparation for dealing with AI-assisted cheating in practice.
The broader issue is consistency. Without statewide standards, individual districts are left to navigate AI tool access and assessment integrity on their own. A student in one district might face pen-and-paper exams while a student in a neighbouring district tests on Chromebooks with Google Lens enabled. The inequity compounds existing gaps.
The Bigger Picture
What’s happening in LA is not an isolated incident. It’s the first large-scale, publicly reported case of AI cheating causing measurable score distortion in a major school district. But every district that has deployed AI-capable devices is dealing with the same tension, whether they’ve acknowledged it publicly or not.
The debate isn’t really about Google Lens specifically. It’s about what happens when you give students AI tools that are more capable than the assessments designed to measure their learning. That gap will only widen as AI capabilities improve.
The educators advocating for pen-and-paper alternatives aren’t anti-technology. They’re pointing out that assessment integrity is a prerequisite for meaningful education — if you can’t trust that test scores reflect learning, the entire system of credentials and qualifications starts to erode.
LA Unified may be the first district to make headlines, but they won’t be the last. The question isn’t whether AI cheating will happen elsewhere — it’s whether anyone will be prepared when it does.
Sources
- National Today — Google Lens sparks debate over AI’s role in education