There’s a question every education system is suddenly forced to answer, and most are getting it wrong: how do you teach AI in schools?
Not whether — that debate is over. Students are already swimming in AI, whether schools acknowledge it or not. The real question is how, and a new analysis from Canadian researchers lays out three distinct models emerging globally. The differences between them aren’t academic — they’ll determine whether a generation of students develops genuine AI literacy or just learns which buttons to press.
The Three Models
Published in The Conversation and covered by Phys.org, the framework identifies three approaches to integrating AI into school curricula:
Model 1: Dedicated Subject. AI and computer science get their own course, with protected classroom time, trained teachers, and proper assessment. British Columbia requires a full-year applied design and technology course in Grade 8. Newfoundland and Labrador has dedicated computer science courses in Grades 9 and 10. Ontario’s computer studies curriculum creates dedicated course space for computing concepts.
This is the gold standard. Students get sustained, sequenced learning with teachers who actually understand the material. It also creates the conditions for implementing serious frameworks like AI4K12’s five “big ideas” (perception, representation and reasoning, learning, natural interaction, and societal impact) and UNESCO’s AI literacy guidance.
Model 2: Embedded in Existing Subjects. Digital learning is woven into other subjects — tech literacy as part of English, data analysis in maths, ethics in social studies. New Brunswick’s “Middle Block” approach treats technology as one learning area among many.
The upside is connection: students see AI applied in context. The downside is capacity. Teachers already juggling overloaded curricula are now expected to teach AI concepts they may barely understand themselves. The result is often shallow — a lesson on “how to use ChatGPT” rather than understanding why it works, what it gets wrong, and who it disadvantages.
Model 3: Transversal Framework. Digital competencies are meant to be integrated across all subjects. Manitoba’s ICT literacy framework and Alberta’s ICT program of studies both state that digital skills should be “infused within core courses” rather than taught separately. Quebec has a province-wide digital competency framework.
This sounds progressive — AI everywhere! In practice, “everywhere” often means “nowhere.” Without protected time, dedicated teachers, or clear assessment, transversal approaches tend to produce vague outcomes. Students learn that AI exists, but not how it works, how to critique it, or how to use it responsibly.
Why This Matters for New Zealand
Here’s the uncomfortable truth: New Zealand doesn’t cleanly fit any of these three models. We’re closer to Model 3 in aspiration but haven’t even achieved that in practice.
NZ’s digital technology curriculum, introduced in 2020, is technically compulsory — but implementation varies wildly between schools. Many teachers received minimal training. Assessment is inconsistent. And the curriculum was designed for a pre-ChatGPT world. There’s no dedicated AI literacy component, no framework for how AI should be taught, and no requirement for schools to go beyond the basics of computational thinking.
The Canadian research makes clear that without deliberate choices about which model to adopt, schools default to the weakest version: AI as a set of app tips delivered by teachers who are learning alongside their students. That’s not literacy. That’s babysitting.
What Good AI Education Actually Looks Like
The researchers argue that the dedicated subject model (Model 1) creates the strongest foundation for genuine AI literacy. Here’s why:
- Protected time means sustained learning, not a one-off lesson squeezed between other priorities.
- Trained teachers can translate specialised concepts for non-specialists — and they can spot when students are using AI to avoid thinking rather than enhance it.
- Proper assessment means measuring understanding, not just usage. Can the student explain why a model produced a particular output? Can they identify bias? Can they evaluate when AI is appropriate and when it isn’t?
- Cumulative progression means building on prior knowledge across years, not re-teaching the same “intro to ChatGPT” module at the start of every school year.
This isn’t just academic theory. UNESCO’s AI literacy framework envisions students as “co-creators and responsible citizens” — not passive consumers of AI tools. That requires depth. And depth requires structure.
The Training Problem
All three models share one critical dependency: teacher capability. As one course developer at a leading business school confessed to Yale researchers: “Our faculty are passionate, but there are two problems. One is that the AI models are developing so quickly that it’s hard for teachers to put together courses that aren’t quickly outdated. The second problem is that a growing number of students have experience with these models that far outpaces that of the faculty.”
If that’s true at one of the world’s leading business schools, imagine what it looks like at an under-resourced NZ secondary school where the IT teacher is also covering maths and PE.
The Canadian analysis doesn’t sugar-coat this: in embedded and transversal models, “teachers are asked to carry new conceptual content without necessarily having time, training or materials.” The dedicated subject model doesn’t eliminate this problem, but it concentrates it — you need fewer specialist teachers, and you can invest in training them properly.
What NZ Should Do
We’ve written before about NZ’s AI education gap and the need for better AI literacy frameworks. The three-model framework gives policymakers a clear decision point:
- Pick a model. The worst approach is to let every school figure it out independently. That guarantees inconsistency and inequality.
- Invest in Model 1. Dedicated AI/computing subjects with trained specialist teachers. Start with Years 9-10, then expand to NCEA levels.
- Supplement with Model 2. Use dedicated subjects as the foundation, then embed AI literacy across the curriculum — taught by teachers who understand it deeply, not expected to pick it up from a PDF.
- Drop Model 3 as a primary approach. “AI across everything” with no structure is a recipe for surface-level understanding.
The alternative? A generation of students who can prompt ChatGPT but can’t explain what a large language model is, can’t identify when they’re being manipulated by an algorithm, and can’t evaluate whether an AI output is trustworthy. That’s not education. That’s training users to be compliant consumers.
🔍 THE BOTTOM LINE
Three models for teaching AI in schools, and only the dedicated-subject approach produces genuine literacy. Everything else risks producing students who can use AI but don’t understand it. New Zealand currently has no coherent model at all — and the longer that gap persists, the wider the disadvantage grows for students who can’t access deep AI learning outside of school.