
South Korea Says AI Can't Grade Students Alone — The First Law to Draw That Line

South Korea just became the first country to classify AI student evaluation as high-impact: mandatory human oversight, transparency, and impact assessments for any AI that grades students.

AI Regulation · South Korea · AI in Education · Student Privacy · High-Risk AI

While the US debates whether AI belongs in classrooms and China mandates AI curricula nationwide, South Korea has drawn a different line entirely: AI can teach, but it cannot grade students without a human in the loop.

The AI Basic Act — effective January 22, 2026, with full enforcement from January 2027 — is the first national law in the world to explicitly classify student evaluation AI as “high-impact,” placing it in the same regulatory tier as medical devices, nuclear materials, and biometric identification for law enforcement.


What the Law Actually Says

The AI Basic Act (Act No. 20676) establishes ten categories of high-impact AI where the potential consequences for human life, safety, or fundamental rights justify mandatory oversight. Among them:

  • Energy supply
  • Healthcare and medical devices
  • Nuclear materials
  • Biometric identification for law enforcement
  • Employment and credit decisions
  • Student evaluations in primary or secondary education

That last item is what makes this law unprecedented. While the EU AI Act lists education as a high-risk domain, it frames it broadly around “education and vocational training.” South Korea’s law zeroes in on the specific act of evaluating students — grading, assessing, ranking — from preschool through secondary school.

Any AI system used to evaluate students in this age range must now meet the following requirements:

  • Fundamental rights impact assessment before deployment
  • Human oversight and supervision of all evaluation decisions
  • Explainability — operators must be able to explain how the AI reached its evaluation and what data it used
  • Transparency — students and parents must be informed that AI is being used
  • Risk management plan — documented, operational, and retained
  • User protection measures — specifically designed for the student context

Fines for non-compliance reach up to ₩30 million (approximately NZ$36,000) per violation.
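
What could this look like in practice? Here is a minimal sketch in Python, assuming nothing beyond the requirements above; every name in it (AISuggestion, FinalGrade, record_grade) is hypothetical, since the Act mandates outcomes, not implementations:

```python
# Hypothetical sketch of a human-in-the-loop grading flow under the Act's
# requirements. All names and structures are illustrative, not from the law.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class AISuggestion:
    """What the AI produces: a suggested grade plus explainability artifacts."""
    student_id: str
    suggested_grade: str
    rationale: str        # how the AI reached its evaluation (explainability)
    data_used: list[str]  # what data it relied on (explainability)


@dataclass
class FinalGrade:
    """What actually gets recorded: always carries a human decision."""
    student_id: str
    grade: str
    reviewed_by: str               # the accountable human (oversight)
    overrode_ai: bool
    ai_disclosed_to_family: bool   # students and parents informed (transparency)
    decided_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def record_grade(suggestion: AISuggestion, reviewer: str,
                 human_grade: str | None = None,
                 disclosed: bool = True) -> FinalGrade:
    """The AI's output is never recorded directly: a named human signs off
    and may override it. The returned record doubles as the audit trail
    a risk management plan would retain."""
    grade = human_grade if human_grade is not None else suggestion.suggested_grade
    return FinalGrade(
        student_id=suggestion.student_id,
        grade=grade,
        reviewed_by=reviewer,
        overrode_ai=(human_grade is not None
                     and human_grade != suggestion.suggested_grade),
        ai_disclosed_to_family=disclosed,
    )
```

The shape is the point, not the code: the AI's output is advisory, the recorded grade always names an accountable human, and the record itself is the kind of artifact the risk management requirement expects operators to retain.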


Why Student Evaluation, Specifically

The inclusion of student evaluation as a standalone high-impact category was not accidental. South Korean education is among the most competitive in the world. The CSAT (Suneung) determines university placement and, by extension, career trajectories. Any AI system influencing that pathway carries extraordinary weight.

The law’s drafters recognised what education researchers have been documenting for years: automated evaluation systems can encode bias, penalise non-standard expression, and create feedback loops that disadvantage already-marginalised students. A grading algorithm that systematically underrates creative or divergent answers doesn’t just affect one assignment — it can shape a student’s entire educational trajectory.

By classifying student evaluation as high-impact, the law acknowledges that the stakes of AI in education are not theoretical. They are immediate, personal, and potentially irreversible.


How It Differs From the EU AI Act

The comparison is inevitable. Both laws take a risk-based approach. Both classify certain AI uses as requiring heightened oversight. But the differences are significant:

  • Education scope: the EU AI Act is broad (“education and vocational training”); South Korea’s law is specific (“student evaluations in primary or secondary education”)
  • Pre-market control: the EU requires mandatory conformity assessment and CE marking; South Korea has no mandatory pre-market control, only a voluntary confirmation process
  • Enforcement penalties: the EU fines up to €35 million or 7% of global turnover; South Korea fines up to ₩30 million (~NZ$36,000) per violation
  • Regulatory philosophy: the EU acts ex ante, preventing harm before deployment; South Korea regulates after deployment, with ex-post supervision
  • GPAI threshold: the EU sets 10²⁵ FLOPs; South Korea sets its threshold 10× higher, effectively exempting domestic operators

South Korea’s approach is deliberately lighter on enforcement but more targeted in scope. The maximum fine of ₩30 million is a signal, not a deterrent — roughly the cost of a mid-range car, not the existential threat that EU penalties represent for tech companies.

The deliberate regulatory gap for large-scale GPAI models (setting the compute threshold 10× higher than the EU’s) tells you who the law is really aimed at: foreign big tech operators, not Korean startups. Domestic companies get support and subsidies; foreign operators face compliance obligations and domestic agent requirements.
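
The gap is easy to see in numbers. A toy sketch, taking the EU's 10²⁵ FLOPs presumption threshold and Korea's figure at ten times that, per the comparison above; the function name and the sample compute figure are illustrative only:

```python
# Toy comparison of the GPAI compute thresholds discussed above.
# Threshold values follow the article's comparison; names and the
# sample model are illustrative.
EU_THRESHOLD_FLOPS = 1e25                      # EU AI Act presumption threshold
KR_THRESHOLD_FLOPS = 10 * EU_THRESHOLD_FLOPS   # Korea: 10x the EU figure


def crosses_gpai_line(training_flops: float) -> dict[str, bool]:
    """Would a model of this training compute cross each regime's threshold?"""
    return {
        "EU AI Act": training_flops >= EU_THRESHOLD_FLOPS,
        "KR AI Basic Act": training_flops >= KR_THRESHOLD_FLOPS,
    }


# A model trained with ~3 x 10^25 FLOPs trips the EU threshold
# but sits comfortably under Korea's:
print(crosses_gpai_line(3e25))
# {'EU AI Act': True, 'KR AI Basic Act': False}
```

Models in that order-of-magnitude band face EU GPAI obligations but not Korea's, which is exactly the breathing room the law's drafters describe.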


The Grace Period Is Already Producing Results

Full enforcement doesn’t begin until January 2027, but the grace period is already functioning as a policy testing ground. The government has launched an “AI Basic Act Institutional Improvement Task Force” with 40+ experts from industry, academia, and civil society, actively refining definitions and standards.

Key areas under calibration:

  • What exactly qualifies as “high-impact AI” in practice
  • How deep explainability requirements should go
  • Practical compliance pathways for startups
  • Technical standards for auditability
  • Alignment between legal definitions and real deployment conditions

Jung Woo-joo, CEO of inDJ and member of the Presidential Committee on Artificial Intelligence Strategy, described the approach as deliberately flexible: “We deliberately chose a ‘post-regulation’ approach for general-purpose AI while focusing on ‘pre-emptive safety’ for high-impact areas, allowing startups the breathing room to experiment while maintaining a social safety net.”


What This Means for Education Globally

South Korea’s classification of student evaluation AI as high-impact establishes a precedent that could reshape how other jurisdictions approach AI in education:

For the EU: The AI Act already lists education as high-risk, but South Korea’s specificity on student evaluation could push the EU toward more granular enforcement guidance. The European Commission’s upcoming codes of practice may reference the Korean model.

For the US: With 31+ states enacting their own AI education policies and no federal coordination, the result is a patchwork; South Korea offers a model of what national-level legislation looks like. It’s also a warning: the Korean law works because it has a single enforcement authority. The US has none.

For New Zealand: NZ currently has no AI-specific education regulation. The South Korean model — particularly its human oversight mandate for student evaluation — could inform NZ’s approach if and when policy develops. The question isn’t whether AI will be used to evaluate NZ students; it’s whether anyone will be watching when it does.

For education technology companies: Any edtech platform operating in South Korea that uses AI for grading, assessment, or student ranking must now conduct impact assessments, provide explainability, and maintain human oversight. This includes international platforms accessible to Korean students, since the law explicitly covers “acts conducted abroad that affect the domestic market or users in the Republic of Korea.”
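
In practice, the first question for a platform team is whether a given feature triggers the obligations at all. A hypothetical pre-deployment gate, paraphrasing the triggers described above; none of the names or conditions are official guidance:

```python
# Hypothetical pre-deployment gate for an edtech feature. The trigger
# conditions paraphrase the article; this is not official guidance.
from dataclasses import dataclass


@dataclass
class Deployment:
    evaluates_students: bool     # grading, assessment, or ranking
    primary_or_secondary: bool   # the age range the Act singles out
    reachable_from_korea: bool   # extraterritorial reach covers this

REQUIRED_ARTIFACTS = (
    "fundamental rights impact assessment",
    "human oversight procedure",
    "explainability documentation",
    "AI-use disclosure to students and parents",
    "risk management plan",
)


def korean_obligations(d: Deployment) -> tuple[str, ...]:
    """Artifacts this deployment needs before launch, or none at all."""
    triggered = (d.evaluates_students
                 and d.primary_or_secondary
                 and d.reachable_from_korea)
    return REQUIRED_ARTIFACTS if triggered else ()


# An international platform accessible to Korean students still triggers:
print(korean_obligations(Deployment(True, True, True)))
```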


The Deeper Question

South Korea’s law is important not because of its penalties — ₩30 million won’t slow down a major tech company — but because of the principle it establishes: that evaluating students is a high-impact activity deserving of the same regulatory scrutiny as medical diagnosis and nuclear safety.

This is a meaningful philosophical commitment. It says that the act of grading a child — determining their academic standing, their opportunities, their sense of capability — is consequential enough that no algorithm should do it without a human bearing responsibility for the outcome.

The law doesn’t ban AI in education. It doesn’t prevent teachers from using AI tools. It doesn’t even prevent AI from assisting evaluation. What it does is insist that when AI is used to evaluate students, someone with authority and accountability must be in the room, aware of what the system is doing, and able to override it.

In a global environment where AI adoption in education is accelerating faster than regulation can follow, that insistence matters. The question now is whether other countries will follow South Korea’s lead — or whether they’ll allow AI to grade their students without anyone watching.


SOURCES

  • South Korea AI Basic Act (Act No. 20676), effective January 22, 2026
  • FairNow — “South Korea AI Basic Act: Compliance & Insights”
  • KoreaTechDesk — “Korea’s AI Law Enters Its Next Phase” (March 2026)
  • Law.asia — “Korea’s new AI Basic Act: Characteristics and significance” (March 2026)
  • The Guardian — AI Basic Act coverage