
NYC Scraps AI High School, Admits Guidelines Aren't Ready — What Does That Tell Us?

NYC just killed its AI high school and published guidelines everyone hates. If the biggest school district in the US can't figure out AI policy, what hope do the rest of us have?

AI Education · AI Policy · NYC Schools · School Guidelines

New York City just sent two very clear signals about AI in education, and neither of them is encouraging.

First, on April 27, Chancellor Kamar Samuels cancelled plans for “Next Gen,” the city’s first AI-focused high school in Manhattan — along with controversial proposals to close two middle schools. The AI school idea is dead, at least for now.

Second, the AI guidance the DOE released in March? Even members of the city’s own AI task force say it’s toothless.

If the largest school district in the United States — with more resources than most countries — can’t figure out how to integrate AI into education, that’s a problem for everyone.


The AI High School That Wasn’t

The “Next Gen” AI high school was supposed to be a flagship: a dedicated institution preparing students for an AI-driven future. Instead, it got killed by community opposition before it even opened.

Samuels said he wants to prioritize completing the DOE’s AI guidance playbook before revisiting specialized AI school ideas. That would be more reassuring if the guidance weren’t already drawing fire from every direction.


The Guidelines Nobody Likes

In March, the NYC DOE released its preliminary AI guidelines for schools. The reaction was swift and brutal:

  • Parent advocate Leonie Haimson highlighted a “growing chorus” calling for a full moratorium on AI in schools, citing academic integrity, environmental, mental health, and privacy risks.
  • A CUNY professor called the policy “worse than no policy at all,” pointing to privacy violations, plagiarism risks, environmental harm, stifled critical thinking, and mental health dangers.
  • Task force member Naveed Hasan noted the guidance can’t even ensure privacy, and urged the DOE to build in-house AI capabilities rather than depend on external vendors.
  • The 74 reported that the guidelines “raise more questions than they answer.”

The public comment period runs until May 8. Given the reception so far, the DOE might want to start from scratch.


What the Guidelines Actually Say (and Don’t)

The preliminary guidance provides a basic roadmap for AI use in classrooms. That’s the generous description. The less generous one: it’s a framework for saying “we have a policy” without the policy actually doing anything.

Missing from the guidance:

  • Privacy protections — no specifics on student data collection, retention, or third-party sharing
  • Oversight mechanisms — who monitors AI tools in schools, and how?
  • Risk assessment — no framework for evaluating whether an AI tool is appropriate for students
  • Guardrails for cheating — vague guidance on academic integrity in the age of ChatGPT
  • Teacher training — almost nothing on preparing educators to use (or refuse) AI tools responsibly

Some schools weren’t even waiting for the guidance, developing their own policies in the vacuum. That’s either inspiring autonomy or alarming fragmentation, depending on your perspective.


Why This Matters Beyond New York

NYC is the largest school district in the US, with over a million students. When it sneezes, education policy catches a cold. The district’s failures here are a warning for everyone else:

Specialized AI schools sound great until you try to build one. What curriculum? What teacher qualifications? What safeguards? “AI school” is a branding exercise without answers to these questions.

Vague guidelines are worse than none. A policy that creates the illusion of oversight without actual oversight gives schools false confidence. It’s the educational equivalent of a “no weapons” sign on a door with no lock.

Community pushback is real and growing. Parents and teachers aren’t automatically embracing AI in classrooms. They have concerns about privacy, critical thinking, and whether these tools actually help students learn — or just help vendors sell products.


The NZ Connection

New Zealand is still developing its own AI education policies, and the NYC experience should be required reading in Wellington. The lesson: don’t publish guidelines that create more confusion than clarity. Don’t announce AI schools before you have the curriculum figured out. And absolutely do not let EdTech vendors set your policy agenda.

We’ve already seen what happens when AI tools enter classrooms without proper guardrails — bias creeps in, and students pay the price.


🔍 THE BOTTOM LINE

NYC just admitted what many education systems are too polite to say out loud: nobody really knows how to do AI in schools yet. The district that was supposed to lead the way cancelled its AI school and published guidelines that its own task force says don’t work. That’s not failure — that’s honesty. And honestly, it’s refreshing. The worst approach right now isn’t moving slowly. It’s moving quickly in the wrong direction.


SOURCES

Sources: Chalkbeat New York, The 74, X/Twitter