AI in schools is starting to look less like an “education revolution” and more like a stress test of the old system. When you read the recent reports together, a pretty consistent story shows up: AI isn’t automatically improving learning—because schools are mostly using it to do traditional schooling more efficiently. And when you speed up an outdated model, you don’t get transformation. You get the same problems, just at machine speed.

A big part of the concern is developmental. Brookings’ Global Task Force on AI in education argues that generative AI can offer real support, but the risks can outweigh the benefits when AI becomes a substitute for the “effortful thinking” that builds understanding, judgment, and agency—especially for children and teens who are still forming learning habits (Burns et al., 2026). In other words, the tool is powerful, but the default incentives in school often push students toward using that power to bypass the very work that makes learning meaningful.

The data on school climate make this harder to dismiss as an edge case. Pew's national survey of teens suggests that AI-assisted cheating (or at least the belief that it's happening) is common enough to change norms inside schools (Pew Research Center, 2026). Once "everyone assumes AI anyway," classrooms drift toward suspicion, policing, and defensive teaching, none of which is good for relationships or motivation.

There’s also the institutional readiness problem. CDT’s survey-based findings link expanding AI use in K–12 settings with increased risk exposure, including cybersecurity issues and other negative spillovers that schools are often not staffed or structured to handle well (Laird, 2025). This is less about “bad students” and more about schools plugging powerful tools into fragile governance: unclear guidance, uneven training, inconsistent expectations, and predictable confusion.

And then there’s the other “AI in schools” story that many people don’t mean when they say “AI”: surveillance. Investigative reporting by the Associated Press has documented serious privacy and security concerns around AI-enabled monitoring tools used on school devices, raising questions about whether the costs to trust and student privacy are being treated as acceptable collateral damage (Associated Press, 2025).

One more wrinkle: strict bans can drive AI use underground

This is where policy can accidentally make things worse. When schools respond with blanket "no AI" rules, the rules often don't stop use; they just make people less likely to admit it. Reporting on the confusion around academic integrity notes that students may avoid asking teachers for clarification because admitting any AI use could get them labeled cheaters (Gecker, 2025). That creates a bad dynamic: the system gets less honest, not more ethical.

It also creates moral whiplash. Some students (and teachers) may feel that using AI to do schoolwork is "cheating," or at least morally questionable, yet use it anyway because AI can do the task better, faster, or more fluently. The result is a kind of daily cognitive dissonance: use it, feel uneasy, hide it, repeat. That's a terrible way to build norms, and it makes the "AI conversation" mostly performative.

So what’s the real issue?

Taken together, these reports aren’t really saying “AI is ruining education.” They’re revealing a tougher truth: AI is exposing what school has been optimized to reward. If success is mainly about producing artifacts (essays, answers, worksheets) for grades, AI will produce the artifacts. If the system rewards coverage, speed, and surface polish, AI will amplify coverage, speed, and polish—without necessarily improving understanding.

Last year I published an article in Educational Leadership making exactly this point: if schools don't change, AI's potential won't be realized (Zhao, 2025). The fix isn't just better detection or stricter bans. Those approaches often create fear and secrecy. The deeper fix is redesign.

What needs to change (and yes, it’s brutally hard)

We need to rethink three things—starting with the biggest one:

  1. Rethink the curriculum: What do students truly need to learn now? Do we still need the traditionally defined curriculum in the same form, sequence, and density—especially when AI can instantly provide explanations, examples, drafts, and practice?
  2. Rethink learning and teaching: If AI can “do the work,” then learning needs to be designed around what can’t be outsourced so easily—problem finding, critique, judgment, creativity, ethical reasoning, collaboration, and making things that matter to real audiences.
  3. Rethink assessment: If we keep grading mostly final products, AI will keep winning. We need assessments that value process, iteration, and thinking over time—draft trails, decision logs, oral defenses, live demos, portfolios, and long projects.

And here’s the uncomfortable part: these changes are extremely difficult—often feeling almost impossible—because they collide with entrenched systems (standards, pacing, exams, college admissions, parent expectations, accountability structures, and the “grammar of schooling”). In the future, we’ll talk about how change can happen anyway—and what a realistic pathway might look like.

References

Associated Press. (2025, March 12). Schools use AI to monitor kids, hoping to prevent violence. Our investigation found security risks. https://apnews.com/article/25a3946727397951fd42324139aaf70f

Burns, M., Winthrop, R., Luther, N., Venetis, E., & Karim, R. (2026, January 14). A new direction for students in an AI world: Prosper, prepare, protect. Brookings Institution. https://www.brookings.edu/articles/a-new-direction-for-students-in-an-ai-world-prosper-prepare-protect/

Gecker, J. (2025, September 12). The rise of AI tools forces schools to reconsider what counts as cheating. Associated Press. https://apnews.com/article/ai-cheating-school-chatgpt-4f89a552e9093ce2180471b4d4736675

Laird, E. (2025, October 8). Hand in hand: Schools’ embrace of AI connected to increased risks to students. Center for Democracy & Technology. https://cdt.org/insights/hand-in-hand-schools-embrace-of-ai-connected-to-increased-risks-to-students/

Pew Research Center. (2026, February 24). How teens use and view AI. https://www.pewresearch.org/internet/2026/02/24/how-teens-use-and-view-ai/

Zhao, Y. (2025). If schools don’t change, the potential of AI won’t be realized. Educational Leadership, 82(5), 36–40. https://ascd.org/el/articles/if-schools-dont-change-the-potential-of-ai-wont-be-realized

More about Yong Zhao

Dr. Yong Zhao is a Foundation Distinguished Professor in the School of Education at the University of Kansas. He previously served as the Presidential Chair, Associate Dean, and Director of the Institute for Global and Online Education in the College of Education at the University of Oregon, where he was also a Professor in the Department of Educational Measurement, Policy, and Leadership. Before Oregon, he was University Distinguished Professor at the College of Education, Michigan State University, where he also served as the founding director of the Center for Teaching and Technology and as executive director of the Confucius Institute and of the US-China Center for Research on Educational Excellence. He has also worked as a professor of educational leadership in the Faculty of Education at the University of Melbourne and as a senior researcher at the Mitchell Institute at Victoria University in Australia, and he has been a visiting Global Professor at the University of Bath and a visiting scholar at Warwick University in the UK.