“The risks of AI in schools outweigh the benefits, report says” is the title of a recent NPR story. The title is dramatic, reassuring, and deeply misleading. It is also a familiar title in the history of technology and education. Whenever a significant new technology has emerged, people have wanted to know whether it is beneficial or harmful, a question that sounds reasonable but ignores what is really important and meaningful.
Much of the current panic centers on the claim that AI makes students lazy and teachers dependent, and that academic integrity is eroding as a result. The concern is real, but not in the way critics think. AI does not create laziness. It simply performs, efficiently and convincingly, the very tasks schools have long required students and teachers to perform.
If a machine can produce an acceptable essay, solve assigned problems, and generate lesson plans aligned with standards, the appropriate response is not outrage. It is reflection. Why were these tasks ever treated as meaningful indicators of learning in the first place?
If technology can do what students are required to do, there is no reason for them not to use it. This has always been true. Calculators did it to arithmetic. Spellcheck did it to spelling. AI is now doing it to most schoolwork. The difference is not moral. It is existential.
Teachers are not immune either. If AI can generate quizzes, feedback, and lesson plans faster than overworked educators, many will use it. This is not professional decline; it is professional survival in a system that long ago reduced teaching to compliance with curriculum and pacing guides.
So the real problem is not whether AI is good or bad for schools. That question misses the point. The real issue is that schools have not changed, and AI makes that failure impossible to ignore.
Bad uses of AI are easy. They require no change at all. Students use AI to complete meaningless assignments. Teachers use AI to manage meaningless requirements. Everyone becomes more efficient—and less educated.
Good uses of AI are much harder because they demand that schools rethink their purposes. AI can support problem finding, creativity, judgment, iteration, and value creation—but only if schools stop equating learning with standardized outputs. As long as assessment rewards what machines can easily generate, machines will dominate assessment.
The panic about cheating is really panic about assessment. AI did not break assessment; it exposed its fragility. When learning collapses the moment students gain access to tools, the system was measuring obedience, not understanding.
AI is not the enemy of education. It is the mirror. And what it reflects is uncomfortable: a system built for sorting rather than developing talent, for control rather than curiosity, for efficiency rather than meaning.
We can ban AI and preserve the illusion that schooling still works. Or we can finally ask what schools should be for in a world where information and answers are no longer scarce.
The future of AI in education does not depend on better algorithms. It depends on how willing we are to transform education. I have written about the transformation we need in a number of places:
From Meritocracy to Human Interdependence: Redefining the Purpose of Education, published in ECNU Review of Education. Available at: https://journals.sagepub.com/doi/10.1177/20965311251351988
Artificial Intelligence and Education: End the Grammar of Schooling, published in ECNU Review of Education. Available at: https://journals.sagepub.com/doi/full/10.1177/20965311241265124
If Schools Don’t Change, the Potential of AI Won’t Be Realized, published in Educational Leadership. Available at: https://ascd.org/el/articles/if-schools-dont-change-the-potential-of-ai-wont-be-realized
