
The Dangers of Protecting Students from the Dangers of AI


Rather than shielding students from AI’s limitations, we should be guiding them through them.

GUEST COLUMN | by Owen Matson, Ph.D.

There’s a strange contradiction at the heart of many conversations about AI in education.

We worry that students will become too passive. That AI will do the thinking for them. That it will encourage plagiarism and weaken critical agency. But in the same breath, we worry that AI isn’t reliable—that it hallucinates, gets facts wrong, or lacks reasoning skills.

So which is it? Is AI too good at thinking—or not good enough?


Where AI’s Pedagogical Potential Begins

The truth is, those so-called “failures” are where AI’s pedagogical potential begins. They’re where human thinking becomes most necessary.

Rather than shielding students from AI’s limitations, we should be guiding them through them. Students should be learning to challenge AI output, verify claims, and treat AI as a fallible contributor to their thinking—not as a crutch, and not as an authority. When used well, AI doesn’t undermine critical thinking. It demands it.

But that’s not how most edtech tools are designed. In many schools, teachers use AI to streamline lesson planning, grading, and administrative tasks. At the same time, students are restricted from those same tools, out of fear they’ll cheat or become dependent. This asymmetry reveals a deeper problem: AI is embraced when it boosts efficiency but distrusted when it grants students agency. The result? More efficient teachers, more passive students. It’s the same old system, now running faster.

Even when students do engage with AI—via AI tutors, for example—it’s often under rigid guardrails that simulate dialogue but reproduce top-down instruction. The AI plays the role of expert, the student that of compliant respondent. These tools may seem to perform student-centered learning, but they reinforce dependency and limit exploration.

Not Just a Policy Issue

This isn’t just a policy issue—it’s a design issue. Most AI-powered edtech tools still operate on the old delivery model. They frame learning as content acquisition and use AI to accelerate delivery, automate feedback, and “personalize” instruction. But what they call “personalization” (a conspicuously vague marketing buzzword that once had genuine pedagogical meaning) is often little more than customization at scale. It treats students like users selecting options on a menu, rather than learners shaping their own pathways.

Real personalization is different. It enables students to metacognitively recognize and engage the full complexity of their own variability as learners—differences in memory, motivation, language, culture, identity. Personalization requires environments that adapt not just to what students know, but to how they make meaning. And most importantly, it requires that students have meaningful agency in the learning process.

If standard edtech personalization is like ordering a burger your way, true personalization is more like choosing the ingredients and learning how to cook.

AI’s Real Promise

AI’s real promise isn’t just automating education—it’s redefining it. AI-generated content is no longer remarkable—it’s the baseline. Anyone with a decent prompt can generate a passable response. That’s not deep learning. That’s not even thinking.

Education’s role now is to help students rise above the baseline—to refine, question, and build on what AI produces in ways that reflect their own insight and creativity. The goal isn’t to offload thinking to machines, but to develop the skills to think with them.

As Chris Dede has noted, the value of student work will increasingly lie in what neither the student nor the AI could have produced alone. What Dede means here (without saying it outright) is that human learners and AI work as an interdependent system: collaborative cognition, not unlike what happens in effective peer learning environments. In this view, AI isn’t a shortcut. It’s a provocation.

Anyone who’s actually worked with these tools knows: the moment you try to let AI do your thinking for you, the result is the same flat, generic output we’re already drowning in. Cookie-cutter blog posts. Thoughtless “thought leadership.” Writing without thought.

Ironically, in trying to protect students from intellectual passivity, we’ve created systems that reinforce it. By treating AI as a threat to manage rather than a domain to understand, we deny students the very capacities they need most: asking better questions, navigating uncertainty, and thinking critically in hybrid human-machine environments.

AI isn’t going away. It’s not a trend—it’s a structural shift in how knowledge is produced and shared. The question isn’t whether students should use AI. It’s whether they’ll learn to use it well.

Owen Matson, Ph.D., designs AI-integrated edtech platforms at the intersection of teaching, learning science, and systems thinking, and is the Head of Educational Content Strategy at ViewSonic. He earned his doctorate in English Language and Literature/Letters at Princeton University. Connect with Owen on LinkedIn.


Original article published at EdTech Digest.