A tech-savvy English professor turned part-time AI consultant helps us sort through what we’re even talking about.
GUEST COLUMN | by Jason Gulya
I talk about Artificial Intelligence and the future of education almost every day. I talk to professors: some are incorporating AI into their classrooms, while others resist it. I talk to students, who range from fully embracing AI to resisting it to feeling indifferent. I talk to administrators, who describe it as a powerful efficiency boost and, at the same time, as a security threat.
Through all of these conversations — which have been relatively consistent for the last two years — I’ve learned one thing. There is a lot of confusion about what “AI” actually is.
Part of the problem is that virtually everyone has latched onto ChatGPT as the AI program par excellence. I’ve sat through hours of meetings about AI and education, only to realize later that we treated “AI” as synonymous with ChatGPT.
Add to that companies’ penchant for slapping the “AI” label on virtually any of their products, and you’re bound to have confusion. This goes for the classroom as much as for the world at large.
For these reasons, I often begin my talks with students – and my trainings with faculty – with a clarification. When we think about AI, we need to think beyond ChatGPT.
ChatGPT is Only One Kind of Chatbot
ChatGPT is only one kind of chatbot. And chatbots are only one kind of Large Language Model (LLM). And Large Language Models are only one kind of Language Model. And Language Models are only one kind of Generative AI. And Generative AI is only one kind of AI. And AI is only one kind of adaptive technology. And so on.
If nothing else, the rise of Generative AI has made it abundantly clear that we often use language that seems precise but actually hides a great deal of information. I was reminded of this recently when I came across a Substack post by Phil Christman, in which he reviews John Warner’s recent book More Than Words: How to Think About Writing in the Age of AI (Basic Books, 2025). This is the quote that got me thinking:
“LLMs aren’t intelligent, either artificially or in any other way. They cannot “write” the papers that students coax out of them, because, as Warner argues, “Large language models do not ‘write.’ They generate syntax. They do not think, feel, or experience anything. They are fundamentally incapable of judging truth, accuracy, or veracity.” Nor did they “read” the texts on which they were trained (and “trained” itself is arguably another misnomer). They solve a probability problem — what words are more likely to appear next to other words, given set constraints — at impressive speeds as reservoirs of drinking water are repurposed to cool enormous server racks.”
This quote forces us to dig deep into the words we use.
What is “writing”?
What is “feeling”?
What is “reading”?
What is “training”?
What is “artificial”?
What is “thinking”?
What is “generating”?
What is “intelligence”?
Reading this passage sends us down a series of rabbit holes, making us increasingly skeptical of the terms and phrases we use to discuss this technology. It forces us to tackle definitions.
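For readers who want to see what Warner’s “probability problem” looks like in its barest form, here is a toy sketch in Python. It is nothing like a real Large Language Model; it simply counts which words follow which in a tiny made-up corpus and turns those counts into probabilities. The corpus and the word-pair approach are my own simplifications for illustration, but they capture the spirit of “what words are more likely to appear next to other words.”

```python
from collections import Counter, defaultdict

# A tiny made-up corpus. Real models are trained on vastly more text.
corpus = "the model predicts the next word and the model generates more text".split()

# Count which word follows which (a simple word-pair, or "bigram," count).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def next_word_probabilities(word):
    """Turn the raw counts of what follows `word` into estimated probabilities."""
    counts = following[word]
    total = sum(counts.values())
    return {candidate: count / total for candidate, count in counts.items()}

print(next_word_probabilities("the"))
# {'model': 0.666..., 'next': 0.333...}: "generating" text here just means
# repeatedly sampling from distributions like this one. No reading, no
# thinking, no judging of truth. Only counting and probability.
```

A real LLM replaces this crude counting with billions of learned parameters, but it is still, at bottom, producing a probability distribution over what comes next, which is exactly the point Warner and Christman are making about the words we choose.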
Process is Pivotal
Behind these questions looms a much larger — more meta — set of questions. Are we being too rigid with our definitions? Are we right to say things like “AI can’t write” and “AI can’t read”? Or are our definitions of reading and writing changing?
The process of “tackling definitions” means two things. First, it means laying out how concepts like AI, chatbots, LLMs, and Generative AI intersect with each other. Second, it means questioning (and requestioning) the language we use to describe not only these systems, but our own lives and cognition.
Guiding students through this process is pivotal to encouraging AI Literacy. We need to know exactly what we’re talking about. We also need to recognize that anything that seems simple — a word like “intelligence” or “writing” — is more complex than we might think.
And above all else, we need to recognize that the language we use matters.
—
Jason Gulya is Professor of English & Applied Media at Berkeley College and an AI consultant and strategist for colleges. Connect with Jason on LinkedIn.
Original Article Published at Edtech Digest