Ending The Arms Race: Addressing Shadow AI Use in Higher Education

As unauthorized AI use surges across campuses, universities face a critical choice: clamp down and risk alienating faculty—or build smarter frameworks that channel innovation safely and transparently.

GUEST COLUMN | by Lauren Spiller

Now that school’s back in session, the shadow AI challenges documented by the New York Times in May haven’t disappeared—they’ve likely intensified.

The Times highlighted faculty using ChatGPT to create course materials and grade assignments, but the scope of unauthorized employee AI use goes far beyond individual instructors. A recent survey reveals 78% of education employees know of colleagues using unauthorized AI, while only 31% say their schools have clear policies.

‘A recent survey reveals 78% of education employees know of colleagues using unauthorized AI, while only 31% say their schools have clear policies.’

As faculty and staff across campuses turn to AI for everything from administrative tasks to course development, universities face a fundamental question: how should they actually manage unauthorized AI use?

Current state of shadow AI in education

Despite widespread unauthorized AI use, it’s clear that education employees want institutional guidance. Seventy-six percent want clearer policies on AI use, 72% want access to official tools, and 56% want better education on risks.

Successful institutions will harness this enthusiasm by channeling existing AI adoption into approved frameworks. But the rapid-fire pace of new adoption presents challenges: 84% of education IT leaders say employees are adopting AI tools faster than they can be assessed, while 83% find it difficult to control unauthorized AI use.

This reality leaves institutions in a bind. As an IT professional from Texas State University puts it, “Do you go for maximum security and close all the doors, or do you acknowledge that we’re a research institution and need to give people access to the tools that exist?”

“Do you go for maximum security and close all the doors, or do you acknowledge that we’re a research institution and need to give people access to the tools that exist?”

The gap between employee demand for guidance and institutional ability to provide it reveals a clash between how AI tools work and how traditional IT governance operates. As a result, institutions may feel forced to choose between restricting tools their employees need for productivity and allowing unchecked adoption that creates security and compliance risks.

Why traditional IT governance falls short

The standard IT governance playbook—assess, approve, deploy—breaks down when applied to AI tools that employees can access instantly and use in countless unpredictable ways. As Dr. Shlomo Engelson Argamon, Associate Provost for AI at Touro University, explains, “When we talk about AI leadership, we have to be looking at not just the tools’ capabilities and risks, but how they change dynamics.”

Here are just a few ways AI changes the dynamics of traditional IT governance:

Speed of access. Web-based AI tools are available instantly, unlike traditional software that may require approval before installation. Employees can start using new AI tools immediately, making it impossible for IT teams to assess risks before adoption occurs.

Unpredictable use cases. Unlike traditional software with defined functions, AI tools invite “off-label” use, meaning employees may use them in ways developers never anticipated. This makes it difficult for IT to account for all potential use cases when vetting new tools.

Higher stakes and overconfidence. AI tools can cause system-wide damage in ways traditional software likely won’t (e.g., corrupting code repositories or research databases). The risk is amplified by users’ tendency to treat AI responses as authoritative rather than probabilistic, leading them to implement recommendations without adequate testing or verification.

Traditional governance models assume predictable adoption patterns and defined use cases—assumptions that break down completely with AI. Instead of forcing AI into frameworks designed for traditional software, institutions need new approaches built around AI’s core characteristics: instant accessibility, limitless applications, and the potential for both transformative benefits and serious harm.

Building better frameworks

Rather than treating AI governance as a traditional software deployment challenge, successful institutions are developing frameworks that acknowledge AI’s interactive and rapidly evolving nature. Two distinct but complementary approaches are emerging from early adopters: the safe spaces approach and the entrepreneurial model.

‘Rather than treating AI governance as a traditional software deployment challenge, successful institutions are developing frameworks that acknowledge AI’s interactive and rapidly evolving nature.’

The safe spaces approach

Some institutions are creating secure environments where experimentation can occur without compromising institutional security. Texas State exemplifies this model by providing enterprise accounts for AI platforms like ChatGPT, Perplexity, and Copilot, ensuring that faculty, staff, and student data remains within isolated environments.

This approach requires clear communication about appropriate use. “We have a handful of formally adopted applications that we’re allowed to use for university business,” says the IT professional we spoke to. “We also encourage staff to explore other tools and see what exists, but those other tools are not approved for use with business data.”

The safe spaces approach requires significant upfront investment, but it pays dividends in reduced shadow IT adoption. When people have access to capable, approved tools, they’re less likely to seek unauthorized alternatives. The key is ensuring the approved tools are genuinely useful, not just safe.

The entrepreneurial model

Touro University takes a different approach, treating faculty as entrepreneurs within a structured ecosystem. Rather than prescriptive top-down policies, their framework emphasizes distributed decision-making with central support.

“We want to give faculty the maximum autonomy possible to explore the space of how AI might be used,” Argamon explains. This includes running a Faculty Innovation Grant program where faculty develop new applications and share results across Touro.

The entrepreneurial model requires three administrative functions: establishing safety guardrails (e.g., data security, academic honesty policies), providing resources and training, and creating incentives for innovation and knowledge sharing. Its aim is not to police but to support faculty as they navigate this new frontier in education.

‘Its aim is not to police but to support faculty as they navigate this new frontier in education.’

Both approaches acknowledge that effective AI governance isn’t about controlling a technology—it’s about managing a new form of human-machine collaboration that changes behavior patterns. Fortunately, survey data shows that institutions are ready to move beyond restriction toward more flexible frameworks: 62% of IT leaders suggest integrating approved tools into workflows, while 58% recommend clearer policies.

Practical next steps

Moving from traditional restriction-based policies to more collaborative frameworks requires addressing specific implementation challenges:

Start with tool ecosystems and transparent policies. Rather than banning AI, develop disclosure requirements and boundaries around data types and use cases. As Argamon notes, “AI use nearly always needs to be transparent and disclosed.” As we mentioned earlier, Texas State’s enterprise accounts for popular AI tools show how providing options reduces shadow adoption when paired with clear guidelines.

Implement risk-based tool categorization. Instead of green- or red-lighting individual AI platforms, categorize tools by the data types they can handle safely, from public information to sensitive institutional data. This allows flexibility while maintaining security standards appropriate to specific use cases; a minimal sketch of this kind of tiering appears after this list.

Build cross-functional governance committees. The fact that fewer than a third of institutions have clear AI policies suggests that most need structured policy development. Include representatives from IT, faculty, administration, legal, and students to ensure policies address real-world use cases.

Establish rapid vetting processes. Traditional software approval cycles that take weeks or months don’t work for AI tools. Focus assessments on data security and institutional risk rather than exhaustive feature analysis, and provide clear pathways for faculty and staff to add new tools to the queue.

Create structured learning mechanisms. Peer-to-peer education is more effective than top-down mandates. Identify power users or “AI champions” who can demonstrate effective practices and serve as departmental resources for colleagues. As our Texas State representative explains, “It’s one thing to be told what a tool can do, and another to have somebody give a 10-minute demo.”
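
To make the risk-based categorization step more concrete, here is a minimal sketch in Python. The tool names, tiers, and clearance levels are illustrative assumptions rather than a recommended taxonomy; the point is that access decisions key off data sensitivity instead of blanket per-tool bans.

# A minimal sketch of risk-based AI tool categorization. Tool names and
# tiers below are hypothetical; map them to your institution's actual
# data classification policy.
from enum import IntEnum

class DataTier(IntEnum):
    """Data sensitivity tiers, ordered from least to most restricted."""
    PUBLIC = 1      # e.g., published course catalogs, press releases
    INTERNAL = 2    # e.g., draft syllabi, meeting notes
    SENSITIVE = 3   # e.g., student records, unpublished research data

# Hypothetical registry: the highest data tier each tool is cleared to handle.
TOOL_CLEARANCE: dict[str, DataTier] = {
    "enterprise-copilot": DataTier.SENSITIVE,   # isolated enterprise tenant
    "enterprise-chatgpt": DataTier.INTERNAL,    # approved, but not for student records
    "consumer-chatbot": DataTier.PUBLIC,        # exploration only, no business data
}

def is_permitted(tool: str, data_tier: DataTier) -> bool:
    """Return True if the tool is cleared for data at the given tier.
    Unknown tools default to 'not permitted' until they clear vetting."""
    clearance = TOOL_CLEARANCE.get(tool)
    return clearance is not None and data_tier <= clearance

print(is_permitted("enterprise-copilot", DataTier.SENSITIVE))  # True
print(is_permitted("consumer-chatbot", DataTier.INTERNAL))     # False
print(is_permitted("new-unvetted-tool", DataTier.PUBLIC))      # False

In practice, a registry like this would live alongside the institution’s data classification policy, and the rapid vetting process described above would simply add (or decline to add) an entry once a new tool is assessed.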

Key takeaways

Higher education’s shadow AI challenge isn’t about rogue employees—it’s about institutions trying to apply outdated governance models to new technology. The arms race mentality of restriction versus adoption creates a false binary that leaves institutions trapped between discouraging innovation and compromising security.

The solution? Collaborative frameworks that no longer stifle the AI-curious, but instead channel their enthusiasm toward institutional goals. The window for proactive governance is closing as adoption accelerates, so institutions that act now will navigate the transition more successfully than those waiting for perfect solutions.

Lauren Spiller is an enterprise analyst at ManageEngine, where she explores how emerging technologies like AI are transforming digital workplaces. Her research and writing focus on governance, security, and the human side of tech adoption. Prior to joining ManageEngine, she worked at Gartner, developing data-driven content to help business leaders and software buyers make smarter decisions in fast-moving markets. Before that, she taught college writing and served as the writing center assistant director at Texas State University. She has presented at the European Writing Centers Association, Canadian Writing Centres Association, and the International Writing Centers Association conferences. Lauren holds a B.A. from Ashland University and an M.A. from Texas State University. Connect with Lauren on LinkedIn.

Original article published at EdTech Digest.