Generative artificial intelligence (AI) is trained on enormous bodies of text, video and images to identify patterns. It then creates new texts, videos and images on the basis of this pattern identification. Through machine learning, its ability to do so improves as its underlying models are trained and refined on ever larger amounts of data.
As AI becomes embedded in academic life, a troubling reality has emerged: students are extremely vulnerable to its use. They don’t know enough about what AI is to be alert to its shortcomings. And they don’t know enough about their subject content to judge its outputs in any case. Most importantly, they don’t know what they don’t know.
As two academics involved in higher education teaching, we argue that there are four key dangers facing students in today’s world of AI. They are:
blind trust in its abilities
using it to side-step actual learning
not knowing how it works
perpetuating the gap between expertise and uncritical yet confident noise.
Given our experiences as academics who have developed curricula for students and who research generative AI, we think there are three things universities can do. They should teach critical AI literacy, emphasise why developing knowledge is important, and teach students why being an expert matters if they’re going to engage meaningfully with AI.
The four dangers
Blind trust in AI’s false confidence. A recent Microsoft report showed that those who know the least about a topic are the most likely to accept AI outputs as correct. Generative AI programs like ChatGPT and Claude produce text with remarkable confidence. Students lacking domain expertise can’t identify when these systems are completely wrong.
Headlines already demonstrate the consequences of this in the workplace: lawyers submitting fabricated case citations generated by AI, and hospitals using AI transcription tools that invent statements never actually made.
Generative AI can get it wrong because it doesn’t understand anything in the human sense of the word. But it can identify and replicate patterns with remarkable sophistication. These patterns include not only words and ideas but also tone and style.
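To see what pattern replication without understanding looks like, consider a deliberately simplified sketch. It is our illustration, not the authors’ example and not how ChatGPT or Claude work internally (those systems use vast neural networks rather than word-pair counts), but it shows the same principle: fluent-seeming text assembled purely from statistical patterns in training data.

```python
import random
from collections import defaultdict

# A toy "bigram" model: record which word follows which in a tiny corpus.
# The training text and the generate() helper are made up for illustration.
corpus = (
    "students learn by engaging with complex ideas and "
    "students learn by questioning what they read and "
    "experts evaluate claims against what they already know"
).split()

transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)  # the learned "patterns"

def generate(start, length=8):
    """Produce text by repeatedly sampling a plausible next word.
    The model attaches no meaning to any word; it only replays patterns."""
    words = [start]
    for _ in range(length):
        candidates = transitions.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("students"))
# Possible output: "students learn by questioning what they read and experts"
```

Scaled up to billions of parameters and trained on vast swathes of the internet, the same basic move of predicting what plausibly comes next yields remarkably fluent prose. That fluency is exactly why confident-sounding output is no guarantee of accuracy.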
Missing the power of education. A core purpose of higher education is to give students a new way of understanding the world and their place in it. When students use AI in ways that sidestep intellectual challenges, they miss this essential transformation.
When students simply outsource their thinking to AI, they’re getting credentials without competence. They might graduate with degrees but without knowledge and expertise.
The false confidence trap. Even students who develop critical awareness about AI’s limitations face what Punya Mishra, a professor of learning engineering at Arizona State University, calls “the false confidence trap”. They might recognise that AI can produce errors but lack sufficient subject knowledge to correct those errors.
As Mishra puts it:
It’s like having a generic BS detector but no way to separate truth from fiction.
This creates a dangerous half-measure where students recognise AI isn’t perfect but can’t effectively evaluate its outputs.
Perpetuating the knowledge gap. As AI becomes ubiquitous in workplaces, the gap between those with genuine expertise and those relying solely on AI will widen. Students who haven’t developed their own knowledge foundations will be increasingly marginalised in a world that paradoxically values human expertise more, not less, as AI advances.
Answers
There are three steps universities can take.
Integrate critical AI literacy. Students need to understand how generative AI works: how it is trained on massive databases of human-created texts and images to identify patterns, and how it uses those patterns to craft new outputs.
It’s not enough to have an “Intro to AI” course. Every discipline needs to show students how AI intersects with their field and, most significantly, empower them to reflect on the ethical implications of its use. This includes engaging with questions about the use of copyrighted materials to train generative AI, the biases inherent in AI-generated texts and images, and the enormous environmental cost of AI use.
Emphasise knowledge development. Higher education institutions must actively counter the view that university is merely about the provision of credentials. We need to help students see the value of acquiring domain expertise. That value is not always self-evident to students who see higher education only as a means to a job, a view that encourages them to engage with knowledge in an instrumentalist way and thus to use AI in ways that prevent engagement with complex ideas. It is a personal relationship with knowledge that will prepare them for a future where AI is everywhere. Advocating for the power of knowledge needs to be a central part of every academic’s job description.
Model dual expertise. Academics should model what Mishra calls “the dual expertise challenge” — combining domain knowledge with critical AI literacy. This means demonstrating to students how experts engage with AI: analysing its outputs against established knowledge, identifying biases or gaps, and using AI as a tool to enhance human expertise rather than replace it.
As AI becomes increasingly sophisticated, the value of human expertise only grows. Universities that prepare students to critically engage with AI while developing deep domain knowledge will graduate the experts that society needs in this rapidly evolving technological landscape.
We have our work cut out for us, but expertise remains highly valued.
The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.
By Sioux McKenna, Professor of Higher Education, Rhodes University, South Africa, and
Nompilo Tshuma, Senior Lecturer and Researcher in Educational Technology and Higher Education Studies, Stellenbosch University