In the not-so-distant past, the image of a journalist conjured someone hunched over a notepad, recorder in hand, chasing leads in the field, making dozens of phone calls and poring over documents to unearth the truth. Today, that same journalist might instead be feeding prompts into ChatGPT, using AI transcription tools like Otter.ai or Whisper, deploying Sora to generate synthetic video explainers or relying on algorithmic news aggregators to determine what’s “trending”. In this rapidly transforming media landscape, the defining question for journalism educators worldwide isn’t whether artificial intelligence (AI) should be taught, but how it should be taught. And more critically: are we equipping the next generation of journalists to use AI without compromising the very soul of journalism: authenticity, accountability and ethics?
Across lecture halls in Accra, Johannesburg, London and New York, journalism departments are in the throes of an identity crisis. AI is no longer a futuristic concept; it’s already embedded in newsroom workflows, from Reuters and The Washington Post to JoyNews and BBC Africa. Yet, many journalism schools remain behind the curve, caught between the allure of technological innovation and the imperative to protect core journalistic values. This piece explores whether journalism educators are truly prepared for this tectonic shift, what “authentic storytelling” means in the age of generative content, and how students can be trained to responsibly engage with and fact-check AI-generated narratives.
Are Journalism Educators Ready for the AI Revolution?
The response, globally, is uneven. Some journalism educators have enthusiastically embraced AI, seeing it as a powerful research and productivity enhancer. Others remain cautious, even resistant, concerned that AI could dilute originality, promote laziness and normalize unchecked automation over human verification. The middle ground is still forming, but what’s clear is this: journalism education must evolve, and fast.
In Ghana, for example, most journalism programmes have yet to integrate AI ethics, prompt engineering, or automation tools into their syllabi in any structured form. Lecturers often teach newswriting and editing as though we are still in the age of quill and typewriter. While universities like the University of Media, Arts and Communication (UniMAC) have begun conversations around digital transformation, structured AI training is still embryonic. Some private institutions appear to be exploring modules on digital storytelling tools. However, available evidence suggests that artificial intelligence is typically introduced as a supplementary topic, rather than being positioned as a core component of the curriculum.
Contrast this with South Africa, where some institutions are pushing boundaries. At Rhodes University’s School of Journalism and Media Studies, AI tools are now part of data journalism labs. Lecturers encourage students to experiment with natural language processing tools to mine large datasets while emphasizing verification. Similarly, Wits University in Johannesburg has initiated workshops where students use AI to draft early versions of stories, then critique the limitations and biases in those outputs, a model that balances innovation with critical thinking.
In the United Kingdom, a number of journalism schools are leading the way. The University of Central Lancashire (UCLan) has introduced a module titled AI and the Future of Journalism, combining theory, ethics and hands-on experimentation. Students learn to use tools like ChatGPT, Midjourney and Synthesia, but they are also trained in how to cross-check AI outputs using traditional verification techniques. The Reuters Institute for the Study of Journalism at Oxford has gone further, publishing detailed research on how AI is affecting newsroom operations and ethics, offering seminars and fellowships focused on AI in news.
In the United States, institutions like Columbia Journalism School, Arizona State University’s Cronkite School and New York University are embedding AI ethics and toolkits into journalism curricula. At NYU, students now learn how to prompt generative models for headline ideation, but also receive instruction on fact-checking, source transparency and bias detection. The Craig Newmark Graduate School of Journalism has developed AI literacy bootcamps, where students and faculty alike assess the risks of overreliance on algorithms.
Yet even in these pioneering institutions, a gap remains: journalism educators are playing catch-up with a technology evolving at breakneck speed. Often, faculty are not digital natives, and their understanding of AI tools lags behind that of their students. Without robust faculty training, universities risk either ignoring AI or introducing it superficially, without the ethical scaffolding required to ensure responsible use.
What Does “Authentic Storytelling” Mean in the Age of Algorithmic Content?
Authentic storytelling, long the gold standard of journalism, is now under existential threat. Generative AI tools can produce entire articles, generate photorealistic images or simulate video footage. In this new world, where a bot can imitate the tone of a seasoned journalist, what makes storytelling authentic?
The answer lies not in the tool, but in the intent and process. Authentic journalism still demands original reporting, ethical sourcing and a commitment to truth. A feature written with help from ChatGPT is not necessarily inauthentic if the core reporting is real, the facts are checked and the story serves a public interest. But if AI is used to fabricate quotes, manufacture sources or generate deepfake videos, then the story, however compelling, is ethically bankrupt.
In Ghana, authentic storytelling often emerges from on-the-ground reporting: stories about child miners in Tarkwa, water shortages in northern communities or urban planning chaos in Kumasi. These are not narratives that generative AI can fully grasp or replicate without human observation, cultural context and lived experience. Journalism educators must emphasize this: that storytelling is more than content generation; it’s about bearing witness, asking hard questions and contextualizing facts within human realities.
How Can Students Be Trained to Fact-Check AI-Generated Content?
Training students to verify AI-generated content is perhaps the most urgent need in journalism education. AI outputs, while persuasive, are prone to “hallucinations”: confidently presenting falsehoods as facts. ChatGPT, for instance, can invent sources, misattribute quotes and distort timelines unless carefully monitored.
Here’s how educators can address this:
Embed Verification Techniques Across All Courses: Fact-checking shouldn’t be confined to a module; it should be woven into every assignment. Students should learn how to triangulate AI-generated content using primary sources, public records and open-source intelligence (OSINT) tools like Google Reverse Image Search, TinEye or InVID.
Teach Algorithmic Bias: AI models are trained on vast corpora of human content, meaning they inherit our prejudices. Journalism educators must teach students to critically assess outputs for gender bias, racial stereotyping, or ideological skew. This is especially relevant in multicultural contexts like Ghana and South Africa, where Western-trained AI may misrepresent African realities.
Encourage Prompt Transparency: Just as journalists cite their human sources, students should be trained to disclose the prompts they use when generating content with AI. This promotes accountability and allows instructors to assess the ethical integrity of the process.
Simulate Real-World Challenges: Assignments can mimic real newsroom dilemmas: e.g., comparing an AI-generated article with a human-written one on the same topic and asking students to critique accuracy, tone and depth. At Arizona State University, students are even given AI-generated misinformation to debunk in class, a practice that strengthens digital literacy.
Conclusion
AI is not the enemy of journalism; ignorance is. Tools like ChatGPT and Sora are not replacements for reporters but extensions of their craft, when used responsibly. But they can just as easily become instruments of disinformation, plagiarism or ethical shortcuts if students are not properly trained.
For journalism educators, the mandate is clear: embrace the tools, but uphold the principles. Integrate AI literacy into the curriculum not as a novelty, but as a necessity. Create spaces for experimentation, but also for ethical debate. Prioritize storytelling, but never at the cost of truth.
Whether in Accra, Cape Town, London or New York, the mission of journalism education remains unchanged: to prepare truth-seekers, watchdogs and storytellers. What has changed is the toolkit, and with it, the stakes.
The age of AI in journalism is not a threat. It is a test. And how we teach it will shape the future of journalism itself.
The writer is a journalist, international affairs columnist and journalism educator with a PhD in Journalism. He is a member of the Ghana Journalists Association (GJA), Investigative Reporters and Editors (IRE), the Centre for Collaborative Investigative Journalism (CCIJ) and the African Journalism Education Network (AJEN). Contact: [email protected]