We are living in a transformative era where generative artificial intelligence (AI) is rapidly reshaping how we work, create, and communicate. From drafting documents and generating images to automating conversations and solving complex problems, these tools offer what once felt like science fiction—on demand.
But beneath the marvel of this innovation lies a less glamorous, often overlooked truth: generative AI is built to remember, not to forget.
For years, I’ve urged individuals and organisations alike to pause before feeding these systems their most personal, sensitive, or proprietary information. Not out of fear of the future, but out of understanding of the present: once data enters a generative AI model, it’s nearly impossible to guarantee where it goes, how it’s used, or who can access it.
That caution once seemed theoretical. Now it has legal teeth. In a landmark development, a federal court in New York Times v. OpenAI has made clear what many of us in the data privacy world have known all along: AI systems remember more than they should, and often in ways that challenge ownership, accountability, and ethical stewardship.
The machine that doesn’t forget
At their core, generative AI systems function by learning from vast datasets—millions of articles, conversations, codebases, images, and, yes, sometimes even confidential or copyrighted material. These systems are trained to detect patterns, replicate linguistic nuance, and generate content that mimics what humans might say or write.
But unlike humans, AI doesn’t forget. A fleeting input—a confidential business strategy, an internal memo, a personal confession—may seem like a drop in the digital ocean. But once it’s entered, it’s no longer fleeting.
It becomes part of a system designed to optimise based on accumulated information. And while companies implement privacy policies, redaction tools, and training filters, absolute deletion or isolation of such inputs is nearly impossible after training. This isn’t just a software limitation—it’s a fundamental design principle of how machine learning works.
The illusion of control
Many users, especially in organisations, assume that using AI tools is as secure as using an internal knowledge base. The user interface feels simple. Clean. Trustworthy.
But here’s the truth: your data does not disappear when the chat ends. It can be retained in logs or reused for training (depending on the terms of service), and it can even surface inadvertently in future outputs, particularly if systems are misconfigured or improperly deployed.
For companies, this can mean accidental exposure of trade secrets. For individuals, it can mean a permanent record of personal details they never intended to share publicly. And for society, it raises troubling questions about digital consent, ownership, and long-term consequences.
This was precisely the concern raised in New York Times v. OpenAI. The court’s findings signal a new chapter in our reckoning with AI: we can no longer pretend that AI is neutral or forgetful.
It isn’t. And it doesn’t.
We must rethink trust in the age of AI
The heart of the issue is trust, not just in AI companies, but in the entire ecosystem that surrounds the development and deployment of generative models.
Trust requires transparency: How is the data used? Where does it go? What safeguards are in place?
Trust requires consent: Did the individual or organisation knowingly agree to have their data absorbed, memorised, and potentially regenerated?
Trust requires accountability: If harm is done—if data is leaked, plagiarised, or misused—who is held responsible?
Currently, our answers to these questions are murky at best. That’s not just a policy failure—it’s an ethical crisis.
The path forward: responsible use, not reactive regulation
We cannot turn back the clock on generative AI. Nor should we. The benefits are real: educational equity, creative empowerment, productivity gains, and access to knowledge at an unprecedented scale.
But we must build better guardrails—and fast.
Data minimisation by default: AI tools should collect only the information required for functionality and delete transient data wherever possible (see the redaction sketch after this list).
Privacy-aware design: Privacy must be embedded into the AI lifecycle—from design and data collection to model training and deployment.
Organisational governance: Companies must develop internal AI usage policies that prohibit the input of sensitive data into generative tools and mandate regular audits.
User empowerment: Individuals should be educated not just on what AI can do, but on what it remembers—and how to keep their data safe.
Clear consent and control: Users must have the right to know if their data was used to train a model—and the ability to opt out.
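To make the data-minimisation point above concrete, here is a minimal sketch in Python. The `minimise` helper and the regular expressions are illustrative assumptions, not a prescribed implementation; a production system would use a dedicated PII-detection tool and rules tuned to the organisation's own data.

```python
import re

# Illustrative patterns for common identifiers. A real deployment would use a
# dedicated PII-detection library and organisation-specific rules.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def minimise(text: str) -> str:
    """Replace likely identifiers with placeholder tags before the text
    ever leaves the organisation's boundary."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

if __name__ == "__main__":
    prompt = (
        "Summarise this complaint from jane.doe@example.com, "
        "who can be reached on +44 7700 900123."
    )
    safe_prompt = minimise(prompt)
    print(safe_prompt)
    # Only safe_prompt, never the raw text, is forwarded to whichever
    # generative AI tool the organisation has approved.
```

The point is not the specific patterns but the order of operations: redaction happens before anything is transmitted, so the model never has the chance to remember what it was never shown.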
Conclusion: A call to conscious use
The age of generative AI is here—and it’s not going away. But neither should our commitment to privacy, ethics, and digital dignity.
When we use generative tools, we are not just leveraging convenience—we are participating in a system that collects, remembers, and sometimes reuses what we give it.
Let us not confuse innovation with immunity.
Let us not confuse access with safety.
Let us instead choose to be vigilant, informed, and intentional.
Because in the end, what AI remembers is only as responsible as what we choose to teach it.
And we all play a role in shaping what it learns.
DISCLAIMER: The Views, Comments, Opinions, Contributions and Statements made by Readers and Contributors on this platform do not necessarily represent the views or policy of Multimedia Group Limited.