Is ChatGPT’s Memory a Helpful Companion or a Privacy Minefield?
ChatGPT’s new memory feature changes how we interact with AI. For communicators, it raises urgent questions about personalisation, privacy, and trust. This post explores the implications for our work, our organisations, and the future of digital engagement.

OpenAI has announced a major update to how ChatGPT works: it can now remember you. Not just your current conversation, but also your preferences, past interactions, writing style – even your likes and dislikes. Initially available to users on the paid Plus and Pro tiers, it’s being positioned as a significant step towards what OpenAI's CEO, Sam Altman, calls “AI systems that get to know you over your life.”
If you're in the UK, as I am, this feature isn’t available yet. But wherever you are in the world, it’s worth understanding it now – especially if you’re a communicator, technologist, or anyone interested in the ethics and impact of AI – because it changes the dynamics of how we interact with ChatGPT.
TL;DR – A Quick Summary
- ChatGPT's memory feature introduces a new level of personalisation by retaining user information across sessions.
- While beneficial for individual users, it raises significant privacy and control concerns, particularly for organisations.
- Organisations should consider developing bespoke AI solutions within their own networks to mitigate risks associated with data privacy, compliance, and security.
- Political uncertainty in the US under the current Trump administration compounds the challenges around regulatory clarity and international data governance.
What is ChatGPT’s Memory?
The new memory feature allows the chatbot to retain information you’ve shared over time, making it more personal and contextually aware. This isn’t just remembering the last few prompts. It’s the start of long-term memory: your AI assistant remembering that you prefer British spelling, for instance, or that you work in communications, or that you’re drafting a presentation for your CEO.
OpenAI says the aim is to make the experience more helpful and aligned with how humans naturally build relationships through accumulated knowledge and context.
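OpenAI hasn’t published how memory works under the bonnet, but the underlying pattern is straightforward to sketch. Purely as an illustration – assuming nothing about OpenAI’s actual architecture – the toy Python below persists a few user facts between sessions and feeds them back into each new conversation as context; the file name and function names (`remember`, `build_system_prompt`) are invented for this example.

```python
import json
from pathlib import Path

# Toy illustration of session-spanning memory, not OpenAI's implementation.
MEMORY_FILE = Path("user_memory.json")

def load_memory() -> dict:
    """Load remembered facts from disk so they survive between sessions."""
    return json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else {}

def remember(key: str, value: str) -> None:
    """Persist a fact the user has shared."""
    memory = load_memory()
    memory[key] = value
    MEMORY_FILE.write_text(json.dumps(memory, indent=2))

def build_system_prompt() -> str:
    """Prepend remembered facts to every new conversation as context."""
    facts = "; ".join(f"{k}: {v}" for k, v in load_memory().items())
    return f"Known about this user – {facts}" if facts else ""

remember("spelling", "British English")
remember("role", "works in communications")
print(build_system_prompt())
# Known about this user – spelling: British English; role: works in communications
```

The real system is far more sophisticated, but the principle is the same: facts accumulate outside any single conversation and quietly shape every future one.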
The Promise: A More Human AI
On the surface, there’s a lot to like:
- Personalisation: It remembers your style, tone, and preferences without needing to be told every time.
- Efficiency: Saves time by eliminating the need to repeat yourself in every new session.
- Continuity: Ideal for longer-term projects or recurring tasks – your AI becomes more of a collaborative partner.
If you use AI as a thinking partner or creative assistant, as I do, the idea of having a shared memory between you and your chatbot is powerful. It shifts from being a clever tool to something more relational – almost agentic in nature.
The Trade-Off: Privacy and Control
But this also raises a host of questions, especially around privacy.
OpenAI says users can manage what the AI remembers, delete specific memories, or turn off memory entirely. But it’s entirely reasonable to wonder what’s actually being stored behind the scenes.
Even with privacy controls, you are still placing a great deal of trust in the platform. If you and your AI co-develop an idea over time, is that conversation stored permanently? Could it be used to train other models?
There’s also the emotional side. A chatbot that remembers everything you’ve said feels different from one that, like a human, forgets. Do we want that level of intimacy with an algorithm?
Shared Memory: A New Relationship with AI?
What OpenAI has introduced isn’t just a technical upgrade; it’s a shift in the relationship between user and machine. Shared memory suggests something ongoing, evolving, and co-developed. It’s not far-fetched to imagine future iterations of ChatGPT acting more like digital companions than tech tools.
But for all its potential, the idea of AI “getting to know you over your life” gives me pause – especially in a world where data is a currency, and surveillance is often a business model.

The feature will likely arrive in the UK and EU soon. When it does, the real question won’t just be whether it works well, but whether we’re comfortable with the trade-offs it asks us to make.
Implications for Organisations: Navigating the Risks
For individual users, deciding to engage with ChatGPT's memory feature involves a personal trade-off between convenience and privacy. However, for organisations, the stakes are considerably higher.
- Data Sovereignty and Compliance: Storing and processing conversational data with external AI services can pose significant compliance challenges, so organisations must ensure that any AI tools they adopt comply with relevant regulation, such as the European Union’s AI Act. The UK has no AI-specific national legislation yet, but existing law – notably the UK General Data Protection Regulation (UK GDPR) – already governs how AI systems handle personal data in practice.
- A Different Picture in the United States: There is no national regulatory framework; instead, a fragmented patchwork of federal and state-level laws amplifies regulatory complexity. Notably, California’s CCPA and CPRA offer some of the strongest consumer data protections, while states such as Virginia, Colorado, and Connecticut have introduced similar legislation. These laws, however, vary widely in scope and enforcement.
- The Return of Donald Trump to the Presidency: Since taking office in January, President Trump has signed a series of executive orders aimed at dismantling or restructuring key federal agencies, including those involved in technology governance and data oversight. Other changes he is pursuing require congressional approval. The result is a growing lack of clarity – within the US and around the world – about how the US will regulate AI, data privacy, and platform accountability in the years ahead. For global organisations, this uncertain political environment creates additional risk and underlines the value of self-governed AI deployments.
- Intellectual Property Risks: The potential for proprietary information to be inadvertently stored or processed by external AI models raises concerns about intellectual property protection and confidentiality.
- Security Concerns: Relying on third-party AI services introduces risks related to data breaches and unauthorised access, which can have severe repercussions for organisations.
Given these risks, organisations should consider developing and deploying bespoke AI solutions tailored to their specific needs and operated within their own secure networks.
This approach offers a clear path:
- Define and enforce internal data handling policies.
- Train bespoke models on domain-specific data for better results.
- Reduce exposure to external threats by hosting AI in-house.
Implementing such solutions requires investment in infrastructure and expertise, but offers a path to realising AI's benefits while maintaining control over data and compliance.
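To make the in-house option concrete, here’s a minimal sketch that assumes an OpenAI-compatible endpoint served inside the organisation’s own network – for example by vLLM or Ollama, two common self-hosting tools. The URL and model name below are placeholders, not recommendations.

```python
# Minimal sketch: querying a self-hosted model over an OpenAI-compatible API.
# Assumes a server (e.g. vLLM or Ollama) is already running inside your network;
# the base_url and model name are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="http://ai.internal.example:8000/v1",  # in-house endpoint, not a public cloud
    api_key="not-needed",  # local servers typically ignore or stub the key
)

response = client.chat.completions.create(
    model="your-internal-model",  # placeholder for a locally hosted model
    messages=[
        {"role": "system", "content": "You are an internal communications assistant."},
        {"role": "user", "content": "Draft a two-line privacy notice for staff."},
    ],
)
print(response.choices[0].message.content)
```

The design point is simple: the prompts, the responses, and any accumulated “memory” all stay behind the organisation’s own firewall, governed by its own policies rather than a vendor’s terms of service.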
Do you welcome the idea of an AI assistant that remembers everything about you – or does that feel like a step too far? What would make you trust a system like this, personally or professionally? Do share your perspective in the comments.