OpenAI CEO Sam Altman is raising serious concerns about the lack of legal privacy protections for conversations with AI tools like ChatGPT, especially for users who turn to the platform for emotional or personal support. In a recent podcast appearance, Altman made clear that chats with ChatGPT enjoy none of the legal protections typically granted to conversations with doctors, therapists, or lawyers, and that gap could have very real consequences.
Speaking on Theo Von’s This Past Weekend podcast, Altman didn’t mince words: “If you go talk to ChatGPT about your most sensitive stuff and then there’s a lawsuit or whatever, we could be required to produce that. And I think that’s very screwed up.” He emphasized that, under current legal standards, there is no AI equivalent to medical or attorney-client privilege.
The issue isn’t hypothetical. OpenAI is already required to retain records of user chats, including conversations users have deleted, under a court order stemming from the copyright lawsuit brought by The New York Times. That means anything said in a ChatGPT session could potentially be retrieved and submitted as evidence if legally compelled. In Altman’s view, that exposes a glaring gap in how the legal system treats interactions with AI.
As more people turn to AI tools for everything from relationship advice to managing anxiety, the lack of a privacy framework is becoming increasingly problematic. “Right now, if you talk to a therapist or a lawyer or a doctor… there’s legal privilege for it,” Altman said. “We haven’t figured that out yet for when you talk to ChatGPT.” He argued that similar protections should be in place for conversations with AI, especially given how users are treating these systems in practice.
Until such protections are in place, Altman believes it’s reasonable for users to hold back. “I think we should have the same concept of privacy for your conversations with AI,” he added. “It’s fair for users to really want the privacy clarity before you use [ChatGPT] a lot—like the legal clarity.”
While OpenAI continues to expand its AI services, the tension between innovation and user privacy remains unresolved. In the meantime, users looking to discuss sensitive matters with AI would do well to weigh the legal risks, especially since those chats are stored and potentially retrievable in court.
Altman’s comments come as the broader tech industry faces growing scrutiny over how AI systems collect and retain data. With lawsuits piling up and regulatory frameworks lagging behind the pace of adoption, clarity around AI privacy is quickly shifting from a policy debate to a public necessity.