OpenAI CEO and longtime Reddit investor Sam Altman says social media no longer feels real — and bots are to blame.
Posting on X, Altman admitted he’s started to assume that many posts he reads, even positive ones about OpenAI, are generated by bots rather than humans. The realization came while browsing the r/Claudecode subreddit, where posts praising OpenAI’s Codex tool have become so frequent that one user joked, “Is it possible to switch to Codex without posting a topic on Reddit?”
Altman explained:
“I assume it’s all fake/bots, even though in this case I know Codex growth is really strong and the trend here is real.”
He went on to suggest that several factors are combining to make online discussions feel artificial: humans adopting LLM-style writing quirks, social platforms optimizing content for engagement, monetization pressures on creators, astroturfing by competitors, and, yes, actual bots.
The irony of LLM-speak
Altman’s comments point to an uncomfortable irony: OpenAI’s own models — designed to mimic human writing — may now be influencing how humans themselves communicate online. The result, he says, is a feedback loop where it’s harder than ever to distinguish genuine posts from automated ones.
And this isn’t just about bots. Altman noted that hyper-engaged online communities often move in unison, creating waves of hype or backlash that can feel manufactured, even when it’s just passionate humans acting in sync.
Astroturfing fears and OpenAI’s own critics
Altman also hinted that competitors may have engaged in astroturfing — fake grassroots campaigns designed to sway opinion — against OpenAI in the past. While there’s no direct evidence, Reddit forums did turn sharply critical after the release of GPT-5, with complaints ranging from the model’s “personality” to its credit usage.
Altman himself tried to calm frustrations in a Reddit AMA, but the subreddit's once-fervent support for OpenAI has never fully rebounded.
A bigger bot problem
Altman’s unease echoes wider concerns: cybersecurity firm Imperva estimated that over half of all internet traffic in 2024 was non-human, driven in large part by bots and LLMs. X’s own internal estimates suggest hundreds of millions of bots are active on the platform.
The result, as Altman put it:
“AI Twitter/AI Reddit feels very fake in a way it really didn’t a year or two ago.”
What’s next?
Cynics see another angle here: The Verge reported in April that OpenAI was exploring its own social media platform. Altman’s “social media is fake” posts could be laying the groundwork for a new product to rival X or Facebook.
But even if OpenAI did build a bot-free network, would it really solve the problem? Researchers at the University of Amsterdam found that when they built a test social network made up entirely of bots, the bots still formed cliques, echo chambers, and misinformation loops — just like humans.
In other words, whether it’s people or machines, online communities may always have a “fake” feel.