Meta has rolled out new restrictions on its AI chatbots aimed at preventing harmful or inappropriate conversations with children. Updated internal guidelines, obtained by Business Insider, show how contractors are now being instructed to train the chatbots, with stricter boundaries on what is and isn’t acceptable.
The new rules explicitly prohibit any chatbot behavior that could “enable, encourage, or endorse” child sexual abuse. They also ban romantic roleplay with minors, roleplay in which the AI is asked to act as a minor, and advice on physical intimacy for underage users. While chatbots can discuss serious issues such as abuse in an informational or supportive way, they are barred from conversations that could normalize or encourage harmful behavior.
The changes come after an August Reuters report revealed that earlier guidelines left open the possibility of AI chatbots engaging in “romantic or sensual” conversations with children. At the time, Meta denied this was consistent with its policies and said the language had been removed.
Meta’s AI bots have been under intense scrutiny in recent months. Regulators, including the FTC, have raised concerns about risks to child safety, especially as these companion-style chatbots become more advanced and more widely used.