Meta is stepping up its fight against deceptive content, announcing plans to label AI-generated images shared on its platforms even when they were created with rival tools from companies such as OpenAI and Google. The move comes amid growing concern about the misuse of generative AI, particularly with elections approaching in several countries this year.
In a blog post, Meta’s president of global affairs, Nick Clegg, acknowledged that realistic AI-generated images are becoming ever easier to create and could be used to spread misinformation. To combat this, Meta will extend its existing “Imagined with AI” label, currently applied to images made with its own tools, to AI-generated images from other sources.
The company has developed methods to detect these images, including identifying “invisible markers” such as watermarks and metadata embedded within the files. Meta is working with industry partners to establish standardized markers that can be detected across different AI tools, including those from Google, OpenAI, Microsoft, and Adobe. The labeling initiative will initially cover Facebook, Instagram, and Threads, with support for all languages.
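To make the metadata-based approach concrete, here is a minimal sketch, not Meta’s actual detector, of how a platform might check a file for a standardized provenance marker. It assumes the generator embedded the IPTC DigitalSourceType value for “trainedAlgorithmicMedia,” one standardized way to flag synthetic media; the file names and the crude byte-level scan are illustrative assumptions only.

```python
# Minimal sketch: naive check for an AI-provenance marker in image metadata.
# Assumption: the generating tool embedded the IPTC DigitalSourceType URI for
# "trainedAlgorithmicMedia" as plain text inside the file's metadata (e.g. an
# XMP packet). Real detectors parse the metadata structures properly and also
# look for invisible watermarks in the pixels themselves.

from pathlib import Path

# IPTC NewsCodes value signalling media created by a generative model
AI_MARKER = b"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"

def looks_ai_generated(path: str) -> bool:
    """Return True if the file's bytes contain the IPTC
    'trainedAlgorithmicMedia' source-type URI (a crude substring scan)."""
    data = Path(path).read_bytes()
    return AI_MARKER in data

if __name__ == "__main__":
    for name in ["photo.jpg", "generated.png"]:  # hypothetical example files
        if Path(name).exists():
            verdict = "AI marker found" if looks_ai_generated(name) else "no marker found"
            print(f"{name}: {verdict}")
```

A production system would parse the XMP or C2PA manifest rather than scan raw bytes, and would combine the metadata check with invisible-watermark detection, since metadata is easy to strip when a file is re-encoded or screenshotted.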
While detecting AI-generated videos and audio remains a challenge due to the lack of standardized markers, Meta will introduce a disclosure feature for users sharing such content. Failure to disclose might result in penalties, the company said.