Google is expanding the scope of its content transparency efforts by adding an AI-generated video detection feature to the Gemini app. The new capability allows users to check whether a video was created or altered using Google’s own artificial intelligence tools, reflecting the company’s ongoing attempt to address growing concerns around synthetic media and authenticity.
As AI-generated video becomes more common across social platforms and messaging apps, distinguishing between real footage and machine-generated content has become increasingly difficult. Google’s approach is relatively straightforward: users can upload a video to Gemini and ask whether it was generated using Google AI. Gemini then scans the file for SynthID, Google’s proprietary digital watermarking system that embeds signals into AI-generated content. These signals are not visible or audible to people but can be detected by Google’s software.
The detection process goes beyond a simple confirmation. Gemini analyzes the entire file and can identify whether AI was used in the visuals, the audio, or both. In some cases, it may even indicate where AI-generated elements appear, such as identifying specific time segments where synthetic audio is present while confirming that the visuals are unmarked. This added context is intended to help users better understand how a piece of content was produced, rather than offering a binary result.
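A per-modality result like the one described above could be modeled as a small data structure. This is purely an illustrative sketch; the class names, fields, and output format are assumptions for explanation, not Gemini's actual response schema.

```python
from dataclasses import dataclass, field

@dataclass
class Segment:
    start_s: float  # segment start, in seconds
    end_s: float    # segment end, in seconds

@dataclass
class DetectionResult:
    visuals_watermarked: bool  # SynthID signal found in the video track
    audio_watermarked: bool    # SynthID signal found in the audio track
    # Time ranges where synthetic audio was detected (may be empty)
    audio_segments: list = field(default_factory=list)

def summarize(result: DetectionResult) -> str:
    """Render a human-readable summary instead of a bare yes/no answer."""
    if not (result.visuals_watermarked or result.audio_watermarked):
        return "No Google AI watermark detected."
    parts = []
    if result.visuals_watermarked:
        parts.append("visuals")
    if result.audio_watermarked:
        spans = ", ".join(f"{s.start_s:.0f}-{s.end_s:.0f}s"
                          for s in result.audio_segments)
        parts.append(f"audio ({spans})" if spans else "audio")
    return "Google AI watermark detected in: " + " and ".join(parts)

# The article's example: synthetic audio in a specific segment, unmarked visuals.
r = DetectionResult(visuals_watermarked=False, audio_watermarked=True,
                    audio_segments=[Segment(12, 30)])
print(summarize(r))  # Google AI watermark detected in: audio (12-30s)
```

The point of the sketch is the shape of the answer: a per-track verdict plus optional time ranges, rather than a single boolean for the whole file.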
There are limitations to the tool that shape how it can be used in practice. Uploaded videos must be no larger than 100 MB and no longer than 90 seconds, which restricts verification to short clips rather than long-form content. That constraint aligns with the types of videos most likely to circulate on social media, where questions about authenticity are often raised.
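The stated limits can be expressed as a simple pre-upload check. This is a hypothetical client-side sketch based only on the numbers above; the function name and constants are not part of any Google API.

```python
# Hypothetical pre-upload validation mirroring the stated limits
# (100 MB maximum size, 90-second maximum duration).

MAX_SIZE_BYTES = 100 * 1024 * 1024  # 100 MB upload cap
MAX_DURATION_S = 90                 # 90-second clip cap

def is_uploadable(size_bytes: int, duration_s: float) -> bool:
    """Return True if a clip fits within the stated Gemini upload limits."""
    return size_bytes <= MAX_SIZE_BYTES and duration_s <= MAX_DURATION_S

# A 45 MB, 60-second clip fits; a 3-minute clip does not.
print(is_uploadable(45 * 1024 * 1024, 60))   # True
print(is_uploadable(45 * 1024 * 1024, 180))  # False
```

Checking both constraints up front avoids a failed upload for long-form content, which the tool cannot verify in any case.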
The video detector builds on an earlier Gemini feature that focused on images, extending the same underlying technology to motion and sound. Google has promoted SynthID since its introduction in 2023, and the company says it has already applied the watermark to more than 20 billion pieces of AI-generated content across its platforms. From Google's perspective, that scale makes it easier to trace content created with its tools back to its origin.
However, the most significant caveat remains unchanged. The detection feature only works on content created or edited using Google’s own AI systems. Videos generated by other companies’ models or open-source tools will not register, limiting the detector’s usefulness as a general-purpose verification solution. In effect, Gemini can only confirm what originated inside Google’s ecosystem.
While Google positions Gemini as a convenient alternative to third-party detection services, the feature should not be mistaken for a universal AI video detector. It is better understood as a controlled transparency measure that offers insight into Google-generated content without addressing the broader, more fragmented landscape of AI media created elsewhere.

