YouTube has begun rolling out a new AI likeness detection tool designed to help creators identify and manage videos that use artificial intelligence to replicate or manipulate their facial likeness without consent. The feature, introduced on Tuesday, reflects growing industry efforts to combat deepfakes and ensure that creators maintain control over how their image and identity are used on the platform.
Available through YouTube Studio, the tool is part of a broader initiative to enhance creator safety and transparency around AI-generated media. Once creators complete an identity verification process — which includes submitting a photo ID and a short selfie video — YouTube will automatically scan for videos that feature AI-generated versions of their faces. Detected videos appear in a new content detection dashboard, where creators can review details such as the video’s title, the uploader’s channel, view count, and relevant dialogue excerpts. From there, they can submit a likeness removal request directly through the platform.
The feature supports two types of takedown actions: one specifically for AI likeness misuse and another for standard copyright violations where protected material has been used without authorization. YouTube says the system is intended to prevent misleading or malicious deepfakes while maintaining transparency about legitimate AI-generated creative content that uses consented likenesses.
Initially, access is being granted to select members of the YouTube Partner Program, particularly those at higher risk of impersonation or whose public visibility makes them more likely targets for deepfake content. The rollout will continue gradually, with all monetized creators expected to have access by January 2026.
A YouTube spokesperson told TheWrap that the feature is part of the company’s ongoing work to address synthetic media and its potential for misinformation. By allowing creators to see when their image has been used and providing an official channel for removal requests, YouTube aims to offer practical safeguards while balancing the interests of AI innovation and user protection.
The move comes amid mounting pressure on major tech platforms to respond to the rapid spread of deepfakes — especially as generative AI tools make it easier to create realistic videos that can impersonate public figures. YouTube’s detection tool adds to its existing policy requiring clear disclosure when content has been generated or altered using AI.
While still in its early stages, the system marks a key step in YouTube’s attempt to build trust and accountability into AI-driven media environments. If effective, it could set a new precedent for digital identity protection and ethical AI use across video platforms.