Public officials and journalists will soon be able to keep track of AI-generated deepfakes of themselves on YouTube through the platform’s likeness detection feature.
The tool is already available to millions of content creators on YouTube, but beginning Tuesday, it will expand to a pilot group of journalists, government officials, and political candidates. (At a briefing with reporters, YouTube declined to share who was in the pilot group, including whether Donald Trump is part of it.) Likeness detection is similar to Content ID, which scans YouTube for copyrighted material — except likeness detection looks for people’s faces. When there is a match, an individual in the program can request that YouTube remove the content, though the company says not every request will be approved. Removals are based on YouTube’s privacy policy, which includes carve-outs for content like parody and satire.
“YouTube has a long history of protecting free expression, and that includes parody, satire, and political critique. If a video of a world leader is clear parody, it’s likely to stay up,” said Leslie Miller, YouTube’s vice president of government affairs and public policy. “We evaluate every removal request under our longstanding privacy guidelines to ensure we’re not stifling the very civic discourse we’re trying to protect.”
To join the program, individuals will be required to submit a video of themselves and a government ID. YouTube says this data will only be used for the likeness detection feature, and that individuals can withdraw from the program and request YouTube remove the data.
Amjad Hanif, vice president of creator products, said that so far the volume of content that creators have asked YouTube to remove under the policy is “actually very small.”
“They may see lots of matches, and I think for a lot of them, it’s just been the awareness of what’s been created, but the volume of actual removal requests is really, really low because most of it turns out to be fairly benign or additive to their overall business,” Hanif said. Politicians, of course, may not see it the same way — but Hanif did hint at the possibility of allowing monetization on AI-generated deepfakes in the future.
“You may find that folks in the industry want to allow that, and that’s something that we’re investing in and we have a long history and experience in,” he said.
YouTube and other platforms have been wrestling with how to handle AI-generated content for the last few years, starting with moderating a flood of AI soundalike music that mimicked real artists. AI deepfakes are a problem for average individuals, not just celebrities, journalists, and politicians — but having everyone in the world in the likeness detection feature is “probably not” in the roadmap, Hanif said, making the feature, at least for now, limited to famous people and those in the news. (Individuals still have the option to request removal of AI deepfakes through a complaint process.) In recent months, YouTube has cracked down on some low-quality AI slop channels with millions of subscribers under its spam policies; the platform has also dinged channels making fake AI movie trailers.
But on the creators’ side, it’s AI all the way down: YouTube has announced a slew of AI-powered tools that creators can use to ideate, plan, create, and optimize their YouTube videos. AI content reaches into every corner of the platform — a recent New York Times story detailed how children are being fed low-quality AI videos claiming to be educational.