YouTube has announced that it will now place warning labels on realistic AI-generated videos to protect users from potentially misleading content.
According to YouTube, content creators must now explicitly label any video made with realistic AI generation tools.
“This is particularly crucial for content discussing sensitive topics like elections, ongoing conflicts, public health crises, or public officials,” said YouTube Vice Presidents of Product Management Jennifer Flannery O’Connor and Emily Moxley on Tuesday.
The mandated labeling will apply only to content that meets YouTube’s standard of realism. This means videos that realistically depict events that never happened, or that show real individuals saying or doing things they did not actually do, will be required to carry the labels.
In line with its privacy policy, YouTube also stated that users will be able to request the removal of AI-generated content that replicates the likeness of another person.
Failure to comply with the policy may lead to the removal of videos and suspension of user accounts.
This move comes as tech platforms grapple with the rapidly evolving landscape of AI technology. Platforms such as TikTok and Meta have introduced labels to help users gauge whether the content they are viewing is AI-generated, while Facebook has restricted the use of AI in advertising.