YouTube Takes Action Against AI-Generated Fake Videos
YouTube, owned by Google, announced Tuesday that it will let users request the removal of deceptive AI-generated videos from the platform. It will also require labels on realistic-looking videos that could mislead viewers. The changes were announced in a blog post by YouTube's Vice Presidents of Product Management, Jennifer Flannery O'Connor and Emily Moxley.
While YouTube has long prohibited technically manipulated content that could deceive viewers and "pose a serious risk of egregious harm," the Tuesday policy update now requires creators to add appropriate labels when uploading content that includes "altered or synthetic content that's realistic, including the use of AI tools."
The new labels will be required only for "realistic" AI-generated or otherwise synthetic content. That includes videos that "realistically depict an event that never happened or content that shows someone saying or doing something they didn't do," the vice presidents wrote in the post.
"All content uploaded to YouTube is subject to our Community Guidelines, regardless of how it is created, but we also recognize that AI will create new risks and require new approaches," the text states.
The policy aims to keep users from being misled by synthetic content as consumer-facing generative AI tools proliferate, making it quick and easy to create convincing text, images, video, and audio that can be hard to distinguish from the real thing.
For certain sensitive content types such as elections, ongoing conflicts, and health crises, YouTube will display a more prominent label on the video player.
The company said it will work with creators before the policy takes effect to make sure they understand the new requirements, and that it is building tools to detect violations.
YouTube also says it will automatically label content that creators make with its own generative AI tools.
Google, which both builds generative AI tools and owns platforms that widely distribute AI-generated content, faces mounting pressure to deploy the technology responsibly. It has already begun addressing concerns that generative AI could fuel a new wave of misinformation, announcing in September that it would require "conspicuous" disclosures for AI-generated political ads.