As the use of AI in content generation continues to grow, video creators on YouTube will soon be required to label their videos with AI disclosures. The popular video-sharing and social media platform announced a policy change on Tuesday, mandating that videos containing realistic and potentially misleading AI content must be labeled.
YouTube will roll out the labels soon in a bid to prevent viewers from being misled by technically manipulated and synthetic content. However, labels will only be required for synthetic content that appears “realistic” enough to be mistaken for genuine footage.
YouTube’s Growing Efforts to Curb Misinformation Using AI-generated Content
The boom of easy-to-use generative AI tools over the past year has resulted in a significant spike in the spread of misinformation. The widespread and easy access to such tools allows end-users to create convincing images, videos, text, and audio content that are easy to mistake as real.
YouTube has already been working on ways to curb misinformation on the platform, and the introduction of AI labels is its latest step in that direction.
The video-sharing platform has long banned technically manipulated media that could “pose a serious risk of egregious harm” by misleading viewers.
According to the policy update introduced on Tuesday, content creators must label videos containing “manipulated or synthetic content that is realistic, including using AI tools” while uploading content on the platform.
This especially applies to videos that falsely depict someone doing or saying something they didn’t, or an incident that never actually happened.
Jennifer Flannery O’Connor and Emily Moxley, YouTube’s Vice Presidents of Product Management, wrote in a blog post that labeling is especially important for videos discussing sensitive topics like ongoing conflicts, elections, public officials, and public health crises.
Experts on digital information integrity have also been warning about the threat posed by generative AI in this regard. Especially in the face of upcoming elections in the US and other parts of the world next year, misinformation has become a growing concern.
The feature to add AI-disclosure labels while uploading videos will be rolled out early next year, a YouTube spokesperson said.
While the labels will usually be placed in the description panels of videos, they will appear in more prominent areas of the video player for “certain types of content about sensitive topics”.
Content creators who fail to comply with the new labeling requirements will be held accountable under the updated policy. Consistent violations can result in penalties, such as the removal of inadequately labeled content and, in more severe cases, suspension from the platform’s Partner Program.
Any AI-generated or otherwise synthetic content that violates the Google-owned platform’s community guidelines will also be treated the same way as any other video.
Users Can Now Request the Removal of AI-Generated Content
Alongside the announcement of AI labels, YouTube also said that users will now be able to request the removal of AI-generated and other manipulated content that simulates the likeness of an identifiable person.
Whether YouTube complies with a person’s request to remove such content will depend on several factors, including whether the video is intended as satire. At a time when non-consensual AI-generated images and deepfakes have become a major concern, the new privacy request process is certainly a helpful update.