YouTube Requires Creators to Label AI-Generated Content

Published by Shipra Sanganeria on Mar 26, 2024
Fact-checked by Kate Richards

YouTube began rolling out its new labeling policy for AI-generated content on March 18. A new tool in Creator Studio requires uploaders to disclose “altered or synthetic” content that might be mistaken for a real person, place, or event. The labels will appear first in YouTube’s mobile app, followed by desktop and TV in the coming weeks.

In the blog post, the social media platform cited examples of content it considers “realistic” and therefore subject to AI labeling: altering footage of real events and places, generating realistic-looking scenes, and digitally recreating a real person’s face or voice (a deepfake) to show them saying or doing something they didn’t actually do.

On the other hand, YouTube said that creators using AI in production or post-production processes, such as generating scripts, visual enhancements, and special effects, are exempt from the disclosure policy. Animation and clearly unrealistic (fantasy) content are also exempt.

For most AI-generated videos, the label will appear in the expanded description underneath the video player, but for videos related to real-world issues, like elections, health, finance, and news, YouTube will display a label or watermark on the video itself. Additionally, if a creator doesn’t include an AI label on content that could “confuse or mislead” people, YouTube holds the right to add this itself.

The new AI-labeling requirements build on an announcement from last November outlining how YouTube intends to update its Community Guidelines to protect its users and community from false, manipulated content.

Part of this announcement was adding a new “privacy request” process, where anyone whose face or voice is digitally recreated and used to misrepresent or promote content can request to have the content removed from the platform.

In last week’s policy post, YouTube said that in the future, it also plans to penalize creators who repeatedly avoid disclosing this information.

YouTube follows in other social media platforms’ footsteps with the introduction of AI labels on content. But placing full trust in creators to responsibly label their own content may not be enough. It remains to be seen how successful YouTube will be at identifying AI-generated content and enforcing penalties against those who don’t comply.
