The new statement on the YouTube Official Blog might seem like a light at the end of the tunnel, but only for the likes of Drake and the Weeknd. That is, for major label artists.
From now on, record companies will be able to request the removal from YouTube of AI-generated content that mimics particular artists. One of the most notorious examples is a track that used AI to imitate the voices of Drake and the Weeknd, uploaded by Ghostwriter. YouTube's management announced the decision on the Official Blog, specifying some details.
“We believe it’s in everyone’s interest to maintain a healthy ecosystem of information on YouTube”, said Jennifer Flannery O’Connor and Emily Moxley, Vice Presidents of Product Management at YouTube. “We have long-standing policies that prohibit technically manipulated content that misleads viewers and may pose a serious risk of egregious harm. However, AI’s powerful new forms of storytelling can also be used to generate content that has the potential to mislead viewers—particularly if they’re unaware that the video has been altered or is synthetically created”.
Although the same post indicates that the platform has combined human resources and machine learning systems to ensure its safety, it’s clear that nothing actually prevents a user from uploading deepfake content. It comes down to the user and how they choose to identify their video. The new policy, however, requires content producers to tick the relevant boxes if their work “contains realistic altered or synthetic material”. “For example, this could be an AI-generated video that realistically depicts an event that never happened, or content showing someone saying or doing something they didn’t actually do”.

This sounds like a good idea, but the question is whether all content creators, especially those breaking copyright law, are really keen on telling the truth. Probably not. YouTube warns that those who repeatedly choose not to disclose this information over some period of time (the length is not specified) will be subject to content removal, suspension and other forms of digital punishment. But how would this prevent them from creating new accounts and uploading everything again as if nothing had happened?
If a creator classifies a video as synthetic or AI-generated, a special label will be added to its description.
Additionally, the post offers some insight into the content removal process, a feature that will be tested over the next couple of months with a limited number of labels and distributors. In the future, however, removal requests will need to go through a moderation process, which Ms Flannery O’Connor and Ms Moxley detail in the post.
“So in the coming months, we’ll make it possible to request the removal of AI-generated or other synthetic or altered content that simulates an identifiable individual, including their face or voice, using our privacy request process. Not all content will be removed from YouTube, and we’ll consider a variety of factors when evaluating these requests. This could include whether the content is parody or satire, whether the person making the request can be uniquely identified, or whether it features a public official or well-known individual, in which case there may be a higher bar”. This implies that major labels and YouTube partners will find it easier to make a case for removal than independent artists and smaller record companies.
On top of that, YouTube seems rather enthusiastic about the use of generative AI. This, however, doesn’t explain why the platform lacks a mechanism that would identify deepfakes or similar content and block them at upload.
“We’re still at the beginning of our journey to unlock new forms of innovation and creativity on YouTube with generative AI”, explain Ms Flannery O’Connor and Ms Moxley. “We’re tremendously excited about the potential of this technology, and know that what comes next will reverberate across the creative industries for years to come”. One can only guess what they mean by that last sentence. Time will tell.