YouTube targets scams generated by artificial intelligence

Videos created with artificial intelligence (AI) will soon have to be labeled as such on YouTube, and users will be able to ask the platform to remove AI-generated scams, the site recently announced.

These new rules are expected to come into force in the coming months. Many fear that videos containing AI-generated elements could be used to manipulate elections, commit fraud, or depict people in fabricated situations (e.g. fake pornographic videos).

“We will provide the ability to request removal of AI-generated content or other content that has been manipulated to simulate an identifiable person, including their face and voice,” Jennifer Flannery O’Connor and Emily Moxley, vice presidents of product management, said in a blog post.

The criteria considered for a removal request include, in particular, whether the content is parody or satire and whether it depicts a real person who can be identified, according to the platform owned by Alphabet (Google’s parent company).

The site will also require creators to clearly indicate when realistic content was created using artificial intelligence, so that viewers are alerted by a specific label.

This could be an AI-generated video that realistically depicts an event that never happened, or content that shows someone saying or doing something they didn’t say or do.

This is particularly important in cases where the content concerns sensitive topics such as elections, conflicts, public health crises or personalities, the vice presidents said.

Failure to comply with these guidelines will result in content being removed from YouTube or excluded from its advertising revenue sharing program.

Artists can intervene

YouTube also plans to give the music industry the ability to request the removal of content created to copy an artist’s style.

The platform says it will take particular action against videos that imitate an artist’s unique singing or rapping voice.

Several YouTube channels owe their success to AI-generated cover versions, often parodies or satires. Many of these creators’ channels could be put at risk by record labels that take a dim view of their videos.

YouTube has not yet issued specific guidelines for this type of content, but says it intends to exempt content that reports on, analyzes, or critiques synthetic voices from removal.

Meta is also taking precautions

YouTube is not the only platform cracking down on AI-generated content. Meta (Facebook, Instagram) announced on November 8 that it will require political campaigns to be transparent about their use of AI in advertising, starting in 2024.

“Advertisers must disclose whenever an election, political, or social ad contains a photorealistic image or video, or realistic-sounding audio, that has been digitally created or altered to portray a real person as saying or doing something they did not say or do,” explained a Meta spokesperson.

Users of Meta’s platforms will be informed of the existence of such content.

With information from Agence France-Presse