November 21, 2024

Facebook, Instagram to label AI-generated content

Parent company Meta will apply prominent labels to digitally altered content deemed at “high risk” of materially deceiving the public on matters of importance

Meta, the parent company of Facebook and Instagram, announced major changes on Friday to its policies on digitally created and altered media, ahead of elections that will test its ability to police misleading content generated by artificial intelligence. Beginning in May, the social media giant will apply a “Made with AI” label to AI-generated videos, images, and audio posted on Facebook and Instagram, Vice President of Content Policy Monika Bickert wrote in a blog post, expanding a policy that previously covered only a narrow slice of doctored videos.

Bickert said Meta will also apply separate, more prominent labels to digitally altered media that poses a “particularly high risk of materially deceiving the public on a matter of importance,” regardless of whether it was created with AI or other tools. A spokesperson said the company will begin applying these more prominent “high-risk” labels immediately.

The approach marks a shift in how the company treats manipulated content: rather than removing a limited set of posts, it will leave the content up while informing viewers about how it was made.

Meta had previously announced a plan to detect images created with third-party generative AI tools by means of invisible markers embedded in the files, but gave no start date for the effort at the time.

A company spokesperson said the labelling approach will apply to content posted on Facebook, Instagram, and Threads. Its other services, including the WhatsApp messaging app and Quest virtual reality headsets, are covered by different rules.

The changes come months ahead of the US presidential election in November, which tech researchers warn could be significantly influenced by generative AI technologies. Political campaigns have already begun deploying AI tools, notably in places such as Indonesia, pushing the boundaries of guidelines set by platforms like Meta and leading generative AI provider OpenAI.

In February, Meta’s oversight board called the company’s existing rules on manipulated media “incoherent” after reviewing a video of Joe Biden posted on Facebook last year, which had been altered to falsely suggest the US president had behaved inappropriately.

The video was allowed to remain online because Meta’s existing “manipulated media” policy bars misleadingly altered videos only if they are produced by AI or if they make people appear to say words they never actually said.

The oversight board recommended that the policy also cover non-AI content, which can be “just as misleading” as AI-generated material, as well as audio-only content and videos depicting people doing things they never actually did or said.
