On Friday, April 5, Meta announced that it will begin labeling AI-generated content with a “Made with AI” tag starting in May 2024. According to Monika Bickert, vice president of content policy at Meta, the decision follows extensive public surveys, consultations with academics, and recommendations from Meta’s Oversight Board.
As Meta stated, “We are making changes to the way we handle manipulated media on Facebook, Instagram, and Threads based on feedback from the Oversight Board that we should update our approach to reflect a broader range of content that exists today and provide context about the content through labels.”
The board also recommended changes to the moderation of AI-generated content that does not violate community standards. According to Meta, a less restrictive approach to manipulated content, such as adding contextual labels rather than removing the content outright, will better protect freedom of speech. The manipulated media policy released in 2020 covered only AI-generated or AI-altered videos. Since then, AI-generated content has advanced significantly to include audio and photos, requiring an update to the earlier policy.
AI Detection Parameters
Meta stated on its blog in February that it will detect AI content based on two key parameters:
- Detection of industry-shared signals of AI images
- Self-disclosure of AI-generated content
Instead of directly removing manipulated content, Meta will display a contextual label providing information about the content, reducing the risk of deceiving the public. Although the company supports free speech, content that violates community policies, such as bullying, harassment, violence, and incitement, will be removed immediately. In consultations with 120 stakeholders across 34 countries, most stakeholders supported labeling and self-disclosure of AI-generated content. They also backed the proposal to limit removal of manipulated content to cases that violate company policies.
Meta has already issued a timeline for rolling out these changes, giving users time to understand the self-disclosure process and adjust their content to avoid having it removed from Instagram, Facebook, and Threads.