Anticipating forthcoming elections in India and the United States, policymakers are contending with the challenge of addressing deepfakes and AI-generated content. In response to this critical issue, Meta, the parent company of Facebook, Instagram, and Threads, announced a significant step in its strategy on Tuesday.
In the coming months, Meta plans to label AI-generated images posted across its platforms. The objective is to give users greater transparency about the authenticity of the content they encounter online.
Nick Clegg, President of Global Affairs at Meta, emphasized the importance of this initiative, stating, “We will require people to use this disclosure and label tool when they post organic content with a photorealistic video or realistic-sounding audio that was digitally created or altered, and we may apply penalties if they fail to do so.”
Additionally, Meta aims to implement a feature allowing users to voluntarily disclose when sharing AI-generated video or audio. This disclosure will trigger Meta to add a visible label, alerting viewers to the artificial nature of the content.
Clegg highlighted the potential impact of these measures, explaining that if the company determines that digitally created or altered image, video, or audio content creates a particularly high risk of materially deceiving the public on a matter of importance, “we may add a more prominent label if appropriate, so people have more information and context.”
With Meta’s family of apps reaching 3.19 billion daily users, the implications of these measures are substantial. The company underscored its commitment to collaborating with industry partners to establish common technical standards for identifying AI-generated content, including video and audio.
“We’ve labeled photorealistic images created using Meta AI since it launched so that people know they are ‘Imagined with AI,’” Clegg remarked, emphasizing Meta’s proactive approach in this regard.
Furthermore, Meta is actively participating in discussions with other industry players, such as Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock, to develop comprehensive standards for identifying AI-generated content. Through forums like the Partnership on AI (PAI), Meta aims to ensure alignment with best practices in the field.
Clegg acknowledged the evolving nature of the debate surrounding AI-generated content, envisioning ongoing discussions on authentication methods for both synthetic and non-synthetic content. “These are early days for the spread of AI-generated content,” he observed. “As it becomes more common in the years ahead, there will be debates across society about what should and shouldn’t be done to identify both synthetic and non-synthetic content.”