Meta announces new policy on AI-generated media ahead of US polls.
Social media giant Meta has unveiled revisions to its policies for handling digitally manipulated content ahead of the US elections later this year. In a blog post, Vice President of Content Policy Monika Bickert said the platform will begin applying "Made with AI" labels to AI-generated videos, images and audio from May onwards.
This expands the policy's previously narrow scope to a much broader range of AI-altered material. Meta will also apply more prominent labels to digitally altered content that poses a particularly high risk of deceiving the public on matters of importance.
The new approach shifts the company's handling of manipulated media from outright removal to keeping it accessible alongside contextual warnings, prioritising transparency over suppressing information.
A spokesperson clarified that the policy will apply to Facebook, Instagram and Meta's other major services. Detection of watermarks embedded by generative AI tools, and the roll-out of the "high risk" labels, will begin immediately.
Experts have warned that new generative AI technologies could influence the upcoming polls in novel ways, and political campaigns in other countries have already begun experimenting with such tools.
Meta's update is a pre-emptive attempt to adapt its guidelines to evolving threats, as generative technology looks set to test platforms' content moderation abilities like never before.