
Meta Platforms announced on Thursday that it had fixed a significant error that inundated the personal Reels feeds of Instagram users worldwide with violent and graphic videos.
The extent of the glitch’s impact remains unclear, but complaints flooded social media as users voiced frustration over violent and “not safe for work” content surfacing in their feeds; even users who had activated the “sensitive content control” feature designed to filter out such material were affected.
A Meta spokesperson stated, “We have fixed an error that caused some users to see content in their Instagram Reels feed that should not have been recommended. We apologize for the mistake.” However, the company did not provide details on what caused the issue.
The incident comes as Meta’s moderation policies face heightened scrutiny. The company recently decided to discontinue its U.S. fact-checking program across Facebook, Instagram, and Threads, platforms that collectively serve more than 3 billion users worldwide.
Meta’s policies strictly prohibit violent and graphic content, and the company typically removes such videos to safeguard users, with limited exceptions for content intended to raise awareness of issues such as human rights abuses and armed conflicts.
In recent years, Meta has relied increasingly on automated moderation tools, a strategy expected to intensify following the discontinuation of fact-checking in the U.S. The company has faced criticism for failing to balance content recommendations with user safety, a record underscored by earlier incidents: the spread of violent content during the Myanmar genocide, the promotion of eating-disorder content to teens, and the circulation of misinformation during the COVID-19 pandemic.