Instagram users across the globe have recently reported a surge in violent and explicit content appearing on their feeds. Many took to X (formerly Twitter) and Reddit to share their concerns, stating they were seeing graphic images of severe injuries, dead bodies, and violent attacks.

While some of these posts were labelled as “Sensitive Content,” they remained accessible, raising questions about Instagram’s moderation policies.

One user took to X to express shock: “Wtf is happening to Instagram I literally saw the most violent scary shit a person have the misfortune to watch.”

What caused this surge in graphic content?

The sudden influx of such disturbing content left users wondering whether it was a technical glitch or a deliberate change in Instagram’s algorithm.

Meta addressed the issue on Thursday, confirming that an “error” had caused the problem. The company apologised and assured users that the issue had been fixed.

“We have fixed an error that caused some users to see content in their Instagram Reels feed that should not have been recommended. We apologise for the mistake,” a Meta spokesperson told CNBC.

Meta’s content moderation policies

Meta clarified that the flood of violent videos was not linked to recent changes in its content moderation policies. The company has been shifting its approach, avoiding censorship of content unless it constitutes a high-severity violation.

Instead of relying heavily on AI to proactively remove or demote certain posts, Meta now allows more content to remain visible unless users report it. The aim, according to CEO Mark Zuckerberg, is to prevent unnecessary censorship while still addressing the most severe violations.

Under its current policy, Meta removes content that is excessively violent, including footage depicting “dismemberment, visible innards, or charred bodies.” It also prohibits posts containing “sadistic remarks” about human or animal suffering.

However, some graphic content is permitted if it is intended to raise awareness of human rights violations, armed conflicts, or terrorism. Such posts may be restricted with warning labels rather than being removed entirely.

Users still reporting disturbing content

Despite Meta’s announcement that the issue had been resolved, some users claim they are still encountering violent and graphic content on their feeds.

(With inputs from agencies)