Meta’s recent policy changes on Facebook have reportedly led to a measurable rise in violent content and harassment across the platform.
Bullying and harassment now affect roughly 0.07–0.08% of content views, while the prevalence of violent content rose to about 0.09%, according to Meta's quarterly enforcement report.
The shift follows a January revision of Facebook’s moderation guidelines, relaxing restrictions on controversial speech topics including gender identity and immigration.
Meta also narrowed its hate speech definition and shifted to community-sourced fact-checking, replacing the third-party fact-checking program it had relied on for previous enforcement.
These changes, intended to reduce moderation errors and increase speech latitude, have raised concerns about unchecked misinformation and online hostility.
Meta claims that moderation error rates have dropped by as much as 50%, framing the change as a gain in enforcement precision.
While some view this as improved platform transparency, others argue it enables more aggressive content to bypass filters and gain traction.
The company admits that user feedback remains mixed, as people navigate the effects of looser policies on community safety and dialogue.
Historical trends show that quarterly spikes in harassment aren't new, but the recent increases may be more directly tied to the rule changes.
The platform has promised ongoing evaluations of its new system and adjustments if harmful behavior or misinformation trends continue to escalate.
