Meta, the company behind Facebook, has ramped up its efforts to tackle fake engagement and impersonated content. In the first half of 2025 alone, the platform removed more than 10 million fake accounts and took enforcement action against about 500,000 spam-related profiles.
In a blog post shared on Monday, Meta said the measures are part of a broader strategy to eliminate inauthentic activity, discourage duplicate content, and promote originality. “We’re making progress,” the company noted, explaining that it had targeted hundreds of thousands of accounts involved in spam or fake engagement, and deleted millions of profiles impersonating top content creators.
Meta emphasized that repeated posting of recycled content—whether text, images, or video—harms the credibility of the platform. Such practices drown out authentic voices and make it harder for new, original creators to gain visibility.
To help support genuine content creators, Meta is rolling out new features that can automatically identify reposted content and trace it back to its original source. The aim is to ensure that those who create original content receive proper recognition and greater reach.
Meta added that pages consistently posting original material typically achieve wider distribution on Facebook. The company no longer considers simply stitching together video clips or adding a watermark to be meaningful editing; posts that deliver real value and tell authentic stories are more likely to succeed.
The company also warned that uploading videos bearing watermarks from other platforms could result in lower reach or loss of monetisation privileges.
As part of these updates, Meta has enhanced its Professional Dashboard with post-level insights, letting creators monitor the performance of individual posts and check the Support Home screen for any penalties affecting their content's reach or monetisation.
In a related move, YouTube also revised its monetisation policies. Content that appears mass-produced or overly repetitive may no longer be eligible for ad revenue.
The change initially caused confusion among creators, some of whom feared a blanket ban on AI-generated content. YouTube later clarified that content made with AI tools to enhance storytelling remains welcome and eligible for monetisation.
Both Meta and YouTube stress that these changes aim to boost content quality and protect the rights of original creators amid growing competition in the digital content space.