safety

Meta’s celeb scam breakthrough

Meta is testing a new facial recognition system to detect scam celebrity ads

Martin Crowley
October 22, 2024

Meta is trialing a new facial recognition system designed to combat celebrity scam ads, making its social media platforms, namely Instagram and Facebook, "difficult for scammers to use."

Scammers often use AI and fake images of celebrities or public figures to lure people into engaging with their ads, which lead to scam websites where victims are asked to share private information or even send money.

Meta’s new facial recognition system, which is being tested on 50,000 celebrities worldwide who have been affected by this type of scam, compares the celebrity images used in suspected scam ads against those celebrities’ profile pictures. If there’s a match and the ad is determined to be a scam, the ad is blocked immediately.
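To make that comparison step concrete, here is a minimal, hypothetical sketch, not Meta's actual system: it assumes a face-recognition model has already converted the face in the ad and the celebrity's profile pictures into embedding vectors, and it simply combines a similarity check with a separate scam classification of the ad. The function names, threshold, and embedding size are all placeholders for illustration.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def should_block_ad(ad_face_embedding: np.ndarray,
                    profile_face_embeddings: list[np.ndarray],
                    ad_flagged_as_scam: bool,
                    match_threshold: float = 0.8) -> bool:
    """Block an ad only if the face in it matches the public figure's
    profile pictures AND the ad has separately been classified as a scam.
    (Illustrative only; the threshold and embeddings are placeholders.)"""
    if not ad_flagged_as_scam:
        return False
    best_match = max(cosine_similarity(ad_face_embedding, ref)
                     for ref in profile_face_embeddings)
    return best_match >= match_threshold

# Example with random placeholder embeddings; a real system would use
# a face-recognition model to produce these vectors.
ad_embedding = np.random.rand(128)
profile_embeddings = [np.random.rand(128) for _ in range(3)]
print(should_block_ad(ad_embedding, profile_embeddings, ad_flagged_as_scam=True))
```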

Early tests have shown “promising results,” with the system improving the speed and efficacy of detecting and blocking celebrity scam ads.

The celebrities involved in the trial have been sent an in-app notification explaining that they have been opted in by default, along with details on how to opt out if they wish.

This isn’t Meta’s first foray into facial recognition. In 2021, the company was forced to pull Facebook’s "face recognition" photo-tagging feature, and it has settled two lawsuits over the technology: one for $650M, for violating privacy legislation, and another for $1.4B, for using the technology without permission.

And while Meta has put the new system through a "robust" privacy review involving "regulators, experts, policymakers and other key stakeholders," questions are still being raised about privacy law, with the company declining to test it in the UK or EU, where comprehensive data protection regulations apply.