According to a report by consumer watchdog Which?, big tech companies such as Facebook and Google aren’t doing enough to prevent scam advertisements on their platforms.
Scam Ads Are Rarely Removed Even After Being Reported
The report by Which? laid out some statistics that show how little these companies are doing to remove scam adverts from their platforms.
Adam French, Consumer Rights Expert at Which?, said:
Our latest research has exposed significant flaws with the reactive approach taken by tech giants including Google and Facebook in response to the reporting of fraudulent content – leaving victims worryingly exposed to scams.
In Google’s case, 34% of fraud victims said that even after reporting a scam ad, it wasn’t taken down by the search engine. For Facebook this number was 26%.
When it comes to the scams themselves, a quarter of the victims said that they were scammed via an ad on Facebook. Additionally, 19% of the victims reported being targeted via Google adverts. In Twitter’s case, this number was 3%. These numbers aren’t surprising considering the sheer number of ads users are shown when scrolling through these platforms.
About 43% of victims said that they didn’t even report the ad that scammed them.
Furthermore, the study revealed victims’ frustrations with the reporting processes and responses of these companies. Victims felt that although Facebook’s reporting process was fairly straightforward, the company wouldn’t do anything about the advert. Google’s reporting process, on the other hand, was found to be cumbersome; many victims simply did not know how to report a fraudulent ad to Google.
The Response by Facebook and Google
Facebook and Google both responded to the Which? report, primarily enumerating the actions each company has taken against scam ads.
Google claims to have blocked over 3.1 billion ads on its platform for violating its policies. In addition, the company stated that reports of bad ads are always reviewed manually, while potential policy violations are reviewed both manually and automatically.
Facebook said that it employs a 35,000-member team of safety and security experts who “work alongside sophisticated AI to proactively identify and remove” malicious ads from the platform.
Facebook further stated:
Our teams disable billions of fake accounts every year and we have donated £3 million to Citizens Advice to deliver a UK Scam Action Programme.
While Facebook and Google claim to be doing their part to battle bad ads, the report by Which? suggests these measures are falling short. It is imperative that these platforms be given a legal responsibility to tackle harmful and fraudulent content on their websites.