Twitter Ads Suspended By Top Brands Because Of Child Exploitation

At least 30 major advertisers have dropped advertising on Twitter after it was revealed that their ads were displayed alongside tweets soliciting illegal child abuse content.

For example, a promoted tweet from Scottish Rite Children’s Hospital in Texas was shown alongside toxic tweets related to child sexual abuse.

Advertisers Revolt Against Twitter

Reuters reported that at least 30 large brands paused their advertising campaigns after it was revealed that their promoted tweets were shown alongside toxic tweets.

A Twitter spokesman was quoted as saying that the research and conclusions by the cybersecurity firm (which studied tweets and accounts during the first twenty days of September 2022) were not representative of Twitter’s efforts to combat illicit activities.

But the article by Reuters quoted multiple big brand advertisers who were notified that their ads appeared next to the toxic tweets.

Reuters quoted an executive from Cole Haan:

“We’re horrified,” David Maddocks, brand president at Cole Haan, told Reuters after being notified that the company’s ads appeared alongside such tweets.

“Either Twitter is going to fix this, or we’ll fix it by any means we can, which includes not buying Twitter ads.”

Twitter’s Inability to Accurately Detect Toxic Content

The background to Twitter’s toxic content problem first came to light in an article published by The Verge.

The article recounts a Twitter project to create a platform similar to OnlyFans, where creators could charge users for sexually explicit content.

Before launching the new service, Twitter tasked a group of employees with testing whether the company could successfully weed out harmful content, so that the platform didn’t devolve into a hub for sharing illegal material.

This group of employees was called the Red Team.

Twitter’s project was halted when the Red Team determined that Twitter was incapable of detecting abusive and toxic content.

According to The Verge article:

“What the Red Team discovered derailed the project: Twitter could not safely allow adult creators to sell subscriptions because the company was not — and still is not — effectively policing harmful sexual content on the platform.

“Twitter cannot accurately detect child sexual exploitation and non-consensual nudity at scale,” the Red Team concluded in April 2022. The company also lacked tools to verify that creators and consumers of adult content were of legal age, the team found.”

So, in the spring of 2022, Twitter concluded that it was ill-equipped to launch the service and shelved it.

However, according to the cybersecurity firm Ghost Data, Twitter continued to have difficulties catching rogue users and accounts that were sharing illicit content.

Ghost Data conducted an investigation in September 2022 to discover how widespread the child exploitation problem was on Twitter.

Starting with a group of known child exploitation accounts, the researchers mapped out toxic accounts through the follower connections between them, eventually identifying over 500 accounts responsible for nearly 5,000 tweets related to illicit child abuse activities.
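The report describes this mapping only at a high level. Purely as an illustration, and not Ghost Data’s actual tooling, a seed-and-expand traversal of follower connections could look like the sketch below, where `expand_from_seeds` and `get_followers` are hypothetical names and the follower data is assumed to come from whatever account data a researcher has collected:

```python
from collections import deque

def expand_from_seeds(seed_accounts, get_followers, max_accounts=500):
    """Breadth-first expansion over follower connections, starting from
    known seed accounts. get_followers(account) is a hypothetical lookup
    that returns the accounts following the given account."""
    found = set(seed_accounts)
    queue = deque(seed_accounts)
    while queue and len(found) < max_accounts:
        account = queue.popleft()
        for follower in get_followers(account):
            if follower not in found:
                found.add(follower)
                queue.append(follower)
    return found

# Toy usage with an in-memory follower graph:
graph = {"seed1": ["a", "b"], "a": ["c"], "b": [], "c": []}
accounts = expand_from_seeds(["seed1"], lambda acct: graph.get(acct, []))
```

In practice, each candidate surfaced this way would still need human review before being flagged; the traversal only narrows the search space.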

The researchers noted that these accounts were all in English and that they hadn’t investigated the child abuse Twitter networks in other languages.

They concluded that further research into non-English accounts may reveal even more users sharing child abuse content.

Researchers Claim Twitter Ineffectual

A startling finding from the report is that, during the first twenty days of September 2022, Twitter took action against just over a quarter of the accounts the researchers identified as sharing explicit child abuse content.

The researchers wrote (PDF):

“We noted that Twiter (sic) suspended less than 30% (27.5%) of the users that publicly shared child pornography links, material, and references in the first 20 days of September.

To date, still, more than 400 users are active after «the purge».

Many of these users have been active for months.”

The researchers concluded that, though they identified many illicit activities and accounts on Twitter, these likely represent just a fraction of the true scope of the problem.

They observed that Twitter could do a better job at stopping the toxic activities:

“These results validate the alarming problem already discovered by internal employees and exposed by online media: Twiter (sic) cannot accurately detect child sexual exploitation and its executives are doing almost nothing to address this disaster.

We also uncovered evidence that such permissive policies on pornographic content are also inducing users to post non-consensual and rape videos, not to mention minors trying to sell their own nudity or sexual content.

…Probably a modest investment and a dedicated team, even applying our own basic techniques, would be suficient (sic) to easily locate and drastically reduce the illicit activities…”

That conclusion by cybersecurity firm Ghost Data seemingly contradicts a statement issued by Twitter, and reported by Reuters, that Twitter has “zero-tolerance” for these kinds of activities, given that months have passed since Twitter’s Red Team identified shortcomings in detecting toxic content.

Reuters also reported that Twitter stated that it is hiring more employees to “implement solutions.”


Citations

Read the full report by cybersecurity firm Ghost Data (PDF)

Read the report by Reuters

EXCLUSIVE Brands blast Twitter for ads next to child pornography accounts

Read the report by The Verge on how Twitter abandoned an OnlyFans competitor

How Twitter’s child porn problem ruined its plans for an OnlyFans competitor

Featured image by Shutterstock/Pixel-Shot