Data Shows X Is Suspending Far Fewer Users for Hate Speech
X has shared insight into how its updated approach to moderation is impacting its enforcement.
Elon Musk’s X has released its first transparency report since rebranding from Twitter, outlining the reported rule violations, content removal requests, and other enforcement actions it undertook over the first half of this year.
And given X’s updated approach to content moderation, many have been combing through the data to see how Musk’s more relaxed rules have impacted the user experience. But in terms of total actions, the results are actually pretty similar, at least based on the most recent comparative reports from Twitter.
First off, X has provided an overview of the total enforcement actions that it took based on detected rule violations between January and June 2024.
As you can see in this listing, hateful content, abuse and harassment, and violent content were the main reasons for removals, as they’ve traditionally been in X/Twitter’s reports.
In total, X reports that it suspended 5.2 million accounts in the first half of this year based on rule violations, and removed 10.7 million posts due to the same.
Which is more than what Twitter reported in its last published transparency report, which covered the first half of 2022.
Back then, Twitter’s team (pre-Elon) enforced the removal of 6.6 million posts, and took action against 5 million accounts, though Twitter noted that only 1.6 million of those accounts were actually suspended for violations, far fewer than X’s recent total.
In the first half of 2021, Twitter reported that it had removed 5.9 million tweets, and took action against 4.8 million accounts. It suspended 1.2 million of those profiles.
So on balance, X is removing a lot more content, and suspending more accounts, despite its new, more open moderation policies.
That’s likely reflective of a broader reliance on automation, due to reduced staffing, which now sees X taking a more binary view of violations. So accounts that would once have been asked to remove offending content are now being suspended more readily.
With that in mind, X’s 5.2 million suspensions are actually very close to the 5 million accounts that Twitter took action against, while X is removing almost double the amount of content for violations.
Which is probably not the result that many expected.
Though the specifics here are more in line with broader concerns.
According to X’s data, users reported 81 million incidents of abuse and harassment in the period, and 66 million cases of hate speech. Of those, X suspended just 1.35% of the accounts reported for abuse, and 0.004% of the accounts reported for hate speech. Millions of hate speech posts were removed, but X is now far less likely to suspend users for posting them.
In 2022, 111k profiles were suspended for hateful conduct, versus 2.3k in this report.
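As a rough sanity check, those figures do line up if we assume, purely for illustration, that the cited percentage is simply accounts suspended divided by user-reported cases (the report excerpt doesn’t spell out how X derives it):

```python
# Back-of-the-envelope check. Assumption (not stated in the article):
# the cited percentage is accounts suspended divided by user-reported cases.
hate_speech_reports = 66_000_000   # reported cases of hate speech, H1 2024
hate_speech_suspensions = 2_300    # ~2.3k accounts suspended for hateful conduct

rate = hate_speech_suspensions / hate_speech_reports * 100
print(f"Suspension rate: {rate:.4f}%")  # ≈ 0.0035%, consistent with the ~0.004% cited
```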
In terms of increases, X suspended over 2 million more accounts for child safety than it did in 2022, and around 400k more “deceptive entities.”
It is, however, taking less action against suicide and self-harm content (2k suspensions versus 11k in 2022).
In terms of legal requests, X received 72,000 legal demands in the first half of the year, and complied with 70% of them.
In 2022, Twitter received 53,000 legal demands, versus 43,387 in 2021, and it had previously complied with around 54% of these requests.
So, overall, despite Elon Musk’s claims of a more open and “free speech” aligned network, X is actually removing more content, suspending more users, and actioning more legal requests than Twitter had been. Yet its policies now allow those promoting hate speech to remain active, while its rules around self-harm content also seem to be enforced less harshly.
None of this is overly surprising. X has clearly stated that it’s now relying more on reach restrictions than on account suspensions and content removals, and Elon himself has made a big show of opposing certain government orders while complying with others.
All of the data here aligns with this, but at the same time, X is removing more content, and suspending more accounts overall, under this approach.
Is that a better situation for the app? I mean, advertisers clearly don’t think so, but I would also question whether this is a more “free speech” friendly approach than what previous Twitter management had enforced.
It does seem that Twitter staff had actually been less likely to suspend accounts overall, based on the numbers, which meant more opportunity for rectification, as opposed to binary bans.
Or maybe X is simply attracting more of the content that warrants suspension under its areas of focus.
It does seem like a riskier strategy overall, but it is interesting to note the comparative variance in X’s moderation approach.
You can check out X’s latest transparency report here.