TikTok bans paid political influencer videos ahead of US midterms
TikTok lays out its strategy for combating misinformation ahead of the election.
TikTok is banning influencers from posting paid political content, one of the steps the social network is taking to shore up its platform against misleading information ahead of the 2022 US midterm elections.
While TikTok doesn’t allow any political advertisements, the 2020 election showed the company needed to educate influencers specifically on the rules around paid content, Eric Han, head of US safety at TikTok, said Wednesday in a statement. Any paid political content the company identifies will be removed.
For all users, organic video posts referencing politics or the elections will remain on the app as long as they follow the platform’s community guidelines, Han said on a call with reporters. That means they can’t spread misinformation about how to vote, call for harassment of election workers, post deepfakes of candidates or incite violence. Videos that may violate guidelines will be restricted from widespread distribution while moderators evaluate the posts, he said.
In addition, the company is launching an elections center on the app that will provide information about polling places, ballots and candidates from authoritative partners such as the National Association of Secretaries of State. Resources for deaf voters will be provided from the Center for Democracy in Deaf America.
Hashtags like “#elections2022” will be added to content identified as being election-related or from the US government, politicians or political parties to make it easier for users to find the center, Han said in the statement.
Social media has been a breeding ground for misinformation and harassment during US election cycles. Notably, Facebook and Instagram unwittingly sold ads to Russian trolls aiming to sow discord among US voters before the 2016 general election. Four years later, misinformation about that election’s legitimacy spread on social media sites, while insurrectionists used platforms like Facebook to stage an assault on the US Capitol. Major social media companies have since been devising policies and moderation procedures to combat such content.
On Monday, Meta Platforms Inc. rolled out its playbook, sticking to the tactics the company used in 2020 to police political advertisements and organic content. Its effort largely focuses on scrubbing Facebook and Instagram for misinformation about voting logistics and restricting new political ads in the week before Election Day.
Because TikTok bans political ads outright, the short-form video platform owned by China-based ByteDance Ltd. will focus much of its effort on content that users post organically, using a mix of automated and human moderation. The company has been working with fact-checkers PolitiFact, Science Feedback and Lead Stories to help identify election-related keywords, audio, codewords and other signals that should trigger scrutiny, Han said on the call. The work will continue through the election.
Any post that may violate the guidelines and requires additional review from human moderators will be tagged and restricted from appearing on “For You,” the app’s home feed where videos can go viral and reach a larger audience. The limitation is removed if a video passes the app’s guidelines; the content is taken down if it fails. If the review is inconclusive, the restriction will remain to throttle potential views, and users who try to share the video will see a pop-up message warning that the content may contain unverified claims.
The company will also redirect user searches or hashtags that promote misinformation to the community guidelines, Han said.