Meta Shares Update on Detected Foreign Influence Operations

Meta continues to evolve its tactics to detect and counter foreign influence operations.

While debate continues over the level of threat that TikTok actually poses, findings like this don’t help its case.

Today, Meta published its latest “Adversarial Threat Report,” which provides an overview of the various coordinated manipulation efforts detected and removed from Meta’s apps in Q1 2024.

And among them:

“We removed 37 Facebook accounts, 13 Pages, five Groups, and nine accounts on Instagram for violating our policy against coordinated inauthentic behavior. This network originated in China and targeted the global Sikh community, including in Australia, Canada, India, New Zealand, Pakistan, the UK, and Nigeria.”

China-based groups have long sought to use social media platforms to influence opinion on issues related to China’s political ambitions. Indeed, China-based networks are among the largest and most persistent, and there’s direct evidence suggesting that these groups are funded by the Chinese Government, both to influence global opinion and to drive beneficial outcomes for the CCP.

As such, TikTok, a Chinese-owned app with significant influence in regions outside of China, seems like an ideal vector for the same. And while few specifics have been shared publicly about the actual threat TikTok poses in this respect, it does logically follow that the app could present a risk, now or in the future.

We may get more insight into this as part of TikTok’s challenge to the U.S. law requiring that it be sold into U.S. ownership. Either way, findings like this reiterate the scale and ambition of such groups, and they’re another reason why TikTok is under scrutiny.

Meta also disrupted operations originating from Bangladesh, Croatia, Iran, and Israel in Q1, while it continues to combat a Russian network of influence operations called “Doppelganger,” which is focused on weakening international support for Ukraine.

“Nearly two years ago, we were the first technology company to publicly report on Doppelganger, an operation centered around a large network of websites spoofing legitimate news outlets. The EU Disinfo Lab and the Digital Forensic Research Lab published open source research at the same time. In December 2022, we were first to publicly attribute it to two companies in Russia who were sanctioned by the EU in 2023 and by the US Treasury Department in 2024.”

Meta has also provided a specific update on the use of AI in misinformation and deception efforts, and how its countermeasures are holding up thus far:

“So far, we have not seen novel GenAI-driven tactics that would impede our ability to disrupt the adversarial networks behind them. We’ve observed instances of: photo and image creation, AI-generated video news readers, and text generation. We have not seen threat actors use photo-realistic AI-generated media of politicians as a broader trend at this time.”

The qualifier “at this time” is important, because the expectation is that, over time, more and more of these groups will employ AI-based tactics. But it hasn’t been a major factor as yet, and Meta continues to refine its detection systems in response.

“While we continue to monitor and assess the risks associated with evolving new technologies like AI, what we’ve seen so far shows that our industry’s existing defenses, including our focus on behavior (rather than content) in countering adversarial threat activity, already apply and appear effective.”

Overall, the threat actors identified in Meta’s latest report remain largely the same, driven by largely the same ambitions, and Meta continues to evolve its approaches to detect and remove each operation before it can have significant influence.

But the report also underlines the fact that this type of activity is persistent and always evolving. Foreign adversaries are constantly seeking to use high-reach, high-influence platforms like social media to amplify their messaging, which is why it’s important for Meta, and other platforms, to keep improving their detection and removal efforts.

You can read Meta’s latest “Adversarial Threat Report” here.