How Facebook does (and doesn’t) shape political views



This is Platformer, a newsletter on the intersection of Silicon Valley and democracy from Casey Newton and Zoë Schiffer. Sign up here.

Today let’s talk about some of the most rigorous research we’ve seen to date on the subject of social networks’ influence on politics — and the predictably intense debate around how to interpret it.  

I.

Even before 2021, when Frances Haugen rocked the company by releasing thousands of documents detailing its internal research and debates, Meta faced frequent calls to cooperate with academics on social science. I’ve argued that doing so is ultimately in the company’s interest, as the absence of good research on social networks has bred strong convictions around the world that social networks are harmful to democracy. If that’s not true — as Meta insists it is not — the company’s best path forward is to enable independent research on that question.

The company long ago agreed, in principle, to do just that. But it has been a rocky path. The Cambridge Analytica data privacy scandal of 2018, which originated in an academic research partnership, made Meta understandably anxious about sharing data with social scientists. A later project with a nonprofit named Social Science One went nowhere: Meta took so long to produce data that the project’s biggest backers quit before it had produced anything of note. (Later it turned out that Meta had accidentally provided researchers with bad data, effectively ruining the research in progress.)

Four papers sought to understand how the Facebook news feed affected users’ experiences and beliefs

Despite those setbacks, Meta and researchers have continued to explore new ways of working together. On Thursday, the first research to come out of this work was published.

Three papers in Science and one in Nature sought to understand how the contents of the Facebook news feed affected users’ experiences and beliefs. The studies analyzed data on Facebook users in the United States from September to December 2020, covering the period during and immediately after the US presidential election.

Kai Kupferschmidt summarized the findings in an accompanying piece for Science:

In one experiment, the researchers prevented Facebook users from seeing any “reshared” posts; in another, they displayed Instagram and Facebook feeds to users in reverse chronological order, instead of in an order curated by Meta’s algorithm. Both studies were published in Science. In a third study, published in Nature, the team reduced by one-third the number of posts Facebook users saw from “like-minded” sources—that is, people who share their political leanings.

In each of the experiments, the tweaks did change the kind of content users saw: Removing reshared posts made people see far less political news and less news from untrustworthy sources, for instance, but more uncivil content. Replacing the algorithm with a chronological feed led to people seeing more untrustworthy content (because Meta’s algorithm downranks sources who repeatedly share misinformation), though it cut hateful and intolerant content almost in half. Users in the experiments also ended up spending much less time on the platforms than other users, suggesting they had become less compelling.

By themselves, the findings fail to confirm the arguments of Meta’s worst critics, who hold that the company’s products have played a leading role in the polarization of the United States, putting democracy at risk. Nor do they suggest that altering the feed in ways some lawmakers have called for — making it chronological rather than ranking posts according to other signals — would have a positive effect.

“Surveys during and at the end of the experiments showed these differences did not translate into measurable effects on users’ attitudes,” Kupferschmidt writes. “Participants didn’t differ from other users in how polarized their views were on issues like immigration, COVID-19 restrictions, or racial discrimination, for example, or in their knowledge about the elections, their trust in media and political institutions, or their belief in the legitimacy of the election. They also were no more or less likely to vote in the 2020 election.”

II.

Against this somewhat muddled backdrop, it’s no surprise that a fight has broken out around which conclusions we should draw from the studies.

Meta, for its part, has suggested that the findings show that social networks have only a limited effect on politics.

“Although questions about social media’s impact on key political attitudes, beliefs, and behaviors are not fully settled, the experimental findings add to a growing body of research showing there is little evidence that key features of Meta’s platforms alone cause harmful ‘affective’ polarization or have meaningful effects on these outcomes,” Nick Clegg, the company’s president of global affairs, wrote in a blog post. “They also challenge the now commonplace assertion that the ability to reshare content on social media drives polarization.”

But behind the scenes, as Jeff Horwitz reports at The Wall Street Journal, Meta and the social scientists have been fighting over whether that’s true. 

The leaders of the academic team, New York University professor Joshua Tucker and University of Texas at Austin professor Talia Stroud, said that while the studies demonstrated that the simple algorithm tweaks didn’t make test subjects less polarized, the papers contained caveats and potential explanations for why such limited alterations, conducted in the final months of the 2020 election, wouldn’t have changed users’ overall outlook on politics.

“The conclusions of these papers don’t support all of those statements,” said Stroud. Clegg’s comment is “not the statement we would make.”

Science headlined its package on the studies “Wired to Split,” leading to this amazing detail from Horwitz:  “Representatives of the publication said Meta and outside researchers had asked for a question mark to be added to the title to reflect uncertainty, but that the publication considers its presentation of the research to be fair.”

Meagan Phelan, who worked on the package for Science, wrote to Meta early this week saying that the package’s findings did not exonerate the social network, Horwitz reported. “The findings of the research suggest Meta algorithms are an important part of what is keeping people divided,” she wrote.

What to make of all this?

While researchers struggle to draw definitive conclusions, a few things seem evident.

Facebook represents only one facet of the broader media ecosystem

One, as limited as these studies may seem in scope, they represent some of the most significant efforts to date by a platform to share data of this kind with outside researchers. And despite valid concerns from many of the researchers involved, in the end Meta did grant them most of the independence they were seeking. That’s according to an accompanying report from Michael W. Wagner, a professor of mass communications at the University of Wisconsin-Madison, who served as an independent observer of the studies. Wagner identified flaws in the process — more on those in a minute — but for the most part he found that Meta lived up to its promises.

Two, the findings are consistent with the idea that Facebook represents only one facet of the broader media ecosystem, and most people’s beliefs are informed by a variety of sources. Facebook might have removed “stop the steal”-related content in 2020, for example, but election lies still ran rampant on Fox News, Newsmax, and other sources popular with conservatives. The rot in our democracy runs much deeper than what you find on Facebook; as I’ve said here before, you can’t solve fascism at the level of tech policy.

At the same time, it seems clear that the design of Facebook does influence what people see, and may shift their beliefs over time. These studies cover a relatively short period — during which, I would note, the company had enacted “break the glass” measures designed to show people higher-quality news — and even still there was cause for concern. (In the Journal’s story, Phelan observed that “compared to liberals, politically conservative users were far more siloed in their news sources, driven in part by algorithmic processes, and especially apparent on Facebook’s Pages and Groups.”)

Perhaps most importantly, these studies don’t seek to measure how Facebook and other social networks have reshaped our politics more generally. It’s inarguable that politicians campaign and govern differently now than they did before they could use Facebook and other networks to broadcast their views to the masses. Social media changes how news gets written, how headlines are crafted, how news gets distributed, and how we discuss it. It’s possible that the most profound effects of social networks on democracy lie somewhere in this mix of factors — and the studies released today only really gesture at them.

III.

The good news is that more research is on the way. The four studies released today will be followed by 12 more covering the same time period. Perhaps, taken together, they will let us draw stronger conclusions than we can right now.

I want to end, though, on two criticisms of the research as it has unfolded so far. Both come from Wagner, who spent more than 500 hours observing the project over more than 350 meetings with researchers. One problem with this sort of collaboration between academia and industry, he wrote, is that scientists must first know what to ask Meta for — and often they don’t.

“Independence by permission is not independent at all.”

“One shortcoming of industry–academy collaboration research models more generally, which are reflected in these studies, is that they do not deeply engage with how complicated the data architecture and programming code are at corporations such as Meta,” he wrote. “Simply put, researchers don’t know what they don’t know, and the incentives are not clear for industry partners to reveal everything they know about their platforms.”

The other key shortcoming, he wrote, is that ultimately this research was done on Meta’s terms, rather than the scientists’. There are some good reasons for this — Facebook users have a right to privacy, and regulators will punish the company mightily if that privacy is violated — but the trade-offs are real.

“In the end, independence by permission is not independent at all,” Wagner concludes. “Rather, it is a sign of things to come in the academy: incredible data and research opportunities offered to a select few researchers at the expense of true independence. Scholarship is not wholly independent when the data are held by for-profit corporations, nor is it independent when those same corporations can limit the nature of what is studied.”