
Facebook’s Revamp Includes an Effort to Fight Fake News

Will it work?

Facebook is shifting tactics in the war on fake news. A few weeks ago, in the quiet lead-up to the major revamp of its news feed announced Thursday, the company made another tweak to what users see: It said it would no longer mark bogus headlines with a red-flag warning, as had been its practice since the end of 2016. Previously, these “Disputed” tags showed up beneath any story that had been rated false by at least two independent, fact-checking organizations. Now those tags have been replaced by something less intrusive—one or more “Related Articles,” supplied by fact-checkers, that offer context for (and perhaps debunking of) the headline’s claims.


The new system should have some clear advantages: First, it gives the checkers greater flexibility and room to challenge stories that are not entirely, 100 percent made up; second, it will speed things up, because Facebook will no longer require two assessments before it starts to show corrective facts; third, it reduces the number of clicks or taps required before a user sees specific fact-check information. But taken as a whole, the change is somewhat mystifying and maybe even ill-advised. Its design appears to be based (at least in part) on the science of post-truth—and on the flashy but fishy notion that debunking myths only makes them stronger.

This idea, that addressing lies with facts may backfire, has been widely shared in both the media and social science literature over the past 10 years. Now it’s cited by the team at Facebook in explaining its approach to fake news: “Academic research on correcting misinformation has shown that putting a strong image, like a red flag, next to an article may actually entrench deeply held beliefs,” wrote product manager Tessa Lyons on Dec. 20, 2017. A concurrent Medium post, from three more Facebookers on the project, also mentioned this research. And Facebook’s CEO and founder, Mark Zuckerberg, alluded to the backfire effect in his manifesto on building an “informed community” from last February: “Research shows that some of the most obvious ideas,” he wrote, “like showing people an article from the opposite perspective, actually deepen polarization.”

The problem is, the backfire effect that so worries Facebook may not exist at all. The Medium post described above links to a review of debunking research from 2012, published in Psychological Science in the Public Interest, which indeed contains a section, titled “Making Things Worse,” on the risk of backfire. But more recent efforts to study this phenomenon more carefully—large-scale, preregistered studies using thousands of participants—have turned up little evidence in its favor. That’s not to say that backfires never, ever happen. It’s possible that a red-flag warning on Facebook could end up entrenching false beliefs for certain users under certain circumstances. But, according to the latest science (which I reviewed in detail for Slate last week), this danger has been greatly overstated.

In fact, if we’re going by the academic research literature, there’s good reason to believe that Facebook’s abandoned red-flag warnings were somewhat useful and effective. Last May, Dartmouth’s Brendan Nyhan and his students conducted a preregistered study of news feed warnings on a sample of about 3,000 adults. The researchers showed each participant half a dozen fake-news headlines—e.g., “Trump Plagiarized the Bee Movie for Inaugural Speech”—sometimes adding a red flag of just the kind that Facebook had been using to indicate the story was disputed by independent fact-checkers. Then they asked their subjects to rate these headlines on a four-point scale from “Not at all accurate” to “Very accurate.” Nyhan and his students found a clear effect: In the absence of a warning flag, 29 percent of their subjects said the bogus headlines were either “somewhat accurate” or “very accurate,” but when the flag was shown, that proportion of fake-news believers dropped by about one-third, to 19 percent.
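The “dropped by about one-third” figure can be checked with a line of arithmetic. (The inputs are the article’s rounded percentages, so the result is approximate.)

```python
# Nyhan study figures as reported: share of subjects rating fake
# headlines "somewhat" or "very" accurate, with and without a flag.
without_flag = 0.29
with_flag = 0.19

# Relative decline in fake-news believers once the flag appears.
relative_drop = (without_flag - with_flag) / without_flag
print(f"{relative_drop:.0%}")  # prints "34%", i.e. about one-third
```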


Another very similar study—from Yale’s David Rand and Gordon Pennycook—was posted in September 2017. In that one, researchers showed real- and fake-news headlines, with or without red-flag warnings, to more than 5,000 participants. Like Nyhan and his students, Rand and Pennycook found the warnings worked, at least a bit: Subjects described the tagged headlines as being slightly less accurate, on average, than the ones that did not have a warning. (The warnings might have been even more effective, Pennycook told me in an interview this week, if Facebook had put them just above the fake-news headlines instead of just below them.)

Rand and Pennycook’s research did have one crucial caveat. According to their study, the presence of a warning flag on one fake-news story could make other, untagged stories seem more accurate. They referred to this as an “implied truth effect.” For certain subjects—especially young adults or those who supported Donald Trump—the absence of a fact-check tag ended up seeming like a badge of credibility; it made the untagged stories more believable. (Nyhan’s team failed to find the same effect, but when the data from the two papers are combined, it appears to be there.) Given the disturbing scope of Facebook’s fake news problem, it’s hard to see how human-powered fact-checks could ever tag more than a fraction of the phony headlines on the site. And if the implied truth effect applied to all the others, the end result would be catastrophic.

But this concern must be squared with the results from Rand and Pennycook’s earlier study of the Facebook warnings, first posted last April. That one, which had a somewhat different design and about 1,000 subjects, confirmed the basic finding that red flags in the news feed lower people’s belief in fake-news headlines. It also showed that those warnings made subjects more skeptical overall—rather than more credulous—when it came to other, untagged headlines.

In a brief interview on Tuesday, I asked the members of the Facebook team whether and how their work was influenced by the academic research literature. User-experience researcher Grace Jackson said that the backfire effect is “something that we wanted to be aware of based on the academic literature,” but that “in our own research, it actually only happens extremely rarely.” She did not address the Rand and Pennycook papers but did mention that the team had been inspired by a 2015 paper from Leticia Bode and Emily Vraga, which, said Jackson, found that corrective information “worked really well” when presented in a format similar to the one that Facebook just rolled out.


For that study, Bode and Vraga showed students postings from a mocked-up Facebook news feed, including a bogus story claiming that genetically modified foods will make you sick. Students in one experimental condition saw a pair of “Related Links” below that item, showing refutations of the claim from Snopes.com and the American Medical Association. Bode and Vraga found that among the people who came into the study with the belief that GMOs are harmful, the related links helped to change their minds.

Facebook’s new approach, in which fact-check information gets presented as “Related Articles,” closely mirrors Bode and Vraga’s experimental treatment. Yet the 2015 paper is, in fact, equivocal in its results. In addition to the claims about the health effects of GMOs, Bode and Vraga also looked at fake-news headlines on a supposed link between vaccines and autism. In this latter case, they found no effect from their corrections. The “Related Links” from Snopes.com and the American Medical Association did not change believers’ attitudes.

Bode says it’s not clear why their treatment didn’t work for the myth about vaccines. It could be that the false belief is more established, so it’s harder to uproot. Or it could be that the anti-vaxxer myth is more politicized and thus more susceptible to motivated reasoning. The same ambiguity applies to Facebook’s efforts. Are the most dangerous fake-news stories in people’s news feeds like the ones about GMOs—and thus perhaps amenable to this format of debunking? Or are they like the ones about vaccines, where “Related Articles” might have no effect?

It’s also hard to know how much confidence one should have in extrapolating from the Bode and Vraga findings. Their study had about 500 subjects, but these were split across eight experimental conditions. And their positive result concerned just the subset of participants—about half, overall—who believed that GMOs will make you sick. That means they looked at subgroups of roughly 30 people per condition. For the vaccines question, these sample sizes were smaller still. That’s not to say that Bode and Vraga got things wrong—only that their findings were preliminary and constrained by opportunity and cost. (In subsequent work, they’ve found similar results on the topic of GMOs.) If Facebook cared to know whether “Related Articles” really work to counter lies, the company could check the numbers for itself. Instead of citing modest research done by academics, its employees could, in theory, run something like the same experiment on 1 million Facebook users, or 10 million, or 100 million. Do “Related Articles” change behavior? Do red-flag warnings backfire? Is there an implied truth effect when not every article is tagged? If anyone will ever know the answer to those questions, it’s the team at Facebook.
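The sample-size concern is easy to make concrete. Assuming an even split of subjects across conditions (the paper’s exact cell counts may differ slightly), the per-condition subgroups work out as follows:

```python
# Rough arithmetic behind the Bode and Vraga sample-size caveat.
# Assumes subjects were divided evenly across conditions, which the
# actual study may not have matched exactly.
total_subjects = 500    # approximate overall sample
conditions = 8          # experimental conditions
believer_share = 0.5    # roughly half thought GMOs are harmful

per_condition = total_subjects / conditions               # 62.5
believers_per_condition = per_condition * believer_share  # 31.25

print(round(per_condition))            # about 62 subjects per condition
print(round(believers_per_condition))  # about 31 believers per condition
```

Groups of 30 or so are small enough that a real effect can easily be missed, or a spurious one found, which is why the article treats these results as preliminary.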


In fact, they may already have those answers. In our conversation on Tuesday, I asked what they’ve found from their own analyses. Tessa Lyons, the product manager, mentioned only one result: When the team compared its new “Related Articles” format to the old red-flag warnings, they found that click-through rates remained the same, while shares declined. In other words, users were just as likely to read the fake-news articles that appeared in their feeds but less likely to repost them for their friends. How much less likely? Lyons wouldn’t say. What about comparisons to baseline? How effective was either of these formats at reducing fake-news spread when compared with giving no fact-check information whatsoever? Again, Lyons wouldn’t say.

It’s possible Facebook has mined its vast supply of internal data and optimized its fake news–fighting tactics accordingly. As Bode points out, the company is full of very smart people, including many social science Ph.D.s. But if that were really true, then why bother with a smoke screen of citations to a wobbly academic research literature? Why not just say, “Look, we’ve crunched the numbers for ourselves, and this approach works best,” without sharing proprietary details?

Here’s another thought: It could be that the change from red-flag warnings to “Related Articles” isn’t really that important anyway. According to Lyons, the most effective way to slow these stories’ spread is to bury them on news feeds, and Facebook already does that. Once a story has been tagged as “false” in either system, it gets demoted by the Facebook algorithm and becomes much less likely to appear to users—Lyons says this intervention is the main driver in reducing a fake story’s reach by 80 percent. Links to “Related Articles” from third-party fact-checkers only come into play in those instances when the fake-news story does pop up in spite of its demotion. In other words, even if the fact-check links were quite effective, their real-world impact would be marginal.

This all raises an unnerving question: Given that both the red-flag warnings and the “Related Articles” method likely offer little more than a limp, second-line defense against fake news, why bother with them at all? If Facebook can demote these stories in its users’ feeds, such that their spread is cut by 80 percent, then surely it has the power to eliminate them altogether. Indeed, the main takeaway from Thursday’s big announcement is that Facebook is adjusting its news feed to focus less on news overall—at least, less on news shared from publishers. (Individuals can still share whatever they want.) “We don’t want any ‘false news’ on Facebook,” said Lyons, using the company’s preferred name for the phenomenon.


The best way to accomplish this would be to pull stories from the site as soon as they’ve been identified as fakes. Instead of squeezing bogus headlines through tighter filters in the news feed algorithm, the site could just delete them. In practice, though, that would look a lot like censorship—a top-down decree about what’s true and what isn’t. (Lyons says items are removed this way only when they violate Facebook’s community standards.) So instead the company has staked out a middle ground, where fake news isn’t deleted; it’s disappeared.

That seems a little icky, too: If it isn’t censorship, then it’s certainly censorshipish. On the other hand, Facebook’s second-line approach—giving context for a bogus story, surrounding it with facts—has the benefit of seeming ethical and optimistic; it assumes that people care about the truth and that, all things being equal, they’ll tend to handle information in a responsible way. Of course, it may not work as well as disappearance at reducing shares and clicks. Of course, it may not work at all. But at least it sends a signal to the rest of us: Facebook wants to keep us as informed as possible.

Maybe this explains why Facebook is making hay of a subtle shift from red-flag warnings to “Related Articles.” Whether it was based on solid social science or a careful audit of internal numbers, the story hinges on the feel-good notion that Facebook will bury lies with wholesome facts—and in a way we all can see. Here’s the ugly truth behind that curtain of transparency: If the social network wins its war against fake news, it will be driven by the secret, brutal engines of its code.
