Why Social Media’s Misinformation Problem Will Never Be Fixed

Facebook and others have gotten more serious about hoaxes, hate speech, propaganda, and foreign election interference. Here’s how those efforts helped in the midterms—and why the problems aren’t going away.

On Facebook, right-wing hoaxes about the migrant caravan, racist dog whistles about a black gubernatorial candidate, and “false flag” conspiracy theories about the pipe bomber spread like wildfire. On Twitter, videos falsely claiming to show rigged voting machines in Ohio racked up thousands of retweets. Twitter appears to have played a role in radicalizing the mail bomb suspect; Gab fueled the hate of the synagogue mass shooter. Memes hatched on Reddit fledged into GOP campaign slogans. On Election Day, the most-shared news stories on Facebook sprang from outlets that were either explicitly partisan (Daily Caller, Ben Shapiro), unjournalistic (Unilad), or both (Bernie Sanders, Rush Limbaugh).

At first grimace, the role of social media in the 2018 U.S. midterm elections looked a lot like the role it played in 2016, when the hijacking of tech platforms by foreign agents and domestic opportunists became one of the major subplots of Donald Trump’s victory and sparked a series of high-profile congressional inquiries. Despite all the backlash, all the scrutiny, and all the promises made by the likes of Facebook CEO Mark Zuckerberg and Twitter CEO Jack Dorsey to do better, the boogeymen that reared their heads then are still snarling today.

That’s dispiriting, because the tech companies had two years to prepare, and untold resources at their disposal. Facebook even had a well-staffed election “war room” tasked with finding and addressing the very kinds of hoaxes that continued to crop up throughout the election cycle. If they haven’t fixed things by now, well: When will they?

The answer is probably “never.” And yet the situation is not quite as hopeless as that makes it sound, because the big internet platforms have made demonstrable progress on nearly every front. Consider:

• Disclosures by Facebook make clear that Russian agents are still using the social network to try to meddle in U.S. elections, but Facebook and U.S. authorities are learning how to catch them. On election eve, the company blocked more than 100 Facebook and Instagram accounts, acting on a tip from law enforcement. Facebook said Tuesday night that it was responding to concerns of a link to Russia’s pesky Internet Research Agency. And earlier this year, both Facebook and Twitter introduced advertising transparency measures designed to prevent foreign actors from buying U.S. political ads.

• Evidence suggests that while misinformation has continued to flourish on social media, Facebook’s efforts have made a significant dent in its prevalence. Three independent analyses published in recent months found that content from dubious news sources has sharply declined on Facebook since mid-2017. (Twitter, which has not explicitly targeted misinformation, has seen the opposite effect.)

• Anecdotes indicate that while bots and trolls continue to permeate Twitter, the platform has made it harder to create and maintain them. NBC News’ Ben Collins, who gained entrance to a private “strategy chat” among far-right trolls, found them complaining that Twitter’s new policies had thwarted some of their efforts to create fake accounts and execute coordinated disinformation campaigns. In September and October, Twitter took down some 10,000 accounts that were engaged in voter suppression, many of them posing as Democrats and dispensing disinformation to members of that party.

• While hate speech and harassment remain common on numerous social networks, from Twitter to YouTube to Reddit, virtually every online platform has begun to crack down on the vilest of them, if only when compelled by public pressure. Milo Yiannopoulos and the neo-Nazi Daily Stormer were early test cases; the deplatforming of Infowars’ Alex Jones by everyone from Facebook to Spotify to Apple Podcasts marked an inflection point. When it emerged that Gab had helped incubate the Pittsburgh synagogue shooter, major tech companies refused to work with the site at all, temporarily taking it down.

In other words, when you look at the broader picture, social media in 2018 starts to look significantly different than it did in 2016—and the comparison is mostly favorable.

That’s not to say Facebook and its ilk deserve three cheers and a round of drinks for securing their platforms and saving democracy. They’re guilty of constructing platforms whose very structure lends itself to exploitation by hoaxsters, manipulators, extremists, and propagandists. As the sociologist and writer Zeynep Tufekci has argued, it’s precisely those characteristics that make social networks such effective vehicles for advertising. My Slate colleague April Glaser made a similar point in the context of Georgia gubernatorial candidate Brian Kemp’s viral smear of his opponent, Stacey Abrams, tying her to the New Black Panther Party.

That should help to explain why even the earnest and expensive efforts that Facebook and others have undertaken since 2016 have succeeded only in narrowing the flood of lies and incitements, not stemming it. A network that encourages people to spontaneously broadcast anything that strikes their fancy, and that amplifies those messages based on their propensity to spark gut reactions in others, can’t fully root out sensationalism or misinformation without undermining its own core business. Even if it could, the bulk of the damage is often done by the time human content moderators take action.

That said, the companies could be doing much better. If a small team at the New York Times, headed by tech reporter Kevin Roose and aided by the paper’s readers, could turn up example after example of viral posts that appeared to blatantly violate Facebook’s policies, surely Facebook, with its ungodly fortune, could build a better war room. And it must. Social networks didn’t invent misinformation, obviously, but they’ve fanned it to unacceptable levels with their engagement-based algorithms and their assault on traditional media norms and business models. It took sustained pressure from Congress, the media, and users to get them to acknowledge the problem, and it will take more of the same to hold them responsible for it in the years and elections to come.

At this point, however, even Facebook admits that the problem isn’t going away. “It’s possible to pick out specific examples of things we miss,” a company official said in a statement, responding to a question about the posts flagged by Roose and others. “But what we’re particularly interested in is the overall amount of misinformation on Facebook, and whether that’s trending down.” Elsewhere, Zuckerberg has compared the company’s efforts to stop election meddling to “an arms race,” which is hardly reassuring.

We may not yet know the full extent to which online misinformation and meddling marred these midterms. Remember, it wasn’t until September 2017 that Facebook first disclosed that Russian operatives had bought U.S. political ads on its network during the previous election cycle. The heightened awareness of the problem this year is progress in itself.

Progress is not the same as a solution. Barring a seismic upheaval in the industry, however, progress is all we can hope for from Silicon Valley when it comes to stemming a tide of propaganda that shows no sign of abating on its own.

Listen to this week’s episode of Slate’s If Then podcast for a tech reporters’ roundtable on misinformation in the 2018 midterms.