Twitter Is Escalating Its War on Trolls

Why isn’t it working?

Twitter has been suspending 1 million fake or suspicious accounts per day in recent months, the Washington Post reported last week. That’s more than double the rate at which it was suspending accounts in October, and it could dent the size of Twitter’s user base. The company’s stock took a hit after the publication of the Post story, which seemed to call into question Twitter’s prior assertion that less than 5 percent of its active accounts were fake, an apparently important assurance to its investors. The Post frames Twitter’s increasingly aggressive policing of its platform as part of a shift in philosophy for a company that once considered itself a bastion of free speech. “Free expression doesn’t really mean much if people don’t feel safe,” Twitter executive Del Harvey told the Post.

But it’s not as though Twitter just started zapping accounts yesterday. Even suspending 500,000 a day, as it did in 2017, is staggering. Twitter dropped its laissez-faire approach years ago and has been actively battling spam, abuse, harassment, and other terms of service violations ever since.

So the Post’s report raises the question: Is Twitter turning the tide on fake accounts with its latest campaign? Or is it fighting an unwinnable battle?

Actually, it might be both.

Twitter has long been accused of not doing enough to protect its users and keep its platform clean. The accusers included its own former CEO, Dick Costolo: “We suck at dealing with abuse and trolls on the platform, and we’ve sucked at it for years,” he wrote in a 2015 internal memo obtained by the Verge. The problem was so severe that it reportedly scared off suitors such as Disney and Salesforce when Twitter tried to sell itself in 2016.

The company’s business has bounced back since then, its relevance to politics, media, and activism bolstered by President Donald Trump’s heavy use of it. Yet despite its increased enforcement efforts, it’s not clear that Twitter is any safer or freer of bots and trolls today than it was before.

That Twitter could ban 1 million accounts per day for several months, with no end in sight, suggests that the obstacle is no longer just a lack of will or resources. Rather, the problem is structural: No matter how many accounts Twitter suspends, the people behind them can always create more, using automated tools that work faster than any human moderator could. In response, Twitter deploys automated systems of its own to identify and suspend fake accounts. It’s like the world’s largest game of whack-a-mole.
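Twitter hasn’t disclosed what its detection systems actually look for. But a toy version of the idea, with signals and thresholds invented purely for illustration, might weigh an account’s posting rate, its follower-to-following ratio, and how complete its profile is:

```python
from dataclasses import dataclass

@dataclass
class Account:
    handle: str
    tweets_per_hour: float  # sustained posting rate
    followers: int
    following: int
    has_profile_photo: bool

def bot_score(a: Account) -> float:
    """Crude additive score: higher means more bot-like. Weights are invented."""
    score = 0.0
    if a.tweets_per_hour > 20:  # few humans sustain this around the clock
        score += 0.4
    if a.following > 100 and a.followers < a.following / 100:
        score += 0.3  # follows thousands, followed by almost no one
    if not a.has_profile_photo:
        score += 0.3
    return score

suspect = Account("egg_account_42", 55.0, 3, 4_800, False)
if bot_score(suspect) >= 0.7:
    print(f"flag {suspect.handle} for suspension")
```

A real classifier would be far more sophisticated, but the principle is the same: rules like these run at machine speed, which is the only way to keep pace with account creation that also runs at machine speed.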

For what it’s worth, Twitter appears to be getting faster with the mallet. Its chief financial officer, Ned Segal, said in response to the Post article that the company now catches many fake accounts as soon as they sign up, and many others within their first 30 days. So the majority of the million accounts per day that it’s suspending were never counted among its active users.
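Segal didn’t say how those signup-time catches work. But one common, much-simplified approach is rate-limiting registrations per source address; the cap and window in this sketch are invented for illustration:

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 3600    # look back one hour
MAX_SIGNUPS_PER_IP = 5   # invented cap for illustration

recent = defaultdict(deque)  # ip -> timestamps of recent signups

def allow_signup(ip, now=None):
    """Refuse registration when one address creates accounts too fast."""
    now = time.time() if now is None else now
    q = recent[ip]
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()  # forget signups outside the window
    if len(q) >= MAX_SIGNUPS_PER_IP:
        return False  # smells like scripted registration
    q.append(now)
    return True

# Seven rapid-fire signups from one address: the sixth and seventh bounce.
print([allow_signup("203.0.113.9", t) for t in range(7)])
# [True, True, True, True, True, False, False]
```

Real systems look at far more than IP addresses, which are easy to rotate. The point is only that accounts blocked at the door never show up in the active-user count, which is why the suspension surge doesn’t map directly onto Twitter’s reported user numbers.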

It’s worth distinguishing here between the problem of fake accounts—like spam bots and sockpuppets designed solely to amplify certain kinds of tweets—and abuse or harassment by real human users. They’re often lumped together, but the human trolls are probably both fewer in number and more damaging to the overall site experience. They’re also harder to police: Twitter relies on other users to flag tweets that might violate its terms of service, then reviews them manually. It’s difficult, perhaps impossible, to automate the types of judgment calls needed to reliably identify hate speech or personal threats. Yet Twitter has taken a step in that direction by building software to hide tweets from accounts that it suspects of being trolls, based on their activity and interactions.
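Twitter hasn’t published how that troll-hiding software makes its calls, beyond saying it draws on behavioral signals. A back-of-the-envelope version, with signals and weights invented here, might blend how many distinct users have reported an account, how often it’s been blocked, and how much of its output is replies to strangers, then demote rather than suspend:

```python
def troll_score(distinct_reporters, blocks_received,
                replies_sent, replies_to_strangers):
    """Blend behavioral signals into a 0-to-1 score; weights are invented."""
    stranger_ratio = (replies_to_strangers / replies_sent) if replies_sent else 0.0
    report_signal = min(distinct_reporters / 10, 1.0)  # many different reporters
    block_signal = min(blocks_received / 25, 1.0)      # blocked by many accounts
    return 0.4 * report_signal + 0.3 * block_signal + 0.3 * stranger_ratio

def should_demote(score, threshold=0.6):
    """Hide the account's replies behind an extra click rather than suspend it."""
    return score >= threshold

# An account reported by 8 people, blocked 30 times, and mostly
# replying to strangers crosses the demotion threshold:
s = troll_score(distinct_reporters=8, blocks_received=30,
                replies_sent=200, replies_to_strangers=180)
print(round(s, 2), should_demote(s))  # 0.89 True
```

The design choice worth noticing is the soft penalty: because judgment calls about human behavior are error-prone, demoting a borderline account is far less costly than wrongly suspending it.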

All of which is to say, Twitter really does seem to be getting better at this. But that doesn’t mean the problems it’s fighting will ever go away. Twitter’s Harvey and Yoel Roth admitted as much in a June 26 blog post about new spam- and troll-fighting measures. “We will never be done with our efforts to identify and prevent attempts to manipulate conversations on our platform,” they wrote.

At its core, Twitter is just really hard to police. It’s both anonymous and public, a combination that seems to bring out the worst in internet users. Just try venturing into Reddit’s shadier corners or moderating a media website’s online comments section.

I’ve suggested in the past that if Twitter really wants to make a dent in its abuse problem, it has to deprecate anonymous accounts by default. But the company has understandably balked at that, partly because some types of anonymous users, like activists fighting authoritarian regimes, remain central to its identity as a platform. Whatever it does, the company certainly shouldn’t take the difficulty of the task as a reason to stop policing so aggressively, even though Wall Street is continuing to punish its stock.

Costolo was probably right when he wrote in 2015 that Twitter sucked at dealing with abuse and trolls. The company still has a long way to go, as anyone who has been suspended unjustly, or tried unsuccessfully to get a harasser suspended, can tell you.

But at some point—with Twitter devoting ever larger teams of humans to dealing with abuse, honing algorithms to quiet trolls, and suspending 1 million accounts a day—the company’s critics may be forced to acknowledge that the main problem is no longer a lack of effort or ingenuity on Twitter’s part. The main problem is Twitter’s fundamental structure. And when that time comes, we’ll have to either accept a certain level of sketchiness as the price of a global public square or demand that Twitter change its core workings and become more like Facebook, which requires most users to go by their real names. It’s not going to be an easy choice.