The use of such analysis is welcomed by those monitoring disinformation and tech policy. “New online safety regulators and independent auditors should be looking at deploying tech such as TrollMagnifier, to assess existing safety systems, thereby making social media more accountable for online harms,” says Max Beverton-Palmer, director of the internet policy unit at the Tony Blair Institute for Global Change. A Reddit spokesperson says their policies prohibit content manipulation, which covers coordinated disinformation campaigns as well as any content presented to mislead or falsely attributed to an individual or entity. “We have dedicated teams that detect and prevent this behavior on our platform using both automated tooling and human review,” the spokesperson says. “As a result of our teams’ efforts, we remove 99 percent of policy-breaking content before a user sees it.”
But Higgins and another researcher, Yevgeniy Golovchenko of the University of Copenhagen, who studies disinformation, are circumspect about the replicability of the academics' troll-hunting approach. Some organic behavior can appear troll-like, Higgins says, pointing to errors in earlier, more basic academic research that was less able to accurately distinguish between inauthentic and authentic behavior. "I would be interested in diving into the data that's being produced from this to see how much of it is just communities who are interacting with each other versus actual state-sanctioned trolls," he says. Golovchenko is concerned about the results themselves. "It's a very interesting topic, and the paper is ambitious, but I'm not entirely sure how to evaluate the accuracy of the tool the authors present," he says. For one thing, the tool is trained on accounts that have already been discovered—that is, the worst-designed ones, which may represent only the tip of the iceberg of state-sponsored disinformation capabilities. "These accounts are made to be undetected," says Golovchenko. "Studies like this will always give us the bare minimum—by design, because we're talking about state actors that spend resources to stay hidden."
Others are more welcoming of the paper and its findings. "The proof of any tool is in its application, and, judging by the results here, these researchers have developed a clever way of scaling up the identification of accounts engaged in coordinated troll activity," says Ciaran O'Connor, an analyst at the Institute for Strategic Dialogue, who monitors disinformation and extremism online. O'Connor does, however, point out that it's difficult to do such tracking without a seed list of known accounts whose echoes can be traced—something possible on Reddit, which is open about releasing data to help researchers. "Transparency from social media platforms is an ongoing challenge, and we would also argue that more data is always the answer to help us, and subsequently help platforms help themselves, to understand and tackle emerging tactics, tools, and narratives favored by bad actors on social media," he says.
That transparency has helped researchers spot troll-like behavior—and is an act of beneficence the researchers hope they can pay back to Reddit. "I think this kind of technique is definitely going to help social network companies," says Stringhini. He points out that while platforms have more indicators to look at that could provide hints about a troll user's real background, such as IP addresses and browser fingerprints, examining the pattern of content posting could help them identify inauthentic users more accurately.
Finding those inauthentic users could still prove tricky, though, given the mundanities of Reddit. Bootinbull went silent on the platform on December 3, 2015, 50 days after first posting, the mission to stir hearts and minds seemingly unsuccessful—or concluded. Their farewell post? A reply to the setup of a lengthy joke in r/jokes that began with a woman asking a man, "Do you drink beer?" Bootinbull blundered in with the answer: "Just beer :)".