
How The Internet Can Make Hate Seem Normal — And Why That’s So Dangerous


As America comes to grips with two more violent, homegrown plots — an attempt to mail pipe bombs to prominent Democrats and a mass shooting at a Pittsburgh synagogue — reality and surreality may seem hard to disentangle. Experts are working to figure out exactly what happened in each case and why, on levels ranging from the societal to the forensic. But it appears that the two suspects shared at least one habit: engaging with extreme content online.

Robert Bowers, the suspect in the Pittsburgh shooting, posted a message on a niche social network known to be used by white supremacists shortly before opening fire at the Tree of Life synagogue. Cesar Sayoc, the Florida man charged with sending explosive material to political figures, left a trail of conspiracy theories and right-wing sensationalism on Facebook. While their use of technology may help reveal their motives, it also speaks to bigger problems that researchers are racing to better understand. Chief among them is the way that the Internet can make irrational viewpoints seem commonplace.

“A lot of our behavior is driven by what we think other people do and what other people find acceptable,” says Nour Kteily, an associate professor at Northwestern’s Kellogg School of Management who studies dehumanization and hostility. And there’s a good chance that even those who avoid the dark corners of the web are encountering extreme ideas about what is right and who is wrong. A Facebook spokesperson says the company took action on 2.5 million pieces of content classified as hate speech in the first quarter of 2018.

There have always been people who espouse vitriol. “But the emergence of these online platforms has reshaped the conversation,” Kteily says. “They in many ways amplify the danger of things like dehumanizing speech or hate speech.” Marginal ideas can now spread faster and further, creating an impression that they are less marginal and more mainstream.

Big technology companies are acknowledging dangers that researchers have already documented: encountering hateful speech can skew attitudes. In one 2015 study, people who were exposed to homophobic epithets tended to rate gay people as less human and to physically distance themselves from a gay man in subsequent tasks. And researchers have long warned that dehumanizing people is a tactic that goes hand-in-hand with oppressing them, because it helps create mental distance between groups.

“We are permitted to treat non-human animals in ways that are impermissible in the treatment of human beings,” David Livingstone Smith, a professor of philosophy at the University of New England, explained in a previous interview with TIME. Such language can help “disable inhibitions against acts of harm,” he said.

One question raised by the Pittsburgh shooting is what happens when extremists are shut out of mainstream social networks, as companies like Facebook and Twitter take a harder line on these issues. Facebook has been hiring content moderators and subject matter experts at a rapid clip, hoping to do a better job of proactively finding hate speech and identifying extremist organizations. Twitter continues to develop a more stringent policy on what constitutes dehumanizing speech that violates its terms. “Language that makes someone less than human can have repercussions off the service, including normalizing serious violence,” Twitter employees wrote in a post announcing proposed policy language.

Gab, a social media site on which Bowers wrote anti-Semitic posts, disavowed all acts of violence and terrorism in statements to TIME and other publications in the aftermath of the shooting. But the site has become a haven for white supremacists and other extremists, given its promise of letting people espouse ideas that might get them banned elsewhere, says Joan Donovan, an expert in media manipulation at research institute Data & Society. “What that does is create a user population on Gab of people who are highly tolerant of those views,” she says. That, in turn, might make things like rantings about Jewish conspiracies seem more widespread than they would on a platform where poisonous posts are surrounded — and perhaps diluted — by billions of rational ones.

Bowers’ final post before the shooting read, in part, “Screw your optics, I’m going in.” The term “optics,” Donovan says, likely refers to tactics discussed among white supremacists, specifically the idea that the movement will be more successful if its members are perceived as non-violent victims of “anti-white” thought police. Among the figures the movement portrays as its own oppressors, she says, are big technology companies. “[W]e are in a war to speak freely on the internet,” a Gab-associated account wrote on Medium, before that company suspended it in the wake of the shooting. The post accused Silicon Valley companies of “purg[ing] any ideology that does not conform to their own echo chamber bubble world.” Such sites, to which the alt-right flocks, have been described as “alt tech.”

Donovan says that these niche platforms are places “where many harassment campaigns are organized, where lots of conspiracy talk is organized.” Racist and sexist memes that might get an account suspended on other platforms are easy to find. “The problem is when you’re highly tolerant of those kinds of things,” Donovan explains, “other more sane and more normal people don’t stay.”

Though social networks might seem well-established at this point, more than a decade after Facebook was founded, academics are lagging behind when it comes to understanding all the effects these evolving platforms might be having on users’ behavior and well-being. Experts interviewed for this article were not aware of research that investigates, on an individual level, the possible link between posting extreme or hateful content online and the likelihood of being aggressive offline. Posting can serve as “a public commitment device,” Kteily says. But that’s far from a causal link.

Newer research is attempting, at least in the aggregate, to better understand the relationship between activity on social networks and violence in the offline world. Carlo Schwarz and Karsten Müller, researchers associated with the University of Warwick and Princeton University, respectively, analyzed every anti-refugee attack that had occurred in Germany over a two-year period — more than 3,000 instances — and looked at variables ranging from the wealth of each community to the number of refugees living there. One factor that cropped up across the country was that attacks tended to occur in towns with heavier Facebook usage, a platform where users encounter anti-refugee sentiment.

The study’s methodology has come under some criticism, and Schwarz emphasizes that the findings need to be replicated before universal conclusions are drawn, especially because isolated Internet outages across Germany helped provide special circumstances for their study. (When access to the Internet went down in localities with high amounts of Facebook usage, they found that attacks on refugees dropped too.) But what their research suggests, Schwarz says, is that there is a sub-group of people “who seem to be pushed toward violent acts by the exposure to online hate speech.” The echo chamber effect of social networks may be part of the problem. When people are exposed to the same targeted criticisms over and over, he says, it may change their perception about “how acceptable it is to commit acts of violence against minority groups.”
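To make the town-level pattern concrete, here is a minimal sketch using synthetic, invented numbers rather than the researchers’ data or code; it illustrates only the kind of association described above, not their actual methodology.

```python
# Illustrative sketch with made-up figures -- NOT the Warwick/Princeton
# dataset. It shows the kind of town-level association the study reports:
# higher Facebook usage co-occurring with more anti-refugee attacks.
from statistics import correlation  # available in Python 3.10+

# Hypothetical per-town pairs: (Facebook usage index, recorded attacks)
towns = [
    (0.2, 1), (0.4, 2), (0.5, 2), (0.7, 4), (0.9, 6), (1.0, 7),
]
usage = [u for u, _ in towns]
attacks = [a for _, a in towns]

# A positive correlation like this is only an association; the researchers
# leaned on local internet outages as a natural experiment to probe causation.
print(f"correlation: {correlation(usage, attacks):.2f}")
```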

Facebook, Twitter and Google are dedicating resources to the problem, yet there are many challenges: as algorithms are designed to pick up certain red-flag words, extremist groups adopt coded language to spread the same old ideas; content moderators need to understand myriad languages and cultures; and the sheer volume of posts on Facebook alone, which number in the billions each day, is overwhelming. The company says that it finds 38% of hate speech before it’s reported, a smaller proportion than for terror propaganda and nudity. The company expects that number to improve, a spokesperson says, also acknowledging the difficulty of tackling content that tends to be context-dependent.
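As a rough illustration of that first challenge, the sketch below shows why a simple keyword filter struggles with coded language. The word lists and names (RED_FLAG_TERMS, flags_post) are hypothetical placeholders and do not describe any platform’s actual moderation system.

```python
# Toy example only: a naive keyword filter, not any company's real system.
RED_FLAG_TERMS = {"redflagword1", "redflagword2"}  # hypothetical blocklist

def flags_post(text: str) -> bool:
    """Return True if the post contains any known red-flag term."""
    tokens = {token.strip(".,!?").lower() for token in text.split()}
    return not RED_FLAG_TERMS.isdisjoint(tokens)

# A post using a listed term is caught...
print(flags_post("a post containing redflagword1"))      # True
# ...but the same idea expressed in coded language slips through,
# which is why moderators also need linguistic and cultural context.
print(flags_post("a post containing a coded substitute"))  # False
```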

And while major tech companies may feel that getting a handle on this problem is a business imperative — a Twitter spokesperson says that maintaining healthy conversation is a “top priority” — current law largely shields platforms from responsibility for the content they host. That means that while some social networks may get serious about tackling extremist speech, there is no legal mandate for all platforms to follow suit. That is one reason, in the wake of these latest plots, that some lawmakers are renewing calls for tighter regulation of social media.

In the meantime, academics will keep trying to provide research that helps companies make decisions based on data rather than good intentions. “Research is obviously slow,” says Schwarz, who is now investigating whether there is a connection between Twitter usage and offline violence in the U.S. “It’s still a new field.”

