AIF Blog

Hacking Hate and Extremism

Aug 16, 2017
CATEGORY: Technology, U.S.A., World

 

Haris Tarin, Wajahat Ali, Steve Clemons, and Parisa Zagat speak at the Aspen Ideas Festival in June 2017.

 

Hate groups in America are proliferating, and they're using social media to spread their messages and ideologies. On Saturday, hundreds of white nationalists gathered in Charlottesville, Virginia, to march in protest of the removal of a Robert E. Lee statue. How has technology fueled the white supremacist movement? And do tech companies have a moral obligation to combat the spread of online hate?

 

A recent study shows that in the past two years, hate groups like the KKK, whose members marched in Charlottesville, have increasingly embraced social media sites like Facebook and Twitter to build a following and promote their toxic agendas. In 2015, the average number of “likes” on tweets and comments produced by hate groups jumped from less than one to almost eight.

 

The tech industry is caught in the crossfire between free speech and hate speech. Some are calling on Google and Facebook to tighten their protocols for regulating hate speech, while libertarians push back against what they see as the suppression of free speech.

 

Steve Clemons, editor at large for The Atlantic, sat down at the Aspen Ideas Festival in June to discuss solutions to online extremism with New York Times op-ed contributor Wajahat Ali; Parisa Zagat, head of Facebook’s global counter-speech program; and Haris Tarin, senior policy advisor at the US Department of Homeland Security.

 

“Social media is a tool,” Ali says. “And it’s a tool that is as useful, as beneficial, as detrimental as the person who uses it.”

 

Toxic narratives are not new. The US has a long history of white supremacy, hate, and extremism. Social media has only amplified these groups’ agendas and made it easier for them to reach like-minded individuals across a far wider geography. Their strategies allow them to filter out what Ali calls the “flabby majority” and focus on a small number of highly invested individuals willing to commit themselves to a group’s ideology.

 

Social media platforms have tried a wide range of methods to control and police these organizations. Between 2015 and 2016, Twitter suspended 360,000 accounts that promoted a terrorist agenda. Similarly, Facebook increasingly removes prohibited hate speech reported by users. But for extremists, closing one door only opens another.

 

Facebook’s Parisa Zagat manages the social media site's global programmatic efforts on counter-speech. She believes the solution lies in drowning out extremist narratives with alternative ones. With the help of NGOs, Zagat and her team are training community leaders in vulnerable areas around the world to use Facebook effectively to promote positive counter-speech. Facebook is also engaged in an ongoing effort to define hate speech and to develop effective ways to enforce its rules.

 

And while the problem of online extremism is global, Haris Tarin stresses the importance of combating it at home.

 

“If we don’t get it right domestically, we risk marginalizing the very communities that we need on our side,” Tarin explains. “Our focus needs to be on empowering and engaging these communities and ensuring we trust each other.”

 

Zagat commends Facebook's efforts to restrict white supremacist figures' presence on the platform, whether they are promoting white nationalist propaganda or sharing posts about a childhood cat. Localized efforts have also played a large role in suppressing the white supremacist agenda.

 

Despite social media companies' efforts to combat online extremism, there is still a long way to go before hate groups are denied the online means to build momentum for rallies like the one we saw in Charlottesville.

 

Watch the full session here:


By Eliza Costas, Editorial Assistant, Aspen Ideas Festival