
The Nature of the Platform: Dealing with Extremist Voices in the Digital Age

August 5, 2016

“We shape our tools and thereafter our tools shape us.”
Marshall McLuhan


Defining extremist content online is no simple task. There is no clear-cut framework organizing this content into neat categories; instead, there is only a subjective line between content considered “disturbing” and content considered “violent.” Distinctions between “political” and “extremist” present similar challenges. Ultimately, these labels are inherently politicized.[1]

Protecting the liberal value of freedom of expression while simultaneously eradicating hate speech, and thus the incitement of violence, is a seemingly impossible feat, and one we have been grappling with for centuries. The current issues we face regarding extremist content online simply represent the newest space in which this delicate, yet dangerous, balancing act unfolds.

For Canada, these debates are comparatively easier to address from a legislative standpoint than in other countries. Our Criminal Code prohibits hate propaganda, empowering our government to censor this kind of content and prosecute the perpetrators.[2] We have largely settled the freedom of speech arguments that our American counterparts appear to perpetually wrestle with. However, laws do not always reflect the policies in action, and the issue of monitoring and eradicating extremist content is no exception.

As of now, extremist content is not typically addressed through government legislation and intervention.

Instead, industry terms of service are the tool of choice in responding to this type of propaganda, and for their part, social media outlets have been particularly active in this regard. Today, Twitter, Instagram, and Facebook ban users from depicting gratuitous violence on their websites, and reserve the right to “remove certain kinds of sensitive content or limit the audience that sees it.”[3] In the social media realm, a post ‘takedown’ is generally not the result of a government agency systematically scrubbing the Internet for extremist content; it is largely an exercise by social media providers self-regulating according to their own policies. This can be attributed not only to an altruistic aversion to hate speech by these companies, but also to self-interest: most social media platforms understand that hosting hate speech is not appreciated by the general population. For free speech advocates, however, the concern that follows from this, and indeed from any case where the ‘soapboxes’ control what can be said upon them, is where the line is drawn between curtailing radicalization and censoring debate.


Facebook, and other social media providers, are currently engaged in this balancing act. In fact, the company is being sued by the father of Nohemi Gonzalez, the only American killed in the November 2015 Paris attacks. Gonzalez’s family asserts that the company is responsible for its role in hosting and propagating ISIS content. The suit claims, “[Facebook’s] material support has been instrumental to the rise of ISIS and has enabled it to carry out numerous terrorist attacks, including the Nov. 13, 2015, attacks in Paris where more than 125 were killed.”[4] How much traction this litigation will gain in the courtroom is currently unknown, but it has already prompted external criticism of, and undoubtedly internal reflection on, Facebook’s policies towards user content.

On the other hand, Facebook is simultaneously facing criticism for being too active in its takedowns. The recent posting – then removal – then re-posting of the video of the shooting death of Philando Castile has led many to accuse Facebook of scrubbing the incident. The video, along with the account of Castile’s girlfriend, Diamond Reynolds, was ultimately restored the next day, after Facebook claimed the removal was the result of a technical glitch.

The ‘live-stream’ dimension of the Castile video, and of others, is also a point of contention. For example, the recent stabbing of a French couple by an ISIS sympathizer was accompanied by a live-streamed ‘call to action’ by the perpetrator from inside the victims’ home. His account was suspended, and the video removed, promptly thereafter.[5]

The purpose of the previous examples is neither to scrutinize Facebook, nor to compare the hosting of ISIS content with content depicting a possible police homicide. The purpose is to illustrate the fluid power these companies hold in moderating digital activity.

This is a power these companies have largely held without abuse, and they undoubtedly deserve credit for their attempts to manage this tightrope act between civil liberty and public safety. However, it is necessary to recognize and address the regulatory grey area of the status quo. The takedown of the Castile video may very well have been the result of a technical glitch, but if it was the result of Facebook’s team intervening, the company was entirely empowered to do so by its own policies. It is a troubling truth that a video of a black man shot by the police and the beheading of a journalist by ISIS could both be subject to “review” and subsequently taken down by Facebook for depicting graphic content, or, alternatively, allowed to remain on the site given the content’s “social awareness and newsworthiness.”[6] Facebook and other social media sites make exceptions for certain content on this basis, despite its graphic nature.[7] This caveat allows for selective intervention in these types of cases.

When additional sections of the terms of service agreements pertaining to supporting criminal and/or terrorist groups (as well as to hate speech and counter-speech) are considered in evaluating an individual image or video, the issue becomes even more complicated. The necessity and justification for removal are clear-cut when a piece of content is shared with overt messages calling for terrorist attacks or inciting general violence. However, an image of a terrorist attack can be shared with messages of either condemnation or support, which is far more difficult to differentiate and ultimately rests outside an algorithmic model’s capabilities. Users who promote and disseminate extremist content are often aware of this, and are thus more sophisticated in their approach, avoiding posts that call for violence in an obvious manner. Their more subversive messaging blurs the ethics of removing such content, and the standards for doing so, even further.

The issue of prohibiting extremist content is a question of practicality, as much as it is a question of ethics.

Countering the onslaught of ISIS recruitment online is an increasingly insurmountable task. While the rate and quality of ISIS online propaganda have decreased, it is impossible to eradicate entirely. The challenges of moderating online content – extremist, ISIS-related or otherwise – are significant. In the United States, the principles enshrined in the First Amendment can be interpreted as oppositional to moderating online speech. Even in countries with more robust hate speech legislation, such as France, the United Kingdom, and Canada, the checks and balances needed to evaluate potentially harmful content and differentiate it from benign content are not evolving at the same rate as the content itself.[8]

In the Middle East and North Africa (where the radicalization and foreign fighter problem is far more grave), the host sites’ capacity for, and engagement in, monitoring extremist content is comparatively lacking. However, many of these countries rely less on civil society organizations to help in this regard, as they maintain their own state censorship and surveillance apparatuses. Many of these countries, such as Turkey and Jordan, are far more willing to use these tools to censor content online and to place drastic limits on free speech.


Once content has been online for a certain period of time (roughly three to four hours), it is essentially online forever. As of now, there is simply not enough human capital within governments and companies to monitor all of this content manually, on a case-by-case basis.[9]

Christiane Hohn, Principal Adviser to the European Union Counter-Terrorism Coordinator, responded to these issues at a recent conference at the International Centre for the Study of Radicalisation and Political Violence (ICSR). Hohn said she “would argue in favour of automatic, proactive software, because it’s just not sufficient to rely on the manual flagging of individual posts.” The algorithmic models Hohn mentioned analyze hashtags and keywords in order to flag content, which is subsequently suspended by the service provider or by law enforcement. However, even the most efficient software inevitably suffers from false positives, incorrectly identifying and suspending content that is unrelated to terrorism or extremism. Moreover, automated suspensions without human discretion risk disrupting investigations of extremists online: automatic triggers mean that accounts being monitored by law enforcement may be inadvertently shut down, causing investigators to lose the network and restart costly investigations.
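To make the false-positive problem concrete, below is a minimal, purely illustrative sketch (in Python) of the kind of keyword- and hashtag-matching this software relies on. The watch list, sample posts, and function are hypothetical stand-ins, not any provider’s actual rules or tooling.

    # Illustrative sketch only: a naive keyword/hashtag flagger of the kind
    # described above. WATCH_TERMS and the sample posts are hypothetical.
    WATCH_TERMS = {"#examplebanner", "join the fight"}  # hypothetical watch list

    def flag_post(text: str) -> bool:
        """Return True if the post contains any watched hashtag or phrase."""
        lowered = text.lower()
        return any(term in lowered for term in WATCH_TERMS)

    posts = [
        "Join the fight against ISIS propaganda online.",  # counter-speech
        "Horrified by today's attack. #examplebanner",     # news commentary
        "Brothers, join the fight. #examplebanner",        # recruitment-style call
    ]

    for post in posts:
        print(flag_post(post), "-", post)

All three sample posts trip the same filter, even though only one resembles a recruitment call: keyword matching alone cannot distinguish condemnation or reporting from incitement, which is precisely the false-positive and discretion problem described above.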

The human capital problem is serious, and to complicate matters further, the impact of successful takedown campaigns is murky.

In some extremist online circles, including those of ISIS, users view having a suspended account as a badge of honour; essentially, increased suspensions equate to greater legitimacy.[10] Hohn, and the EU Counter-Terrorism Coordinator, conceptualize takedowns as just one strand of the response, emphasizing the importance of counter-narratives to augment monitoring efforts. Governments, private corporations, and civil society have all been active in responding to digital radicalization within the digital communications space, mobilizing moderate and alternative voices to appeal to those most vulnerable to radical recruitment.

Brian Fishman, Facebook’s Policy Manager covering terrorism and violent extremism, remains optimistic about the problem. At a recent panel on ISIS in Europe, hosted by Trends Research and Advisory, Fishman said, “[Facebook] holds true to the value that we want to connect the world. We are in a process of flux when it comes to trying to deal with this. We want there to be contentious discourse on Facebook, and we want there to be political discussion around the political tensions that give rise to terrorism sometimes. But you cannot be a terrorist and operate on Facebook.”

The purpose of the above discussion has been to show that, even with these obviously violent, politically charged examples, there is an invisible line social media platforms straddle when it comes to extremist content. These companies either deem horrendous content unfit for mass circulation or, when they believe it is their civic responsibility, provide the platform that ensures its mass circulation. The content cannot be quantified by level of bloodshed, by its specific proclamations, or by its level of exposure. Making these differentiations is a nuanced, painstaking, and ethically loaded process, one mired in a policy process that, while not opaque, is slow, largely undefined, and constantly evolving. As print, television, and radio did before them, social media companies must come of age, form their own traditions, and set the precedents of their industry.

Unfortunately, however, they must do so at an increasingly tumultuous time.