Institute News

“Insight or Incite?”

June 7, 2019

“How Social Media Based Inflammatory Speech Can Catalyze Violence”


Introduction

On February 26, 2019, during a House of Commons Ethics Committee session, Liberal MP Nathaniel Erskine-Smith raised concerns about inflammatory speech targeting Prime Minister Justin Trudeau found on the “Canada’s Yellow Vest” Facebook page. Comments on this page called the Prime Minister a “traitor to our country” who deserves to be “hung for treasonous crimes.”[1] Minister of Democratic Institutions Karina Gould responded by stating, “I think that we are moving in a direction where we need to require social-media companies to act.”[2] Staff Sergeant Tania Vaughan stated that the RCMP are aware of the comments made on Facebook and take all threats against the Prime Minister very seriously.[3] The issue of this type of speech targeting the PM was first brought to public attention by the Clerk of the Privy Council, Michael Wernick. He told the Commons Justice Committee that he worries about incitement to violence when people use terms such as “treason” and “traitor” in open discourse. Mr. Wernick stated that this type of language leads to “assassination” and that he fears somebody is going to be shot during the federal election campaign later this year.[4] Following his SNC-Lavalin testimony, Mr. Wernick himself became a target of such speech, being called a “traitor” in a profanity-laced social-media post.[5]

Facebook has begun removing content from many of the Canadian “Yellow Vest” Facebook pages, stating that the content violated its community standards. However, comments advocating for the death of the Prime Minister remained.[6] At the time of this report, one stated, “Trudeau needs to be shot,” and another, left in the comment section of a story about the PM, was an image of a sniper with the caption “one man can change the world with a bullet in the right place.”

The media coverage and conversation surrounding these types of comments made on Facebook has died down following recent developments in the SNC-Lavalin affair. Nevertheless, the issue of inflammatory online posts advocating for violence against this country’s political officials is an important one that deserves to be explored. (Initial investigation of the online discourse surrounding these posts revealed that many individuals defend this type of language; much of that commentary has since been deleted.) These commentators believed it is their right to express their current political dissatisfaction, and that doing so lets the government and people of Canada know how they feel about the current administration. Indeed, freedom of political speech is a centerpiece of democracy. However, when does speech become something more – something dangerous?

Purpose and Overview

The concerns brought forward by MP Nathaniel Erskine-Smith and retiring PCO Clerk Michael Wernick raise an interesting and concerning question about inflammatory, politically dissident language and speech that may sponsor violence in Canada. This article examines this type of speech in both an evidence-based academic context and a legal context. It argues that such language is not merely an exercise of one’s Charter rights that provides insight into the speaker’s political views, but rather incitement to violence. Ultimately, the article will demonstrate the connection between speech and political violence and examine the case law on social media-based incitement speech.

Dangerous Speech:
Inflammatory Speech Leading to Violence

History is rife with examples of ideologically based speech acting as a catalyst or driving factor for political violence. A 2013 United Nations General Assembly report cited Nazi Arthur Seyss-Inquart’s pre-genocidal speech – in which he refers to the Jewish population as the enemy of national socialism and as enemies with whom Nazi Germany cannot co-exist – as language which “[cultivated] fear and hatred, and [incited] mass violence against civilians.”[7] Further, following the 2003 war in Iraq, al-Qaeda in Iraq’s leader, Abu Musab al-Zarqawi, instigated a sectarian war against Iraq’s Shia population.[8] al-Zarqawi applied takfir, openly branding the Shia as apostates and heretics, and used language to amplify longstanding tensions and grievances between the two groups.[9] The result was a catalyst to mass violence by radical Sunnis against the Shia population, culminating in an attack on the Shia holy mosque in Samarra, which radical Sunnis destroyed with a suicide bomb.[10]

While examples of speech acting as a catalyst to violence are ubiquitous throughout history, few academic works specifically focus on examining how social media-based content can influence politically motivated violent acts. Much of the work in this area has focused on the language used in social media posts by extremists or other violent offenders, before engaging in mass public or political violence. This is covered in some detail in my previous work “Twitter Fingers Turn to Trigger Fingers”: An Analysis of Online Based Terrorist Risk Assessment, published by the Mackenzie Institute.

One of the best contemporary researchers on the relationship between public messaging and violence is Susan Benesch. Through numerous theoretical and empirical studies, Benesch has examined how inflammatory speech can lead to political violence and genocide in developing or conflict-affected nations.[11] While this research does not explicitly focus on political activist groups in developed nations, it provides an appropriate theoretical framework for the correlation between types of messaging and politically or ideologically motivated violence.

Through their work, Benesch and colleagues provide evidence that speech can encourage or provide justification for acts of violence in the minds of listeners. In other words, speech can act as a catalyst for violence.[12] Much of this research collected data from media content broadcast during the 2007, 2008, and 2013 Kenyan elections, as well as the 1994 Rwandan genocide – in which competing political and ethnic groups used inflammatory speech to incite violent acts by their supporters.[13] Benesch found that the broadcasting of inflammatory, dehumanizing, and hate speech tended to increase dramatically before outbreaks of mass political violence. During these times, political speakers would exploit and inflame animosity between ethnic or political groups, which proved an effective means of persuading a target audience to conduct and condone violent acts by members of its own in-group. Further, this inflammatory political language helped bolster support within their own communities.[14] Benesch classifies this type of language as Dangerous Speech, formally defined as an act of speech that has a reasonable chance of catalyzing or amplifying violence by one group against another, or that brings about harm directly or indirectly, with equal or greater brutality, by motivating others to think and act against members of another group.[15] While the cultures and languages of African countries are in ways dissimilar to Canada’s, human social psychology as it relates to group-based dynamics is arguably universal. An extensive body of academic literature, spanning multiple disciplines, supports the basic premise that much of human conflict is structured within in-group out-group processes.[16] This will be explored in more detail later in the article.

Through her research, Benesch created a list of five indicators that can be used to identify speech or messaging as Dangerous Speech. This is unique, as no other frameworks or methodologies in the academic literature are available for identifying the propensity for speech to act as a catalyst for violence. While the term “speech” is used, it is not limited to spoken words; it may include any form of expression, including text and images such as drawings, photographs, or films.[17]

The five variables used to determine Dangerous Speech are as follows:

  • A powerful speaker with a high degree of influence over the audience:

Case studies conducted by Benesch indicate this is one of the most significant and influential indicators. The speaker, or source of the message, can hold authority over an audience in a number of different forms. These include holding a legitimate office, holding a position of de facto leadership, being a prominent cultural or religious figure, or holding celebrity status. In some cases, the speaker’s authority may be present even without a social basis. For example, a speaker may acquire influence through a style of speaking that employs charismatic persuasion or skillful rhetoric.[18]

  • The audience has grievances and fears that the speaker can cultivate:

The primary audience is often indicated by the language or venue in which the message was delivered. When audiences are fearful of being victimized, harmed, or marginalized by an out-group or external threat, they are more likely to resort to violence, or to condone violence when it is committed on their behalf. Further, if the target audience is young males, there is a greater likelihood of violence.[19]

  • A speech act that is clearly understood as a call to violence:

The language itself is an influential feature. In some cases, it will be inflammatory speech with a direct call to engage in violent acts on behalf of a political cause. A rhetorical hallmark of this is the use of dehumanizing terms to refer to out-groups, such as vermin, pests, insects, or animals. In other cases, language will be indirect but is understood as a call to violence. In the case of the 2015 elections in Burundi, the verbs meaning “to eat” and “to wash” were used to refer to killing. During the Rwandan genocide, the phrase “go to work” was used as veiled language for killing, and the regional dialect word for cockroach was used to refer to Tutsis or Tutsi sympathizers.[20]

  • A social or historical context that is propitious for violence, for any of a variety of reasons including a longstanding competition between groups for resources, a lack of effort to solve grievances, or previous episodes of violence:

Longstanding underlying or previous conflicts between the relevant groups are another highly significant indicator. Grievance and a cultural narrative of historical injustice form an ideological landscape and climate that validates violence as a means of righting grievances.[21]

  • A means of dissemination that is influential in itself – for example, it is the sole or primary source of news for the relevant group:

Speech may hold more influence or have a greater impact based on its means of dissemination. In some cases, there is only one source of news or information, with no publicly available alternative suppliers, as with the notorious Rwandan radio station RTLM. Under these conditions, the impact of a message is magnified. Other means of dissemination that may contribute to a greater impact are platforms that make members of an audience feel they are part of a select and favoured group, such as closed social media groups. The insertion of a message into popular music may also help amplify it.[22] Moreover, “even the choice of language itself can be a critical form of dissemination: the same message communicated in the ‘mother tongue’ of an ethnic group can have more force than if it were delivered in a language shared by other groups; this reinforces the sense of solidarity within the group, and may encourage a feeling of impunity given the presumption that one will only be understood by co-linguals.”[23]

These five indicators were developed through extensive fieldwork, empirical research, and analysis of academic literature on topics including genocide studies, social psychology, and the philosophy of language. Benesch asserts that while a number of variables may affect the impact of speech on violent behaviour, the most relevant factors are all captured by the five indicators identified. It is important to re-emphasize that Benesch’s research into Dangerous Speech has not focused on messaging as it relates to politically dissident or activist groups in developed nations. However, there are clear parallels between the Dangerous Speech seen in vulnerable nations struggling with ethnic conflict and some of the speech or rhetoric used by activists or fringe political groups in developed nations – witness the contemporary Antifa and extreme alt-right confrontations and conflicts in the USA.

Understanding Group-Based Identity and Violence:

The aforementioned review of Dangerous Speech provides a set of principles that allows for a better understanding of the factors that link speech to violence. However, to better understand why these factors influence individuals to engage in violence, it is prudent to delve into the social psychology of in-group out-group dynamics. An extensive body of academic literature, spanning multiple disciplines, supports the basic premise that much of human conflict is structured within in-group out-group processes.[24] In other words, negative beliefs and values towards an out-group can encourage or excuse violence by the in-group towards that out-group.[25] This is greatly influenced by an individual’s group-based identity, and by the degree to which that group-based identity has been internalized or entrenched into their personal identity. The more an individual’s identity is rooted in a group, the more susceptible that individual is to engaging in perceived or defined pro-group behaviour that may even go beyond their own personal desires; this can include violence.[26] Online messaging can play into the concept of group-based or collective identity and can shape or influence behaviours, such as violent behaviour.[27] From the examined research, one can see how social media content advocating for violence has the potential to be very dangerous.

“Yellow Vests” in Canada:
A Movement for Reform, or Anti-Government?

The online content found on the social media platforms of the “Yellow Vest” movement in Canada is reported to hold significantly more fringe anti-government and extreme views than those seen in the movement’s French counterpart, where it originated.[28] These views are consistent with those of many of Canada’s growing far-right extremist and patriot-movement groups,[29] such as the “III%ers,” which appeared shortly after Justin Trudeau became Prime Minister.[30] Members of these groups are known to post photos of their numerous firearms and of firearms training online, demonstrating that they have the capacity to engage in violence if motivated to do so.[31] It is important to note that the III%ers are not designated as a domestic terrorist group, and this article is not suggesting they are. However, more radical groups or individuals who self-identify with these types of groups may be inspired to engage in violent acts by online anti-government speech or content, which could include the calls to violence made by the Canadian “Yellow Vest” movement. Lone actor terrorism today is often associated with jihadism; however, the term lone wolf terrorist was first developed in the 1980s as an extension of the far-right anti-government movement.[32] Through the spread of far-right extremist rhetoric and literature, such as The Turner Diaries, many individuals who were otherwise not affiliated with any far-right group became radicalized by the ideology and engaged in self-directed violent acts on behalf of the greater movement.[33] Adding to this, recent research has demonstrated that the consumption of online content can act as a proximal mobilizer for extremist violence.
A 2017 study conducted by Paul Gill and colleagues found that 14% of terrorists in their data sample engaged in violence after witnessing something online.[34] To illustrate this, Gill and colleagues make reference to the case of Roshonara Choudhry, who stabbed and attempted to kill British Labour Party MP Stephen Timms. Choudhry stated that after watching an online video of Sheikh Abdullah Azzam she understood that “even women are supposed to fight.”[35] Moreover, Brenton Harrison Tarrant, the Christchurch, New Zealand mosque shooter, was reported to have been active on online extremist forums.[36]

Online Incitement of Violence and the Law:

This section of the article will refrain from an examination of Canadians’ section 2(b) Charter rights; rather, it will focus on applicable case law. In my previous article, “Twitter Fingers Turn to Trigger Fingers”: An Analysis of Online Based Terrorist Risk Assessment, I examine the cases of R v. Le Seelleur (2014, QCCQ) and R. v. Hayes (2017, SKPC). In both cases, the accused used social media platforms to make direct threats of violence against politicians. The messages projected on the “Yellow Vest” Facebook pages differ from these: they do not make direct threats against the PM, but rather advocate for violence against him.

A comprehensive search identified only one Canadian court case in which an accused was charged with a similar offence: Joad c. R. (2016 QCCA). Here the accused was convicted under s. 464 of the Criminal Code (counselling the commission of an indictable offence – murder) for posting Facebook messages that encouraged others to murder Syrian journalists supporting Bashar al-Assad.[37] Under this section of the Criminal Code, anyone who counsels another person to commit an indictable offence is guilty of an indictable offence – regardless of whether the offence is committed.[38] In Joad c. R., the accused posted, “We must kill all Syrian journalists: those who participated in justifying the actions of Assad. We must kill them all and have peace.”[39] In his verdict, “the [trial] judge explained that the text posted on Facebook clearly calls for murder without any further proof being needed to conclude that the …alleged crimes were committed.”[40] However, the accused successfully appealed his conviction, explaining that, when he advocated for the death of journalists who defend the actions of President al-Assad, he meant for it to occur after a trial, by means of the death penalty. He added “…that he never wanted to encourage people to do anything illegal.”[41] The appeal judge concluded that the trial judge could not determine the question of mens rea without first addressing the accused’s explanations, and that the trial judge reached his ruling in haste and only vaguely referred to the essential elements necessary for a conviction.[42]

It should be noted that while Joad c. R. (2016 QCCA) is an example of a call to violence against a group of people, the actions of the accused do not qualify as hate speech. This is an important distinction to understand; like Joad c. R., the calls to violence against the PM also do not qualify as hate speech. In Canada, what is commonly referred to as hate speech falls under sections 318 and 319 of the Criminal Code. Section 318 – Advocating Genocide – requires advocating or promoting, with intent, the destruction in whole or in part of any identifiable group. This can include “killing members of the group; or deliberately inflicting on the group conditions of life calculated to bring about its physical destruction.”[43] The section defines an identifiable group as “…any section of the public distinguished by colour, race, religion, national or ethnic origin, age, sex, sexual orientation, gender identity or expression, or mental or physical disability.”[44]

Section 319 – Public Incitement of Hatred – requires communicating statements in any public place that incite hatred against any identifiable group, where such incitement is likely to lead to a breach of the peace.[45] This section holds the same definition of an identifiable group as s. 318, and defines a public place as “any place to which the public have access as of right or by invitation, express or implied.”[46] Social media has been deemed a public place in Canadian courts.[47]

However, arrest or conviction related to hate speech may be more challenging than one might initially think. An example of this is the recent conviction of James Sears and LeRoy St. Germaine for publishing hate speech in their Toronto-area newspaper, Your Ward News.[48] The two men behind the free newspaper published articles which promoted homophobia and hatred against women and Jews, including advocating for the legalization of rape and denying that the Holocaust occurred. While the judge did rule that “both men were fully aware of the unrelenting promotion of hate,”[49] prosecutor Jamie Klukach noted that “obtaining the guilty verdict was difficult, because the level of intent the Crown had to prove was extremely high.”[50] A further issue that may prove problematic in Canadian courts has to do with the type of language used. The speech used to refer to an identifiable group (as defined under the law), or to incite hatred or violence, may be veiled in euphemism or come from a sub-culture-based lexicon that is not commonly understood by the general public. As referenced above, Benesch notes that the regional words for “eat” and “wash” were used to refer to killing during the 2015 elections in Burundi, and the regional dialect word for cockroach was used to refer to Tutsis during the Rwandan genocide.[51] Demonstrating in Canadian courts that veiled language refers to an identifiable group or is a call to violence may prove challenging. Moreover, as Benesch and colleagues point out, the use of veiled or covert language “…reinforces the sense of solidarity within the group, and may encourage a feeling of impunity given the presumption that one will only be understood by co-linguals.”[52] Thus, not only is it harder to prove that such language constitutes the commission of an offence, the veiling also makes the call to violence more effective.

Conclusion

Freedom of political speech is a centerpiece of democracy; yet freedom of speech is not and cannot be absolute. Words can be dangerous. It is understood in Canada that freedom of speech must be exercised within reason, and there are processes and laws to restrict expression that incites violence or other breaches of the peace. Academia has identified indicators to appraise speech for its risk of violence, which may aid policy makers in determining what content should not be allowed online, and may aid intelligence efforts to identify individuals who warrant further investigation. However, when it comes to enforcing the law, establishing that high-risk speech constitutes the commission of a criminal offence can at times be difficult and problematic. Law enforcement and courts in this country have a challenging job: they need to demonstrate beyond a reasonable doubt that dangerous speech is not just politically dissident insight, but in fact incites.

References:

[1] Dickson, Janice. “Liberals May Have to Require Social-Media Companies to Act on Hate Speech, Minister of Democratic Institutions Says.” The Globe and Mail, The Globe and Mail, 27 Feb. 2019, www.theglobeandmail.com/politics/article-gould-says-liberals-may-have-to-require-social-media-companies-to-act/.

[2] ibid.

[3] ibid.

[4] ibid.

[5] The Canadian Press. “’You #$@&% Idiot’: What the Trolls Called Michael Wernick after His SNC-Lavalin Testimony.” National Post, 8 Mar. 2019, nationalpost.com/news/politics/top-bureaucrat-gets-profane-messages-after-defending-government-on-snc-lavalin.

[6] Bell, Stewart, and Mercedes Stephenson. “Facebook Begins Removing Comments from Yellow Vests Canada Group Following Talk of Killing Trudeau.” Global News, Global News, 10 Jan. 2019, www.globalnews.ca/news/4830265/facebook-removes-comments-yellow-vests-canada-trudeau-threats/.

[7] Leader Maynard, Jonathan, and Susan Benesch. 2016. “Dangerous Speech and Dangerous Ideology: An Integrated Model for Monitoring and Prevention.” Genocide Studies and Prevention: An International Journal 9, no. 3: 8 (p. 70)   https://scholarcommons.usf.edu/cgi/viewcontent.cgi?article=1317&context=gsp

[8] Fishman, Brian. 2009. “Dysfunction and Decline: Lessons Learned from Inside Al Qa’ida in Iraq.” Combatting Terrorism Center at West Point: Harmony Project; Greenwald, Aaron. 2010. “Trends in Terrorist Activity and Dynamics in Diyala Province, Iraq, during the Iraqi Governmental Transition, 2004-2006.” Perspectives on Terrorism Vol. 4, No. 1, pp. 47-62; Kilcullen, David. 2009. The Accidental Guerrilla (Oxford: Oxford University Press).

[9] ibid.

[10] ibid.

[11] Leader Maynard, Jonathan, and Susan Benesch.2016. “Dangerous Speech and Dangerous Ideology: An Integrated Model for Monitoring and Prevention.” Genocide Studies and Prevention: An International Journal 9, no. 3: 8; Benesch, Susan. 2014. “Countering dangerous speech to prevent mass violence during Kenya’s 2013 elections.” Final Report: 1-26. Benesch, Susan and Abramowitz, Michael. 2013. “A Prelude to Murder: Calling Humans Vermin.” Wall Street Journal, December 18.  http://www.wsj.com/articles/SB10001424052702303497804579241983104982434; Benesch, Susan. 2011. “The Ghost of Causation in International Speech Crime Cases.” In Propaganda, War Crimes Trials & International Law: From Speakers’ Corner to War Crimes. Edited by Dojcinovic, Predrag, New York: Routledge; Benesch, Susan. 2007. “Vile Crime or Inalienable Right: Defining Incitement to Genocide.” Virginia Journal of International Law .vol 48: 485-528

[12] ibid.

[13] Benesch, Susan. 2011. “The Ghost of Causation in International Speech Crime Cases.” In Propaganda, War Crimes Trials & International Law: From Speakers’ Corner to War Crimes. Edited by Dojcinovic, Predrag, New York: Routledge; Susan. 2014. “Countering dangerous speech to prevent mass violence during Kenya’s 2013 elections.” Final Report: 1-26.

[14] ibid.

[15] Benesch, Susan. 2012. “Dangerous Speech: A Proposal to Prevent Group Violence.” https://worldpolicy.org/wp-content/uploads/2016/01/Dangerous-Speech-Guidelines-Benesch-January-2012.pdf

[16] Hewstone, Miles, Mark Rubin, and Hazel Willis. 2002. Intergroup bias. Annual Review of Psychology 53:575-604; Levine, John M., and Norbert L. Kerr. 2007. Inclusion and exclusion: Implications for group processes. In Social psychology: Handbook of basic principles, edited by A. W. Kruglanski and E. T. Higgins. New York: Guilford.

[17] Benesch, Susan. 2012. “Dangerous Speech: A Proposal to Prevent Group Violence.” https://worldpolicy.org/wp-content/uploads/2016/01/Dangerous-Speech-Guidelines-Benesch-January-2012.pdf

[18] Leader Maynard, Jonathan, and Susan Benesch. 2016. “Dangerous Speech and Dangerous Ideology: An Integrated Model for Monitoring and Prevention.” Genocide Studies and Prevention: An International Journal 9, no. 3: 8; Benesch, Susan. 2012. “Dangerous Speech: A Proposal to Prevent Group Violence.” https://worldpolicy.org/wp-content/uploads/2016/01/Dangerous-Speech-Guidelines-Benesch-January-2012.pdf; Benesch, Susan. 2011. “The Ghost of Causation in International Speech Crime Cases.” In Propaganda, War Crimes Trials & International Law: From Speakers’ Corner to War Crimes. Edited by Dojcinovic, Predrag, New York: Routledge.

[19] ibid.

[20] ibid.

[21] ibid.

[22] ibid.

[23] ibid.

[24] Hewstone, Miles, Mark Rubin, and Hazel Willis. 2002. Intergroup bias. Annual Review of Psychology 53:575-604; Levine, John M., and Norbert L. Kerr. 2007. Inclusion and exclusion: Implications for group processes. In Social psychology: Handbook of basic principles, edited by A. W. Kruglanski and E. T. Higgins. New York: Guilford.

[25] Angie, Amanda D., Josh L. Davis, Matthew T. Allen, Cristina L. Byrne, Gregory A. Ruark, Cory B. Cunningham, Toni S. Hoang et al. 2011.”Studying ideological groups online: Identification and assessment of risk factors for violence.” Journal of Applied Social Psychology 41, no. 3: 627-657.

[26] Borum, Randy, and M. Pynchon. 1999. “Assessing threats of targeted group violence: Contributions from social psychology.” Behavioral Sciences & the Law17, 339-355; Swann Jr, William B., Jolanda Jetten, Angel Gómez, Harvey Whitehouse, and Brock Bastian. 2012. “When group membership gets personal: a theory of identity fusion.” Psychological review 119, no. 3: 441.

[27] Angie, Amanda D., Josh L. Davis, Matthew T. Allen, Cristina L. Byrne, Gregory A. Ruark, Cory B. Cunningham, Toni S. Hoang et al. 2011.”Studying ideological groups online: Identification and assessment of risk factors for violence.” Journal of Applied Social Psychology 41, no. 3: 627-657.

[28] Dawson, Tyler. “Credibility of Canada’s Fledgling Yellow Vest Movement Threatened by Extremists, Fringe Groups.” National Post, 21 Jan. 2019, https://nationalpost.com/news/canada/canadas-fledgling-yellow-vest-movement-struggles-to-distance-itself-from-extremists-fringe-groups.

[29] Tunney, Catharine. “CSIS Chief Says His Agency Is Dealing with Right-wing Extremism ‘more and More’ | CBC News.” CBCnews. April 10, 2019, https://www.cbc.ca/news/politics/csis-right-wing-white-supremacy-1.5092304.

[30] Lamoureux, Mack. “The Birth of Canada’s Armed, Anti-Islamic ‘Patriot’ Group.” Vice, Vice, 14 June 2017, www.vice.com/en_ca/article/new9wd/the-birth-of-canadas-armed-anti-islamic-patriot-group.

[31] ibid.

[32] Bates, Rodger A. 2012. “Dancing with wolves: Today’s lone wolf terrorists.” The Journal of Public and Professional Sociology 4, no. 1: 1.

[33] ibid.

[34] Gill, Paul, Emily Corner, Maura Conway, Amy Thornton, Mia Bloom, and John Horgan. “Terrorist use of the Internet by the numbers: Quantifying behaviors, patterns, and processes.” Criminology & Public Policy 16, no. 1 (2017): 99-117.

[35] ibid.

[36] Ravndal, Jacob Aasland. “The Dark Web Enabled the Christchurch Killer.” Foreign Policy. March 16, 2019. Accessed April 11, 2019. https://foreignpolicy.com/2019/03/16/the-dark-web-enabled-the-christchurch-killer-extreme-right-terrorism-white-nationalism-anders-breivik/.

[37] Summary: Joad c. R., 2016 QCCA 1940 https://canliiconnects.org/en/summaries/45450; Joad c. R., 2016 QCCA 1940 (CanLII), http://canlii.ca/t/gvz1t

[38] Criminal Code of Canada, Section 464 Counselling an offence. https://laws-lois.justice.gc.ca/eng/acts/C-46/section-464.html

[39] [Para 1] Joad c. R., 2016 QCCA 1940 (CanLII), http://canlii.ca/t/gvz1t

[40] [Para 7] Joad c. R., 2016 QCCA 1940 (CanLII), http://canlii.ca/t/gvz1t

[41] ibid., [Para 20]

[42] ibid.

[43] Criminal Code of Canada, Section 318 – Advocating Genocide, https://laws-lois.justice.gc.ca/eng/acts/c-46/section-318.html

[44] ibid.

[45] Criminal Code of Canada, Section 319 – Public Incitement of Hatred, https://laws-lois.justice.gc.ca/eng/acts/c-46/section-319.html

[46] ibid.

[47] R. c. Rioux, 2016 QCCQ 6762 (CanLII), http://canlii.ca/t/gsmc1; Powell, Betsy. “Toronto Man Guilty of Rare Charge of Advocating Genocide through Twitter and Emails.” Toronto Star. February 26, 2019. https://www.thestar.com/news/gta/2019/02/25/toronto-man-guilty-of-rare-charge-of-advocating-genocide-through-twitter-and-emails.html.

[48] Perkel, Colin. “2 Men behind Free Toronto-area Paper Guilty of Promoting Hate against Women, Jews | CBC News.” CBCnews. January 24, 2019. Accessed May 26, 2019. https://www.cbc.ca/news/canada/toronto/your-ward-promoting-hate-1.4990806.

[49] ibid.

[50] ibid.

[51] Leader Maynard, Jonathan, and Susan Benesch. 2016. “Dangerous Speech and Dangerous Ideology: An Integrated Model for Monitoring and Prevention.” Genocide Studies and Prevention: An International Journal 9, no. 3: 8; Benesch, Susan. 2012. “Dangerous Speech: A Proposal to Prevent Group Violence.” https://worldpolicy.org/wp-content/uploads/2016/01/Dangerous-Speech-Guidelines-Benesch-January-2012.pdf; Benesch, Susan. 2011. “The Ghost of Causation in International Speech Crime Cases.” In Propaganda, War Crimes Trials & International Law: From Speakers’ Corner to War Crimes. Edited by Dojcinovic, Predrag, New York: Routledge.

[52] Leader Maynard, Jonathan, and Susan Benesch. 2016. “Dangerous Speech and Dangerous Ideology: An Integrated Model for Monitoring and Prevention.” Genocide Studies and Prevention: An International Journal 9, no. 3: 8 (p. 79).   https://scholarcommons.usf.edu/cgi/viewcontent.cgi?article=1317&context=gsp

Biography: Jason Bakas holds a Master of Arts in Intelligence and Security Studies and is a research affiliate with The Canadian Network for Research on Terrorism, Security and Society (TSAS). He has previously worked as a research analyst for the Government of Canada.
