The below is Part Two of Two of one of the winning essays from our inaugural “Langley Hope Academic Excellence in Security and Defence Commentary Award Programme.” Stay tuned in the coming weeks for the publication of more winning and noteworthy submissions.
The Online Disinhibition Effect
Research by Suler (2004) found that “people say and do things in cyberspace that they wouldn’t ordinarily say and do in the face-to-face world… so pervasive is the phenomenon that a term has surfaced for it: the Online Disinhibition Effect.” 1 Online disinhibition arises from six fluid factors that intersect and interact with one another. (1) Dissociative anonymity: the internet allows individuals to adopt altered identities, which ultimately leads some to feel that whatever they say or do cannot be linked back to their offline life.2 As Suler states, “people might even convince themselves that those online behaviors ‘aren’t [them] at all’.” 3 (2) Invisibility: individuals feel the internet allows them to conceal their presence when visiting websites and other online locations. This leads some to explore online sites they otherwise would not, such as those related to criminality or deviance. (3) Asynchronicity: the text-driven communication of the internet gives individuals the freedom to take hours, days, or even months to reply. “Some people may even experience asynchronous communication as ‘running away’ after posting a message that is personal, emotional, or hostile. It feels safe putting it ‘out there’ where it can be left behind.” 4 (4) Solipsistic introjection: individuals become assimilated, or “introjected,” into their online relationships. An individual’s internal representation of an online relationship may drive that individual to play out a role or fantasy based on “personal expectations, wishes, and needs.” 5 (5) Dissociative imagination: the internet creates an environment in which an individual can dissociate their online behaviour from their offline behaviour.
In this way the “distinction between online fantasy environments and online social environments may be blurred.” This is particularly problematic for individuals with “a predisposed difficulty in distinguishing personal fantasy from social reality.” 6 Lastly, (6) Minimization of authority: the perceived absence of authority figures gives an individual the feeling of a peer relationship, creating the perception that everyone has an equal opportunity to voice their views and beliefs. This is significant, as people are reluctant to say what they really think in the presence of an authority figure.7 It is important to note that online disinhibition is not absolute. The degree to which an individual feels or behaves in a disinhibited manner will vary, with some displaying only small deviations from their baseline offline behaviour and others a far more dramatic disinhibition.
Suler describes how some or all of the six factors that make up the Online Disinhibition Effect can manifest in an individual in two ways. (1) Toxic disinhibition, in which individuals engage in behaviour, or explore parts of the internet, that is uncharacteristic of their true self. (2) Benign disinhibition, in which individuals share very personal things about themselves, including revealing secrets, emotions, and desires.8
The Online Disinhibition Effect presents a dichotomy of reliability with regard to standalone online-based risk assessment. An individual in the “real world” may not display any outward indicators of extremism, or of risk of mobilizing to violence, but use the internet as a way of expressing their true self. Conversely, an individual may merely be playing the role of an extremist on the edge of mobilizing to violence, simply for the thrill of living out an altered identity or to satisfy other esteem- and status-driven desires. Both scenarios are problematic when conducting standalone online-based risk assessment, as they bring into question the reliability of the data.
More on Online-Offline Behaviour
None of the online analytical risk assessment or sentiment analysis models examined in this paper discussed the Online Disinhibition Effect, or any factors related to it. In total, only a limited number of risk assessment studies have researched the relationship between online actions and offline behaviours.9 In conducting this research, over 100 peer-reviewed academic works related to risk assessment were examined using a qualitative Critical Interpretive Synthesis (CIS) methodology. Only two studies were identified that discussed the limitations of assessments based on online behaviours: Hui’s (2016) publication “Social Media Analytics for Intelligence and Countering Violent Extremism” and Shortland’s (2016) publication “On the Internet, Nobody Knows You’re a Dog: The Online Risk Assessment of Violent Extremists”.
Hui’s paper does not directly cite Suler (2004), nor does it directly discuss the Online Disinhibition Effect. However, Hui does discuss how online activity may “underrepresent reality.” 10 Citing Carter, Maher, and Neumann (2014), Hui explains how individuals could be motivated to create false extremist accounts to gain online status or influence.11 From this Hui concludes, “Mining and analysis of accounts for security purposes must therefore be aware that online support and influencers need not indicate similar dynamics in the real world.” 12
Like Hui, Shortland does not directly cite Suler (2004) or directly discuss the Online Disinhibition Effect. Nevertheless, he does assert that high-risk online behaviour does not always translate into actual risk, attributing an individual’s online conduct in part to “online anonymity.” 13 To illustrate this point, Shortland discusses research on individuals who consume online images of child sexual abuse. Citing Long et al. (2013) and McManus et al. (2011), Shortland concludes that “not all individuals engaged in online depictions of child abuse had a sexual interest in children, and the majority would not engage in an act of child sexual abuse, not due to lack of opportunity, but because their prurient interests are solely confined to the online sphere.” 14 Based on this analysis, Shortland states that “this research domain contains several lessons that we can apply to the issue of online risk assessment of violent extremism, namely that ‘severity’ of online behaviour is not always indicative of greater risk.”15
Out of a systematic review of 7,262 articles, subsequently synthesised into “84 papers” on studies “that addressed the risk assessment of violent extremists, the process of conducting risk assessment online, and extremism online,” 16 Shortland concluded: “To date, no research has systematically compared the online behaviours of those who also engage in extremist behaviour offline. At the same time, research that does investigate the online behaviours of individuals engaged in extremist materials online (outlined above) does not identify, nor factor in, the (albeit) small subsample of those individuals who are currently, or will in the future, move their online behaviour to offline acts of extremist violence.” 17 In this one can see the significant limitations of standalone online-based risk assessment. That is not to say that online-based risk assessment holds no value; rather, it should not, where possible, be relied upon as a standalone method of assessment. Establishing an individual’s offline behaviours, attitudes, and beliefs is a critical preliminary step. Only in this way can an evaluator determine that an individual’s online activities are representative of that individual in the “real world”. Further, even after consistency has been established between online and offline behaviours, attitudes, and beliefs, the information acquired from online-based collection must be considered within the context of its limitations; it cannot be seen as absolute in its reliability.
Risk Assessment and the Law in Canada
This article has discussed how conclusory analysis based on online violence risk assessment is a form of structured professional judgement. The intent of risk assessment tools is to support robust decision making. Violence risk assessment in a clinical setting is used in various Canadian legal contexts, including criminal, civil, and family law, as well as bail determinations and decisions to release inmates from correctional facilities.18 While repeated validity has been demonstrated for many tools related to criminal violence (outside of terrorism), risk assessment is still not statistically absolute in its prediction and appraisal of future violence.19 For this reason it does not meet the burden of proof, beyond a reasonable doubt, required to convict on heightened violence risk alone. Therefore, Canadian law enforcement’s decisions to arrest and prosecute a suspected terrorist cannot be based solely on conclusory judgments predicated on an online appraisal of risk. As stated earlier in the paper, one exception to this may be section 83.3 of the Criminal Code of Canada. Under this terrorism-specific law, the police, on reasonable grounds of suspicion, can arrest a person if they believe the arrest is likely to prevent the carrying out of a terrorist activity. To the author’s knowledge, section 83.3 has not yet been used to arrest a terrorist suspect, nor has it been challenged in Canadian courts. It is unclear whether online high-risk behaviour demonstrating a mobilization to violence would, on its own, be sufficient to arrest a potential terrorist under this section of the Code. That aside, the following cases demonstrate how online outbursts of grievance or leakage, which are not direct threats but which exhibit risk of mobilizing to violence, are unable to satisfy courts that the threshold needed for criminal conviction has been met.
Further, even social media posts that contain the essential elements of uttering threats of violence do not always satisfy the courts in convicting an accused, even in a national security context. This is a significant point of frustration for law enforcement and public safety agencies, who need to act quickly after discovering posts that may precede an attack, and to arrest terrorist actors before they engage in violence.
Canadian Case Law
In R v. Lee (2010, ONCJ), the accused had previously posted comments related to his disdain for the police and the Jewish community, consistent with a radical right-wing ideology. In addition, he posted biblical references, political opinions, images of swastikas, and references to mass killings on his Facebook page. This gained the attention of the police, who interviewed and cautioned the accused. Following his interaction with the police, the accused posted a status update on his profile which stated: “I’m wearing black and I’m riding black this time around…I’m really sorry however you never thought this day would come, and didn’t want it to be this way… I never knew why you would frame me nor put me on the cross for a few dollars… but I don’t care, if you are a priest, judge, cop, lawyer, commoner, or teacher… I’m bringing death with me this time around.” 20 The accused was subsequently arrested and charged with uttering death threats. While the accused’s post was not an explicitly worded death threat, it was argued that the accused had associated the colour black with death in previous posts.21 It is clear the accused’s online activity constituted risk-related warning behaviours, and most online risk methodologies would identify this behaviour as indicating a mobilization to violence. The charges in this case were based solely on social media evidence. The court concluded that the Crown could not prove beyond a reasonable doubt that any of the posted content constituted a threat. It is unclear whether law enforcement in this case presented their evidence in a way that explained that the accused’s language was consistent with the warning behaviour of leakage, which is often signalled in the pre-attack stage of violent action,22 and that the post should therefore be interpreted as a threat. In any event, the judgment stated, “The context of threat combined with Mr. Lee’s explanation as to when and why he wrote the message has left me in a state of reasonable doubt.” 23
In R v. Hamdan (2017, BCSC), the accused was charged with four terrorism counts in relation to posts he made on various Facebook pages and profiles he operated. R v. Hamdan is widely recognised as a watershed case that changed the way Canadian law enforcement conduct and collect social media-based evidence for criminal investigations. Like R v. Lee, the evidence in this case was predicated solely on the accused’s Facebook posts. Crown prosecutors identified numerous key posts as having particular relevance to terrorism, including ISIS symbols, references to terrorist attacks that took place in Canada and the United States, and re-posts of material by others who supported jihadist terrorism. In one of these re-posts, “the author … [described] in detail how a lone wolf can kill non-believers and apostates and recommends weapons: knives, guns with silencers, poison and vehicles for running over people. He then describes methods of transportation to escape after killing an apostate. He also gives advice on the clothing for a lone wolf to wear to avoid detection. Finally, he reminds lone wolves to keep their actions a secret from all.” 24 It should be noted that none of the charges against the accused related to threats of violence, but rather to counselling and instructing others to engage in terrorism-related offences.25 However, some would argue the accused’s online actions did constitute warning behaviours of mobilization to violence, and most online risk methodologies would identify the accused as posing a high level of risk.
Ultimately the judge ruled that it was difficult to conclude beyond a reasonable doubt that the accused’s Facebook content deliberately encouraged or directly induced terrorist activity, and that the accused’s posts were an expression of political views related to the current geopolitics of the Middle East. The accused was subsequently found not guilty on all four charges.26 Given the accused’s claim of “expression of political views,” it is doubtful that framing the evidence in the context of a structured or unstructured risk methodology would have held any influence over the judge’s decision, as it would not have affected the burden of proof related to the charges.
Most recently, an interesting case has presented itself which is somewhat contrary to the ones discussed above. On September 7th of this year, following a drug trafficking investigation, Hells Angels associate Denis Desputeaux posted to his Facebook page a photo of a police officer with a bullet wound to his face, adding the comment “Talk to the police in a language they understand.” 27 Desputeaux’s Facebook post was considered by the court to be a threat. The judge noted that “given the context it was comparable to someone posting a hate message on a social network,” and that “It is more than a photo… why [should it] be less important than a [case] where a member of a visible minority was [threatened].” As a result, Desputeaux was required to provide a sample of his DNA to be stored in a national database, which is used to determine whether an individual is linked to other unsolved crimes.
This case is distinguished from R v. Hamdan and R v. Lee by the fact that an image and comment that did not convey a direct threat of violence were considered by the court to be a threat, and not a public expression protected by the Charter. Moreover, unlike Hamdan and Lee, Desputeaux’s offline behaviour, as an associate of a violent motorcycle gang, can be established, albeit this may not have been established in court. The details of this case are somewhat unclear; at the time of writing, the court ruling is not available on a public legal database. Nevertheless, the potential implications for legal precedent in counterterrorism rulings are interesting. Future rulings may find a suspected terrorist guilty based on posted images and comments that convey a violent message, but not necessarily a direct threat.
Apart from Desputeaux, the rulings examined above demonstrate the potential challenges of prosecuting individuals based on internet posts that indicate a high risk of violence but are not phrased as direct threats. Yet even when social media posts convey direct threats of violent action within the context of a national security offence, they still may not satisfy Canadian courts. A review of the literature shows variability in rulings.
In R v. Le Seelleur (2014, QCCQ), the accused was arrested and charged with uttering threats after posting on her Twitter account the comment “Good get the bitch out of there before I bomb her,” in relation to a CTV article entitled “Pauline Marois ready to call an election.”28 The accused, who was 19 years old at the time of the offence, stated to police that she wrote the message “without thinking about it clearly” and in a moment of anger following one of the Minister’s comments.29 Further, the accused stated that she had no intention of taking any negative action towards Mrs. Marois, and the court noted that she fully cooperated with the police when they came to her parents’ house regarding her post.30 Despite this, the accused was found guilty of uttering threats.31
Similarly, in R. v. Hayes (2017, SKPC), the accused made a number of Facebook posts conveying threats to Prime Minister Justin Trudeau and Alberta Premier Rachel Notley. In one post the accused wrote: “Imma buot to go shoot this mother [expletive]dead. Ya hearing this Facebook? RCMP whom troll Facebook lookn for threatening comments? Hear me now you dumb [expletive] who voted Liberals into power. You have just started the war against all Canadians by our federal government… And I will not stand it… (sic).” 32 Despite the accused stating that he had no intention of harming the Prime Minister and that he wrote the posts out of frustration,33 and despite the defence arguing that, based on the context in which the posts were written, a reasonable person would not find these words to be a credible threat to cause harm or death to the Prime Minister,34 the accused was found guilty of uttering threats.35 The rulings in R v. Le Seelleur and R. v. Hayes may not come as a surprise to most; many would see these as clear-cut cases of uttering threats. However, in R. v. McRae (2013, SCC), the court stated that the mens rea of this offence is the intent to have threatening words uttered or conveyed to intimidate or be taken seriously.36 Yet in both cases the court had established that the accused did not intend to act on the threats and that the posts were meant as expressions of frustration, which, in the author’s opinion, could reasonably be seen as not being serious threats.
Interestingly, the case of R. v. Sather (2008, ONCJ) is contrary to those examined above. While this case falls outside the context of terrorism or national security, it may nevertheless have implications for future counterterrorism rulings. In R. v. Sather, the accused made several Facebook posts in which he made death threats to members of the York Region Children’s Aid Society,37 related to the loss of the accused’s son. The court determined these posts, “viewed objectively…would convey a threat of serious bodily harm to a reasonable person reading them.” 38 However, the accused argued that his posts were not meant to instil fear or intimidate, and that he used Facebook to construct an alternate persona.39 Further, the posts were intended to be “expressions of emotions directed towards people who might be sympathetic to [the accused’s] anger.” 40 The accused “testified that he posted these items to blow off steam as he had been taught in a prior anger management course.” 41 The judge ruled in favour of the accused, stating, “given all that I find from the above reasons, and observing Mr. Sather to testify in a straightforward manner about these matters, including a number of admissions that did not cast a positive light on himself, including admitting he thought about killing someone, I conclude he was telling the truth and I accept his evidence.”42 Accordingly, the accused was found not guilty of uttering threats.43
While this court ruling does not reference the Online Disinhibition Effect or any of the factors found within it, it is interesting that the defence successfully used elements of this theory, arguing that because the accused’s online conduct deviated from his offline (real-world) behaviour,44 the threats lacked the essential element related to the mens rea of the offence.45 This case has critical ramifications for online-based risk assessment of terrorist actors. It has the potential to allow suspected terrorists to argue that their online leakage, or observable warning behaviours, were simply the acting out of an online-based alternate persona. Time will tell whether this type of defence will be successful in future trials.
The expansive online presence of potential lone wolf terrorist actors has led to an increased number of individuals being placed under some degree of online surveillance by law enforcement. This has created a pressing concern for law enforcement and security agencies: they need to be able to reliably judge the risk an online actor poses in the offline world. Risk assessment that analyzes online behaviour can provide a critical perspective that aids law enforcement and national security agencies in better understanding the risk posed by extremists found in cyberspace. However, as demonstrated, this method is not without its limitations. Assessing an individual from standalone online materials can be problematic: the experience of being online may lead individuals to conduct themselves in a manner that is not representative of their offline self. Further, it has been demonstrated that indicators of risk are not necessarily sufficient to convict a potential terrorist, and that in some cases even conveying direct threats over social media may not lead to a conviction. Online-based risk assessment should not be viewed as a “magic bullet”; it is best suited to aiding law enforcement and security agencies in identifying potential terrorists, who can then be disrupted or arrested using conventional investigative methods. It must be understood, however, that the modern threat of lone actor terrorism does not always afford law enforcement the time to use conventional investigative methods. The case of Aaron Driver is an example of this. Driver posted a “martyrdom” video prior to his failed attack in Strathroy, Ontario. The FBI became aware of Driver’s video and informed the RCMP at 8:30 am the morning of the attack. At 4:30 pm that same day, Driver entered a taxi with an explosive device, intent on killing Canadians.46 Despite the limitations this article has outlined, online-based risk assessment is a powerful intelligence tool.
The laws related to it need to come into better harmony with the current terrorist climate Canada faces. In this way we can ensure that law enforcement have the tools they need to keep Canadians safe and to intervene before “twitter fingers turn to trigger fingers”.
Amble, John Curtis. “Combating terrorism in the new media environment.” Studies in Conflict & Terrorism 35, no. 5 (2012): 339-353.
Bartlett, Jamie, and Carl Miller. “The state of the art: A literature review of social media intelligence capabilities for counter-terrorism.” Demos 4 (2013).
Bell, Stuart. Edmonton attack: How police decide who is a terrorist threat and who isn’t. Global News. (2017, October 2), accessed multiple times https://globalnews.ca/news/3780473/edmonton-attack-how-police-decide-who-is-a-terrorist-threat-and-who-isnt/
Borum, Randy, Robert Fein, Bryan Vossekuil, and John Berglund. “Threat assessment: Defining an approach to assessing risk for targeted violence.” Behavioral Sciences & the Law 17, no. 3 (1999): 323.
Borum, Randy. “Radicalization into violent extremism I: A review of social science theories.” Journal of Strategic Security 4, no. 4 (2011): 7-36
Borum, Randy. “Assessing risk for terrorism involvement.” Journal of Threat Assessment and Management 2, no. 2 (2015): 63-87
Bouchard, Martin, Kila Joffres, and Richard Frank. “Preliminary analytical considerations in designing a terrorism and extremism online network extractor.” In Computational models of complex systems, pp. 171-184. Springer, Cham, 2014.
Brachman, Jarret M. “High-tech terror: Al-Qaeda’s use of new technology.” Fletcher F. World Aff. 30 (2006): 149.
Brachman, Jarret M. “My Pen Pal, the Jihadist.” Foreign Policy, 29. (2010), accessed multiple times, http://foreignpolicy.com/2010/07/29/my-pen-pal-the-jihadist/
Carter, Joseph A., Shiraz Maher, and Peter R. Neumann. “# Greenbirds: Measuring Importance and Influence in Syrian Foreign Fighter Networks.” (2014).
Center on National Security. “Case by Case, ISIS Prosecutions in the United States.” Fordham Law (2016), accessed multiple times http://centeronnationalsecurity.squarespace.com/research.
Cherry, Paul. “Drug Dealer Tied to Hells Angels Ordered to Turn over DNA Because of Facebook Posting.” Montreal Gazette, (2018 September 7), accessed multiple times http://montrealgazette.com/news/drug-dealer-tied-to-hells-angels-ordered-to-turn-over-dna-because-of-facebook-posting.
Chung, Cindy K., and James W. Pennebaker. “Using computerized text analysis to assess threatening communications and behavior.” Threatening communications and behavior: Perspectives on the pursuit of public figures (2011): 3-32.
Cohen, Katie, Fredrik Johansson, Lisa Kaati, and Jonas Clausen Mork. “Detecting linguistic markers for radical violence in social media.” Terrorism and Political Violence 26, no. 1 (2014): 246-256.
Cook, Alana N., Stephen D. Hart, and P. Randall Kropp. Multi-level guidelines for the assessment and management of group-based violence. Burnaby, Canada: Mental Health, Law, & Policy Institute, Simon Fraser University (2013).
Davis, Kerry. Can the US military fight a war with Twitter? Computerworld. (2012, Nov 8), accessed multiple times http://www.computerworld.com/article/2493445/social-media/can-the-us-military-fight-a-war-withtwitter-.html
Douglas, Kevin S., Stephen D. Hart, Christopher D. Webster, Henrik Belfrage, Laura S. Guy, and Catherine M. Wilson. “Historical-clinical-risk management-20, version 3 (HCR-20V3): development and overview.” International Journal of Forensic Mental Health 13, no. 2 (2014): 93-108.
Feldman, Ronen. “Techniques and applications for sentiment analysis.” Communications of the ACM 56, no. 4 (2013): 82-89.
Ghiassi, Manoochehr, James Skinner, and David Zimbra. “Twitter brand sentiment analysis: A hybrid system using n-gram analysis and dynamic artificial neural network.” Expert Systems with Applications 40, no. 16 (2013): 6266-6282.
Guay, J. P. Predicting Recidivism with Street Gang Members. Ottawa: Public Safety Canada (2012).
Guy, Laura S. “Performance indicators of the structured professional judgment approach for assessing risk for violence to others: A meta-analytic survey.” PhD diss., Dept. of Psychology-Simon Fraser University (2008).
Hart, Stephen D. “Preventing violence: The role of risk assessment and management.” In A. C. Baldry & F. W. Winkel (Eds.), Intimate partner violence prevention and intervention: The risk assessment and management approach, pp. 7-18. Hauppauge, NY: Nova Science Publishers (2008).
Horgan, John, and Max Taylor. “Disengagement, de-radicalization, and the arc of terrorism: Future directions for research.” Jihadi terrorism and the radicalisation challenge: European and American experiences (2011): 173-186.
Hui, Jennifer Yang. “Social Media Analytics for Intelligence and Countering Violent Extremism.” In Combating Violent Extremism and Radicalization in the Digital Era, pp. 328-348. IGI Global (2016).
Lloyd, Monica, and Christopher Dean. “The development of structured guidelines for assessing risk in extremist offenders.” Journal of Threat Assessment and Management 2, no. 1 (2015): 40.
Long, Matthew L., Laurence A. Alison, and Michelle A. McManus. “Child pornography and likelihood of contact abuse: A comparison between contact child sexual offenders and noncontact offenders.” Sexual Abuse 25, no. 4 (2013): 370-395.
“Man Killed in Strathroy, Ont., Was Planning ‘imminent’ Attack, Police Say | CBC News.” CBC news. (2016 August 12), accessed multiple times https://www.cbc.ca/news/canada/aaron-driver-imminent-attack-1.3716997.
McCauley, Clark, and Sophia Moskalenko. “Mechanisms of political radicalization: Pathways toward terrorism.” Terrorism and political violence 20, no. 3 (2008): 415-433.
McManus, Michelle Ann, M. L. Long, and L. Alison. “Child pornography offenders: towards an evidenced-based approach to prioritizing the investigation of indecent image offences.” (2011).
Meloy, J. Reid, Kris Mohandie, James L. Knoll, and Jens Hoffmann. “The concept of identification in threat assessment.” Behavioral sciences & the law 33, no. 2-3 (2015): 213-237.
Meloy, J. Reid, Jens Hoffmann, Karoline Roshdi, and Angela Guldimann. “Warning behaviours and their configurations across various domains of targeted violence.” (2014a): 39-53.
Meloy, J. Reid, Jens Hoffmann, Karoline Roshdi, and Angela Guldimann. “Some warning behaviors discriminate between school shooters and other students of concern.” Journal of Threat Assessment and Management 1, no. 3 (2014b): 203-211.
Meloy, J. Reid, and Jessica Yakeley. “The violent true believer as a “lone wolf”–psychoanalytic perspectives on terrorism.” Behavioral sciences & the law 32, no. 3 (2014): 347-365.
Meloy, J. Reid, Jens Hoffmann, Angela Guldimann, and David James. “The role of warning behaviors in threat assessment: An exploration and suggested typology.” Behavioral sciences & the law 30, no. 3 (2012): 256-279.
Meloy, J. Reid, and Mary Ellen O’Toole. “The concept of leakage in threat assessment.” Behavioral Sciences and the Law 29 (2011): 513-527.
Meloy, J. Reid. “Indirect personality assessment of the violent true believer.” Journal of personality assessment 82, no. 2 (2004): 138-146.
Meet Catalyst. IARPA’s entity and relationship extraction program. (2012, April 4). Public Intelligence, accessed multiple times http://publicintelligence.net/meet-catalyst
Monahan, John. “The individual risk assessment of terrorism.” Psychology, Public Policy, and Law 18, no. 2 (2012): 167-205.
Pennebaker, James W., and Cindy K. Chung. “Computerized text analysis of Al-Qaeda transcripts.” A content analysis reader 453465 (2008).
Pressman, D. Elaine, and Cristina Ivan. “Internet use and violent extremism: A cyber-VERA risk assessment protocol.” In Combating Violent Extremism and Radicalization in the Digital Era, pp. 391-409. IGI Global (2016).
Pressman, D. Elaine, and John Flockton. “Violent Extremist Risk Assessment: Issues and application of the VERA-2 in a high–security correctional setting” (2014). In A. Silke (Ed.), Prisons, terrorism and extremism: Critical issues in management, radicalisation and reform (pp. 122-143). New York: Routledge.
Pressman, D. Elaine. “Risk assessment decisions for violent political extremism.” (2009). Ottawa: Ministry of Public Safety and Solicitor General, Government of Canada, accessed multiple times, https://www.publicsafety.gc.ca/cnt/rsrcs/pblctns/2009-02-rdv/index-en.aspx
R v. Clemente, [1994] 2 S.C.R. (CanLII), accessed multiple times https://www.canlii.org/en/ca/scc/doc/1994/1994canlii49/1994canlii49.html
R v. Hamdan, 2017 BCSC 1770 (CanLII), accessed multiple times https://www.canlii.org/en/bc/bcsc/doc/2017/2017bcsc1770/2017bcsc1770.html?searchUrlHash=AAAAAQAGSGFtZGFuAAAAAAE&resultIndex=1
R v. Hayes, 2017 SKPC 8 (CanLII), accessed multiple times https://www.canlii.org/en/sk/skpc/doc/2017/2017skpc8/2017skpc8.html?searchUrlHash=AAAAAQAXUi4gdi4gSGF5ZXMsIDIwMTcgU0tQQyAAAAAAAQ&resultIndex=1
R v. Le Seelleur, 2014 QCCQ 12216 (CanLII), accessed multiple times https://www.canlii.org/en/qc/qccq/doc/2014/2014qccq12216/2014qccq12216.html?searchUrlHash=AAAAAQARUiB2LiBMZSBTZWVsbGV1ciAAAAAAAQ&resultIndex=1
R v. Lee, 2010 ONCJ 291 (CanLII), accessed multiple times https://www.canlii.org/en/on/oncj/doc/2010/2010oncj291/2010oncj291.html?resultIndex=1
R v. McRae, 2013 SCC 68 (CanLII), accessed multiple times https://www.canlii.org/en/ca/scc/doc/2013/2013scc68/2013scc68.html?searchUrlHash=AAAAAQASUi4gdi4gTWNSYWUsIDIwMTMgAAAAAAE&resultIndex=17
R v. Sather, 2008 ONCJ 98 (CanLII), accessed multiple times https://www.canlii.org/en/on/oncj/doc/2008/2008oncj98/2008oncj98.html?searchUrlHash=AAAAAQANUi4gdi4gU2F0aGVyIAAAAAAB&resultIndex=1
Reilly, Ryan. J. FBI: When it comes to @ISIS Terror, Retweets=Endorsements which makes Twitter one of the Bureau’s best informants. Huffington Post. (2015, August 8), accessed multiple times http://www.huffingtonpost.com/entry/twitter-terrorism-fbi_55b7e25de4b0224d8834466e
Roberts, Karl, and John Horgan. “Risk assessment and the terrorist.” Perspectives on Terrorism 2, no. 6 (2008): 3-9.
Sarma, Kiran M. “Risk assessment and the prevention of radicalization from nonviolence into terrorism.” American Psychologist 72, no. 3 (2017): 278.
Scrivens, Ryan, Garth Davies, and Richard Frank. “Searching for signs of extremism on the web: an introduction to Sentiment-based Identification of Radical Authors.” Behavioral Sciences of Terrorism and Political Aggression 10, no. 1 (2018): 39-59.
Sageman, Marc. Leaderless jihad: Terror networks in the twenty-first century. University of Pennsylvania Press (2011).
Shortland, Neil D. ““On the Internet, Nobody Knows You’re a Dog”: The Online Risk Assessment of Violent Extremists.” In Combating Violent Extremism and Radicalization in the Digital Era, pp. 349-373. IGI Global (2016).
SITE Intelligence Group. “Weekly inSITE report: The Islamic State.” February 6, 2018, accessed February 8, 2018, https://ent.siteintelgroup.com/Weekly-inSITE-on-Islamic-State/weekly-insite-on-the-islamic-state-jan-31-feb-6-2018.html
Singh, Jay P., and Seena Fazel. “Forensic risk assessment: A metareview.” Criminal Justice and Behavior 37, no. 9 (2010): 965-988.
Suler, John. “The online disinhibition effect.” Cyberpsychology & behavior 7, no. 3 (2004): 321-326.
Webster, Christopher D., Kevin Douglas, D. Eaves, and Stephen D. Hart. HCR-20: Assessing risk for violence, Version 2. Burnaby, Canada: Mental Health, Law, & Policy Institute, Simon Fraser University (1997).
Weimann, Gabriel. “Terror on facebook, twitter, and youtube.” The Brown Journal of World Affairs 16, no. 2 (2010): 45-54.
- Suler (2004), 321.
- Ibid., 322.
- Ibid., 323.
- Ibid., 324.
- Ibid., 321.
- See: Shortland (2016).
- Hui (2016), 343.
- Ibid., 342.
- Ibid., 343.
- Shortland (2016), 361-362.
- Ibid., 362.
- Ibid., 354.
- Ibid., 361.
- Guy (2008), 2-4.
- Pressman and Ivan (2016), 405.
- R v. Lee (2010), Para 7.
- Ibid., Para 18.
- Cohen et al. (2014), 248
- Ibid., Para 23.
- R v. Hamdan (2017), Para 81.
- Ibid., Para 2.
- Ibid., Para 191 -195.
- Cherry (Sept 2018)
- R v. Le Seelleur (2014), Para 2
- Ibid., Para 3
- Ibid., Para 14
- R. v. Hayes (2017), Para 3
- Ibid., Para 15
- Ibid., Para 32
- Ibid., Para 43
- R. v. McRae (2013), Para 17 & Para 23; R. v. Clemente, (1994) 2 S.C.R.
- R. v. Sather (2008), Para 1
- Ibid., Para 7
- Ibid., Para 9
- Ibid., Para 10
- Suler (2004), 321
- R. v. McRae (2013), Para 17 & Para 23; R. v. Clemente, (1994) 2 S.C.R.
- CBC News (Aug 2016)