The below is Part One of Two of one of the winning essays from our inaugural “Langley Hope Academic Excellence in Security and Defence Commentary Award Programme.” Stay tuned in the coming weeks for the publication of more winning and noteworthy submissions.
Identifying the threat posed by individual terrorist actors is a significant challenge for law enforcement and national security agencies. Recent years have seen an exponential increase in internet media used by violent extremists. In particular, lone actor terrorists, such as those inspired by international Islamic extremist groups, are known to use social media to announce their intentions in advance of attacks. Using the internet to uncover signs of extremists who will mobilize to violence is one of the most significant policy issues faced by governments and national security agencies globally. Although various methods have been developed to better understand the risk of violence online actors may pose, a critical limitation remains unaddressed in almost all of the related risk assessment literature: the inability to accurately link online behaviour to offline actions. Further, an individual’s terrorism-related online actions may not satisfy criminal courts in Canada, posing a challenge for law enforcement. This paper is the first of its kind to explore these issues. Ultimately, the research will demonstrate both the value and the limitations of the most current analytical methods used to assess which online extremists pose the greatest risk of mobilization to terrorist violence.
Communication is at the very heart of terrorism. This notion is of particular significance when considering the ubiquitous use of social media by terrorist groups. All active terrorist groups have established at least some degree of presence on the internet, and most operate across all of the major social media platforms.1 Recent years have seen an exponential increase in websites, social media, blogs, and message boards advocating violent extremism; this is especially true of jihadist content.2 Inspiring individuals to conduct autonomous or semi-autonomous terrorist attacks in the name of the larger global Salafi jihadist movement has become the status quo for Islamic terrorist organizations.
A July 2016 review of U.S. federal court documents conducted by the Center on National Security at Fordham Law revealed that from March 1, 2014 to June 30, 2016 there were 101 cases of terrorism offences linked to Islamic extremists within the U.S.3 Of those 101, 42 cases were domestic attacks or plots to engage in domestic attacks.4 The report found that 60% of terrorist offenders consumed jihadist influencing material featuring al-Baghdadi, al-Awlaki, and/or bin Laden, and 22% used the internet to publicly pledge their allegiance to a terrorist group. Further, one third of the total 101 cases came to the attention of law enforcement through social media.5 Domestic lone actor attacks are expected to continue in the coming years, as groups like the “Islamic State” and al-Qaeda continue to use their hard-earned influence and globally recognised radical-Islamist brand to aggregate radical lone extremists and inspire them to conduct future attacks. This is supported by a report published in February 2018 by SITE Intelligence Group,6 which states that “the Islamic State (IS) and its media affiliates have shifted from a long-standing focus on ‘hijrah’ (migration) to IS territories, instead focusing dominantly on lone wolf attacks in enemy nations.”7
The growth of internet use by terrorists has generated a significant body of research aimed at better understanding how extremists use online communication mediums.8 This research has identified several different ways terrorists and terrorist organisations use, or have used, the internet and social media. The most common are spreading propaganda and communicating ideological narratives. However, terrorists have leveraged social media to do much more than spread their ideological message in order to gain new recruits. Lone wolf terrorists, such as those inspired by international Islamic extremist groups, are known to use social media to announce their intentions in advance of attacks. These announcements often take the form of outright public declarations of violence, manifestos, or public outbursts of grievances.9 Now more than ever, understanding this phenomenon is of particular importance to law enforcement and national security agencies. Uncovering signs of violent extremists online has become one of the most significant policy issues faced by governments and national security agencies globally.10 It is no surprise that exploiting Social Media Intelligence (SOCMINT) is one method used by security agencies to identify individuals who pose a risk of committing lone wolf terrorist acts. Online activity and behaviour have been shown to provide rich risk-related data on the views, beliefs, attitudes, grievances, intentions and ideologies of identifiable extremists.11 The expansive web presence of individuals linked to terrorist groups has led to an increasing number of individuals who are under some degree of online surveillance by law enforcement, or who require closer monitoring.12 This has created a pressing concern for law enforcement and security agencies, as they need to be able to reliably judge the risk an online actor poses in the offline world.
In other words, they need to accurately assess when “twitter fingers will turn to trigger fingers.”
This is of significant importance, as not all extremists, nor all those who identify with larger terrorist groups, are at risk of engaging in violent acts.13 Holding an extremist ideology, or being socially engaged in a terrorist group, does not necessarily put an individual on a trajectory toward terrorism-related violence. In fact, most people who hold an extremist ideology do not engage in violence.14 Moreover, even those found to be both consumers and producers of violent extremist media are not necessarily on a trajectory toward violent acts.15
Risk and Threat
Understanding an individual’s risk is inherently important to understanding threat. Extremists who present a high risk of violent action often also represent a high level of threat. The concept of threat is often confused with risk. Risk is the probability that an individual will engage in an action intended to cause harm (risk = probability of danger × severity of the harm); in this way risk can be seen as the probability of acting on intent.16 Threat is a function of an individual’s intent and capability (threat = intention × capacity). Threat often co-exists with risk,17 but risk can be seen as a precursor to threat. That is to say, if an individual is motivated to engage in harm, they are far more likely to seek out the means to be capable of engaging in that harm. This has led researchers to develop various psychometric and behavioural analysis tools that better allow law enforcement and the justice system to distinguish criminal actors who pose a risk of engaging in violence from those who do not.18 This type of analysis has been adapted to fit a national security context, aiding in the ability to discriminate between extremists who pose a risk of mobilizing to violence and those who do not. However, using an online method of risk assessment – in the context of law enforcement – to demonstrate that an extremist is at high risk of engaging in violence holds significant challenges. Exploring and analysing these challenges is the primary focus of this article.
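The two conceptual formulas above can be illustrated with a minimal sketch. Note this is purely illustrative: the paper presents risk and threat as conceptual relationships, not numeric models, and the 0–1 rating scale and example values here are hypothetical assumptions.

```python
# Illustrative sketch only: the source gives risk and threat as conceptual
# formulas, not calibrated numeric models. Scales and values are hypothetical.

def risk_score(probability_of_danger: float, severity_of_harm: float) -> float:
    """Risk = probability of danger x severity of the harm."""
    return probability_of_danger * severity_of_harm

def threat_score(intention: float, capacity: float) -> float:
    """Threat = intention x capacity."""
    return intention * capacity

# A hypothetical actor judged highly motivated (intention 0.9) but with little
# current capability (capacity 0.2) yields a low threat score, even though
# risk, as a precursor, may already be elevated.
print(threat_score(0.9, 0.2))
print(risk_score(0.9, 0.8))
```

The multiplicative form captures the intuition in the text: if either factor is near zero (no intent, or no capability), the overall threat is near zero regardless of the other factor.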
This paper will outline two of the most current analytical methods developed to assess which online extremist actors are at greatest risk of mobilization to terrorist violence. A third method of identifying extremists online will also be discussed. Following this, a review of current social-psychology literature on online behaviour – most notably the Online Disinhibition Effect – will be presented. This literature outlines how an individual’s online behaviours may not be indicative of that individual’s true offline opinions and behaviours. Of note, none of the literature on the current online analytical methods for assessing mobilization to terrorist violence takes into consideration the disconnect between online and offline behaviours. This article will therefore demonstrate the limitations of the current analytical methods. Further, the paper will explore how an individual’s online high-risk actions or behaviours may not satisfy criminal courts in legal proceedings. Ultimately, this article will demonstrate both the value and the limitations of the most current analytical methods used to assess which online extremists pose the greatest risk of mobilization to terrorist violence.
Cyber Risk Assessment
In recent years there have been significant advancements in the development of violence risk assessment instruments specifically constructed to appraise terrorist or extremist violence. Traditionally, the discipline of violence risk assessment functions within a criminal justice and forensic mental health context, and over the past 50 years it has extensively evolved and progressed.19 In 2010, scholars conducting a meta-review of risk assessment instruments identified a total of 126 structured tools designed to forecast an individual’s likelihood of engaging in criminal violence.20 These assessments employ a systematic set of empirically or theoretically derived risk indicators, developed from social-psychological research. The risk indicators evaluate both static and dynamic factors, which serve as markers of future violence. A large body of literature supports the effectiveness of traditional methods of violence risk assessment in accurately forecasting future violence in criminal populations.21 For example, one study found that a commonly used gang violence risk assessment tool was statistically predictive of violent criminal recidivism, demonstrating significance of p < .01 for new violent arrests in gang populations.22
The success of the violence risk assessment approach in the non-ideological criminal field has led to the development of instruments specifically designed for the extremist population. The first and most prominent of these is the Canadian-developed Violent Extremist Risk Assessment (VERA) approach, now in its third version, the VERA-2R.23 The VERA model uses the Structured Professional Judgment (SPJ) methodology, which is deemed the most appropriate methodology for assessing terrorist actors.24 Other risk assessment methodologies, including actuarial and unstructured clinical analysis, are seen as less effective in assessing violent extremism.25 The success of the VERA has led to its adoption by various law enforcement and correctional agencies, including the Royal Canadian Mounted Police’s national security and counter-terrorism units.26 In addition to the VERA, only one other defined violence risk assessment tool has been developed to appraise extremists: the Multi-Level Guidelines (MLG), also Canadian developed.27 Comparatively little has been published on the MLG, and it has yet to be publicly adopted by any government agency. However, the MLG has demonstrated the potential to be highly valuable in extremist-related risk assessment. It should be noted that neither of these risk assessment tools is intended to evaluate those in the general population who are at risk of becoming extremists; they are not “radicalization assessment instruments”. Rather, these instruments are designed to determine which members of the extremist population (those who already hold a radicalized ideology) pose the greatest risk of engaging in extremist-related violent acts.
As stated, lone wolf terrorists, such as those inspired by international Islamic extremist groups, are known to use social media to announce their intentions in advance of attacks. Moreover, online activity and behaviour have been shown to provide rich risk-related data on the views, beliefs, attitudes, grievances, intentions and ideologies of identifiable extremists.28 As a result, the VERA (and its subsequent versions) can integrate online behavioural observations as a means of assessing an individual’s violence risk.29 However, with an increased number of potentially violent extremists under internet surveillance by law enforcement,30 the VERA’s developer realized a cyber-specific instrument would hold more utility and reliability when conducting a standalone internet-based assessment of risk.31 The developer of the VERA filled this analytical gap by creating a dedicated online extremist risk assessment instrument: the Cyber Extremist Risk Assessment (CYBERA). The CYBERA was developed to provide a more discerning and insightful analysis that is “empirically grounded, flexible and practical” when assessments are completed using standalone cyber-related activities.32
The CYBERA’s risk indicators were developed in a manner consistent with the methodology of other reputable SPJ assessment instruments, using “an established and accepted behavioural science methodology.”33 This means the CYBERA’s indicators are derived from evidence-based empirical data in the areas of radicalisation, violent extremism, national security and cyber analysis.34 The CYBERA consists of six categories, each containing a separate set of risk indicators: (1) imagery; (2) semantic content; (3) beliefs, attitudes, intention; (4) virtual social network context; (5) individual online activity related to capacity; and (6) leadership, organisation, skills. In total the instrument contains 25 separate risk indicators, each coded as low, moderate, or high.35 The CYBERA is designed to present objective evidence-based information and employs a structured and rigorous method intended to limit subjectivity and bias in analysis. However, the CYBERA is still not considered unequivocally scientifically objective.36
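To make the structure concrete, the coding scheme described above can be sketched as a simple data model. This is a hypothetical illustration only: the six category names come from the paper, but the example indicator names and the tallying function are invented assumptions, not the actual CYBERA instrument or its scoring rules.

```python
# Hypothetical sketch of recording CYBERA-style SPJ indicator codings.
# Category names are from the source; indicator names and the tally are
# illustrative assumptions, not the real instrument.

CATEGORIES = [
    "imagery",
    "semantic content",
    "beliefs, attitudes, intention",
    "virtual social network context",
    "individual online activity related to capacity",
    "leadership, organisation, skills",
]

RATINGS = ("low", "moderate", "high")

def summarise(codings: dict) -> dict:
    """Tally how many coded indicators fall at each level. SPJ tools inform
    professional judgment; no numeric total or cut-off score is implied."""
    counts = {level: 0 for level in RATINGS}
    for indicator, level in codings.items():
        if level not in RATINGS:
            raise ValueError(f"{indicator}: rating must be one of {RATINGS}")
        counts[level] += 1
    return counts

# Example with two hypothetical indicator codings:
print(summarise({"violent imagery": "high", "out-group rhetoric": "moderate"}))
```

Keeping the output as a per-level tally, rather than a single summed score, mirrors the SPJ point made in the text: the instrument structures professional judgment rather than producing a statistical prediction.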
Conclusory analysis based on risk assessment tools (including the CYBERA) is seen as structured professional opinion. The intent of tools like the CYBERA is to support robust decision making; “users should not claim statistical certainty in the risk assessment nor absolute prediction.”37 Thus, in most cases, law enforcement’s decision to arrest and prosecute a suspected terrorist cannot be based solely on conclusory opinions predicated on risk assessment – regardless of the risk to the public, and regardless of how imminent the threat is. An exception may exist in Section 83.3 of the Criminal Code of Canada, under which law enforcement can arrest a person if they believe the arrest is likely to prevent the carrying out of a terrorist activity. The legal applications of cyber or online risk assessment will be explored in more detail later in the paper.
Cyber Warning Behaviours Analysis
Structured Professional Judgment violence risk assessment is just one method developed for assessing which online extremist actors are at the greatest risk of mobilization to terrorist violence. In recent years there have been advancements in assessing online extremist risk that fall outside the parameters of these instruments. This work traces its roots to a large body of social-psychological research conducted by Meloy and colleagues over the past decade and a half.38 This research found that acts of public violence, including attacks by lone wolf terrorists, are often signalled in the pre-attack stage by detectable behavioural markers, defined as warning behaviours. “As such, warning behaviours can be viewed as indicators of increasing or accelerating risk”.39 Meloy (2011), Meloy and O’Toole (2011), Meloy et al. (2012), and Meloy, Hoffmann, Roshdi, and Guldimann (2014a) have identified eight different warning behaviours for targeted or intended violence: (1) pathway, (2) fixation, (3) identification, (4) novel aggression, (5) energy burst, (6) leakage, (7) last resort, and (8) directly communicated threat.
Building on Meloy and colleagues’ research into warning behaviours, Cohen and colleagues40 sought to identify behavioural markers for extremist violence in social media content that would be linked to warning behaviours. The authors determined that three of the eight warning behaviours hold the highest potential of being identified from textual content in social media when looking to identify lone extremists who may be on a trajectory toward violence: (1) leakage, (2) fixation, and (3) identification.
Leakage can be defined as communication of an intent to engage in violence to a third party. This “usually infers a preoccupation with the target and may signal the research, planning, and/or implementation of an attack.” 41 Leakage can be both intentional and unintentional communication and can include both specific and unspecific content related to the act.42 In one study on school shootings, the occurrence of pre-attack social media leakage ranged from 46% to 67%.43
Fixation can be defined as a pathological preoccupation with a person or a cause.44 As Cohen and colleagues state, “the fixated person expresses a preoccupation with the group or person considered responsible for the subject’s grievance by allocating large amounts of time to discussing, theorising about, or studying the perceived enemy.”45
Identification consists of behaviour indicating a desire to be a pseudo-commando, or to identify oneself as an agent advancing a particular cause.46 This can be divided into three subcategories: identification with radical action, where individuals identify themselves as warriors justified in using violence to promote the cause; identification with a role model, where individuals identify themselves as leaders or teachers of a cause;47 and, lastly, group identification, where individuals identify themselves as members of a group, regardless of whether they are officially members of that group. This can be observed through speech or text indicating strong group norms, collectivistic values, moral commitment, and strong negative identification with the out-group.48
Cohen and colleagues proposed a set of linguistic markers (words and phrases) that could be used to identify each of the three warning behaviours – an approach referred to as sentiment analysis.49 Sentiment analysis is used mostly in consumer research to analyze large amounts of social media for product reviews and attitudes toward events.50 While analyzing extremist content is a relatively novel application of sentiment analysis, it nevertheless follows the same principles and can therefore be seen as methodologically sound.
In Cohen and colleagues’ model, the linguistic markers set to identify each of the three warning behaviours would be searched for using a web crawler able to navigate websites, forums, or other kinds of social media.51 Cohen and colleagues’ study never implemented their linguistic marker model; rather, they proposed the idea, supported by theoretical evidence, and in doing so laid the groundwork for future research. Although this proposed analysis tool is similar to the CYBERA – in that it is grounded in theoretical and empirical social-psychological evidence – it differs in two critical pragmatic factors. First, the Cohen et al. tool actively patrols the internet looking to identify those who may pose an elevated risk of mobilizing to violence, as opposed to the CYBERA, which requires a suspected terrorist to already be identified before being assessed for their level of risk. In this way the Cohen tool provides the utility of being able to “cast a net” into the vast body of the internet and identify extremists who may be at an elevated risk of mobilization to violence. This is a major strength of Cohen’s tool. Scholars have argued that successfully identifying online signs of extremism is a critical first step in reacting to them.52 Second, the Cohen et al. tool uses computer-based algorithms to identify linguistic markers related to risk, functioning in a methodology similar to actuarial risk assessment tools. This is dissimilar to the CYBERA, which uses an SPJ methodology.
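The core of the proposed approach, matching linguistic markers in harvested text against the three warning-behaviour categories, can be sketched as follows. The marker phrases below are invented placeholders for illustration only; they are not Cohen and colleagues’ actual markers, and a real system would need far richer linguistic handling.

```python
# Sketch of the keyword-matching step a Cohen-style crawler pipeline implies.
# The marker phrases are invented placeholders, not the authors' actual lists.

MARKERS = {
    "leakage": ["i will attack", "you will see what i do"],
    "fixation": ["they are responsible", "always them"],
    "identification": ["i am a soldier of", "we are the warriors"],
}

def flag_warning_behaviours(post: str) -> set:
    """Return the warning-behaviour categories whose markers appear in a post."""
    text = post.lower()
    return {
        behaviour
        for behaviour, phrases in MARKERS.items()
        if any(phrase in text for phrase in phrases)
    }

# A crawler would apply this to each harvested post, surfacing authors whose
# texts trigger one or more of the three categories for closer review.
print(flag_warning_behaviours("I am a soldier of the cause; they are responsible."))
```

This also makes the actuarial-versus-SPJ contrast in the text tangible: the function mechanically flags matches with no room for an evaluator’s contextual judgment, which is precisely the rigidity critics attribute to actuarial-style tools.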
Actuarial instruments have been criticized for requiring isolated empirical evidence.53 Further, some researchers see them as limited because they do not allow the evaluator the flexibility to take situational and contextual factors into consideration, and they are thus seen as rigid and restrictive.54 Moreover, leading researchers in terrorism psychology have concluded that the SPJ approach is the most appropriate for assessing the individual violence risk of terrorist actors.55
Cohen and colleagues’ sentiment research on warning behaviour indicators laid the groundwork for computational identification of high-risk extremists online. However, by not putting the theory into practice, their research lacked empirical backing. Canadian researchers Scrivens, Davies, and Frank (2018) built upon this foundation and developed a sentiment analysis tool and algorithm that was put into practice and used to identify extremist authors from four online Islamic discussion forums, comprising a total of approximately one million posts.56 Scrivens and colleagues termed their model the Sentiment-based Identification of Radical Authors (SIRA). Using SIRA, they identified five of the most radical users across the four forums. These individuals “showed a consistent pattern of extremely radical online discourse and a high level of dedication to extremist beliefs.”57 Moreover, the analysis demonstrated internal validity by identifying the same individual across two separate forums. It should be noted that this is not the first research to explore text analysis; other work has been conducted in this field.58 However, the research by Scrivens, Davies, and Frank (2018) is the most recent of this type of analysis to be published in the academic literature.
Scrivens and colleagues’ indicators were developed from an analysis of language contained within the internet forums. The researchers used a language-data method to identify “Parts-of-Speech” (POS) “that had the highest rate of occurrence within the [forum] under the assumption that the most frequently discussed topics would most likely be the ones in which extremist content was likely to be detected.”59 This allowed the researchers to query content into easy-to-sort groups and produce frequency distributions for each word. From this, the researchers used the Java-based software SentiStrength to analyse the data. SentiStrength allows the value of text to be augmented by other associated text, “such as booster words, negative words, repeated letters, repeated negative terms, antagonistic words, punctuation, and other distinctive characters suited for studying an online context.”60 In addition, the researchers were able to evaluate the context surrounding high-frequency keywords. This flexibility to add context was critical for identifying extremist speech.
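The two steps described above, building word-frequency distributions and then scoring text with a lexicon whose values can be amplified by booster words, can be sketched in miniature. The lexicon, booster words, and scoring scheme here are invented for illustration; SentiStrength’s actual dictionaries and rules are considerably more sophisticated, and this is not the SIRA algorithm itself.

```python
from collections import Counter
import re

# Rough illustration of the frequency-plus-booster approach described above.
# Lexicon values and booster words are invented assumptions for this sketch.

NEGATIVE = {"enemy": -2, "destroy": -3}   # hypothetical negative-term weights
BOOSTERS = {"very": 1, "absolutely": 1}   # hypothetical intensifiers

def word_frequencies(posts):
    """Frequency distribution of words across a collection of posts."""
    words = re.findall(r"[a-z']+", " ".join(posts).lower())
    return Counter(words)

def sentiment(post):
    """Negative-sentiment score; a booster word deepens the score of the
    negative term that immediately follows it."""
    tokens = re.findall(r"[a-z']+", post.lower())
    score, boost = 0, 0
    for token in tokens:
        if token in BOOSTERS:
            boost += BOOSTERS[token]
        elif token in NEGATIVE:
            score += NEGATIVE[token] - boost
            boost = 0
        else:
            boost = 0
    return score

print(sentiment("We must absolutely destroy the enemy"))  # -6: boosted "destroy" (-4) plus "enemy" (-2)
```

Frequency distributions point the analyst at the most-discussed topics, while the boosted lexicon scoring assigns an intensity to individual posts; SIRA’s contribution, per the text, was applying this kind of pipeline at forum scale to rank authors.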
Scrivens and colleagues demonstrated a proof of concept that online content can be mined to identify extremists who demonstrate a “consistent pattern of extremely radical online discourse and a high level of dedication to extremist belief.”61 Further, the research has the potential to identify cyber communities (forums, Facebook groups, blogs, etc.) that contain the most radical users. It must be noted that SIRA was developed to capture content indicating that an online actor is radical or extreme in ideology. Its indicators and methodology were not developed from social-psychological research demonstrating a probability of extremist violence; thus, it cannot be considered a violence risk analysis tool. This is explicitly asserted by the authors, who state that the “purpose of this project was to attempt to develop an innovative technique to measure the online behavior of radical users…. By no means does this study or its results imply… [to] detect radical users who may engage in an act of violent extremism.”62 Regardless, SIRA is a significant first step in demonstrating how linguistic markers of extremist online actors can be reliably captured and analysed. The study also has the potential to identify those actors who are likely to inspire others to mobilize to violence.
Analysis of Online Based Behaviours and How They Relate to Off-line Behaviour
This article has examined some of the most current methods used to analyze online content in order to identify extremists and better forecast which extremists pose the greatest risk of mobilizing to terrorist violence. However, cyber-related behaviours (posts, comments, and networks) may not always be indicative of an individual’s offline or “real world” behaviours, views, or ideologies.63 This can greatly reduce the reliability of intelligence gained, despite assessment models being grounded in an evidence-based methodology.
- Weimann (2010), 45.
- Brachman (2006), 16.
- Fordham Law, Center on National Security (2016), 2.
- Ibid., 10.
- Ibid., 19 & 20.
- SITE Intelligence Group is considered one of the world’s leading non-government counterterrorism organizations, which specializes in tracking and analyzing online activity of global terrorist groups including jihadist organizations. https://ent.siteintelgroup.com/Corporate/about-site.html
- SITE Intelligence Group, February 6, 2018.
- See: Amble (2012); Sageman (2008); Seib and Janbek (2011); Weimann (2010).
- See: Meloy (2011); Meloy and O’Toole (2011); Meloy, Hoffmann, Guldimann and James (2012); Meloy, Mohandie, Knoll and Hoffmann (2015).
- Cohen, Johansson, Kaati, and Mork (2014), 247.
- Pressman and Ivan (2016), 393.
- Reilly (2015).
- Borum (2012), 9; Horgan and Taylor (2011), 175.
- Brachman (2010).
- Pressman (2016), 259; Pressman and Ivan (2016), 397.
- See: Borum, Fein, Vossekuil, and Berglund (1999); Borum, (2015); Cook, Hart, and Kropp (2013); Douglas, Hart, Webster, and Belfrage (2013); Hart (2008); Lloyd and Dean (2015); Pressman (2009); Singh and Fazel (2010); Webster Douglas, Eaves, and Hart (1997).
- Borum (2015), 67.
- Singh and Fazel (2010), 972.
- See: Borum et al. (1999); Borum (2015); Guy (2008); Guay (2012); Hart (2008); Singh and Fazel (2010).
- Guay (2012), 17.
- See: Pressman (2009); Pressman and Flockton (2012); Pressman (2016).
- Roberts and Horgan (2008); Monahan (2012), 194.
- Pressman and Ivan (2016), 359; Bell (2017).
- Cook, Hart, and Kropp (2013).
- Pressman and Ivan (2016), 393.
- Pressman and Ivan (2016), 399.
- Reilly (2015).
- Pressman and Ivan (2016), 399.
- Ibid., 405.
- Ibid., 399 & 404.
- Ibid., 400 & 401.
- Ibid., 405 & 406.
- Ibid., 405.
- See: Meloy (2004); Meloy and O’Toole (2011); Meloy et al. (2014a); Meloy et al. (2014b); Meloy et al. (2012); Meloy and Yakeley (2014); Meloy et al. (2015).
- Cohen et al. (2014), 248.
- Cohen et al. (2014).
- Ibid., 248.
- Meloy and O’Toole (2011), 513.
- Cohen et al. (2014), 247.
- Meloy et al. (2012), 366.
- Cohen et al. (2014), 249.
- Meloy et al. (2015), 218.
- Ibid., 221.
- Cohen et al. (2014), 249; McCauley and Moskalenko (2008), 416.
- Cohen et al. (2014), 253.
- See: Feldman (2013); Ghiassi, Skinner, and Zimbra (2013).
- Cohen et al. (2014), 247.
- See: Bouchard, Joffres and Frank (2014).
- Retterberger and Hucker (2011), 90; Sarma (2017), 280.
- Hart (2008), 10-12.
- Monahan (2012), 194; Roberts and Horgan (2008).
- Scrivens, Davies, and Frank (2018), 43.
- Ibid., 44.
- See: Bartlett and Miller (2013); Chung and Pennebaker (2011); Davis (2012); “Meet Catalyst: IARPA” (2012); Pennebaker and Chung (2008).
- Scrivens et al. (2018), 43.
- Ibid., 44.
- Ibid., 52.
- Ibid., 53.
- Suler (2004), 321.