Note: I had already prepared this post to drop today to discuss some of the research I’ve been reviewing and conducting regarding hate speech and aggression. It’s the first in a series, where I explore three to four research papers at a time as they relate to this hypothesis I have. I was about to pull it in light of the U.S. coup happening in real-time today, incited by the exiting President, Donald Trump, but then realized, what better example of speech leading to aggression could we possibly have? I did not edit the post from my original review paper, written this Fall for Fielding, so some “takes” may feel jarringly clinical in light of today. Regardless, here is the first in this series, and the hope that this coup will be marked down in history as an attempt and not a success.
I’ve been exploring an idea in grad school about the role of fear in the shift toward authoritarian political beliefs and aggressive political behavior. As part of this, I’ve been reading many studies on the roles of conspiracy theories, online hate speech, and fear of mortality in shifts in political ideology. In this post my focus will be on four of these. First is the experimental study by Alizadeh, Weber, Cioffi-Revilla, Fortunato, and Macy, “Psychology and morality of political extremists: evidence from Twitter language analysis of alt-right and Antifa” (2019). In this study, the authors examined 10,000 Twitter users whose differing political ideologies could be classified as “extreme right” or “extreme left.” Another 10,000 Twitter users were chosen for their non-partisan posts to act as a control group. By the end of the study, the authors had determined that Moral Foundations Theory could explain the impact of extreme online speech on political views. Three other studies stood out in my initial exploration. One was “Conspiracy theories: Evolved functions and psychological mechanisms” by van Prooijen and van Vugt (2018); the second was “You’re hired! Mortality salience increases Americans’ support for Donald Trump” by Cohen, Solomon, and Kaplin (2017); and the third was “Mortality salience, system justification, and candidate evaluations in the 2012 U.S. Presidential election” by Sterling, Jost, and Shrout (2016). These three studies shed some light on the nuances of political ideological change that could be caused by fear-based narratives; however, the methodology of the Alizadeh et al. (2019) study was more granular.
My hypothesis is that the increased experience of internalized fear as an individual and/or increased exposure to fear through extreme rhetoric and fear-based messaging can push individuals to be farther right politically or to otherwise have increasingly authoritarian political leanings, and in some cases can cause politically charged acts of aggression.
To study the hypothesis that fear correlates with increased conservative and authoritarian political beliefs and increased acts of political aggression, I needed to understand the psychology underpinning three pillars of online political aggression: fear of mortality (Cohen et al., 2017; Sterling et al., 2016), belief in conspiracy theories (van Prooijen & van Vugt, 2018), and the increasing spread of hate speech online (Alizadeh et al., 2019). Alizadeh, Weber, Cioffi-Revilla, Fortunato, and Macy (2019) found that the online speech of right-wing extremists scored high on values often associated with authoritarian-leaning politics, such as obedience to authority, concern with purity, and in-group loyalty. Conspiracy theories, while a potential component of the overall hypothesis, especially in 2020 when online conspiracy farms like QAnon are widely shared on social media and surface in online searches for otherwise mundane topics, remain under-examined in experimental, peer-reviewed studies. Mortality salience offered the most interesting plot twist of the four studies, in that the two mortality salience studies reached slightly different conclusions. In the first (Sterling et al., 2016), the authors did not find a conclusive connection between mortality salience and political ideology in the 2012 election they were studying (Obama vs. Romney). They did note that mortality salience had a profound effect on the results of the 2004 election after the 9/11 terrorist attacks (support for George W. Bush) and suggested a follow-up for the 2016 election. The mortality salience results from the second study (Cohen et al., 2017), conducted during the 2016 Trump vs. Clinton election cycle, show a much clearer relationship between fear (mortality salience, fear of the unknown, and in-group infiltration concerns) and conservative political ideology.
Supporters of Trump responded to fear prompts and death prompts with increased support for Trump, though the authors stopped short of connecting that increased support in response to perceived threats to the increased political violence observed among Trump supporters online and at Trump’s rallies and events. All four studies seem to corroborate the first half of my hypothesis: higher levels of fear correlate with conservative political ideology. No single one of the four studies offered full corroboration on its own, but the Alizadeh et al. (2019) study is worth a deeper look here.
In the study of extremist language on Twitter (Alizadeh et al., 2019), the authors worked with several political categories (leftist, liberal, right-wing, far-left/“Antifa,” and far-right/alt-right) and several variables: follower count, certainty, anxiety, positive and negative emotions, and the five moral constructs from Moral Foundations Theory (fairness, harm avoidance, obedience to authority, ingroup loyalty, and purity). The authors examined “whether or not there exists some psychological and moral variables that covary with political orientation and extremity” (Alizadeh et al., 2019, p. 3). Interestingly, the authors of the 2018 conspiracy theory study (van Prooijen & van Vugt) considered a theory for their research called adaptive-conspiracism, which posits that belief in conspiracy theories might be an adaptive alert system developed for the early detection of fear or threat, an idea that dovetails nicely with some of the theories in the Twitter study. While building their study, the Twitter study authors needed to narrow the criteria from every moral variable ever shown to correlate with political ideology to the handful they could actually measure using the text-based data available on Twitter. This winnowing process left them with measurements for certainty, anxiety, and positive and negative emotions.
The Twitter study authors also measured the text-based data against the five moral constructs of Moral Foundations Theory. One interesting aspect of these four studies was that each touched in some way on one or more of three theories that seemed to align in a similar hypothetical direction: Moral Foundations Theory, System Justification Theory, and Terror Management Theory (Cohen et al., 2017). If further research were done on the findings of the Twitter study, Terror Management Theory would be a beneficial lens to explore, as the authors did not address it in their study. All in all, the authors of the Twitter study (Alizadeh et al., 2019) put forth a total of 21 hypotheses and variations. This stood out to me, especially as the other studies each posited four or fewer hypotheses. Of the four studies explored in this paper, those with a narrower focus on fewer hypotheses all reached more robust conclusions, potentially because of that tighter focus. I noted two other challenges in the Twitter study that I will explore as part of the overall methodology section: one regarding the definition of “Antifa” the authors used, and the other regarding the criteria used to winnow the large Twitter participant base into relevant groups.
The authors of the Twitter study refined their criteria for extremism by focusing on white supremacist and neo-Nazi political ideologies for right-wing extremism and on “Antifa” for left-wing extremism. This poses the first confounding variable in the study: Antifa is not an organization or political party. “Antifa” is often sensationalized this way by politicians and the media, but it is simply “antifa,” an ideological stance against fascism. The FBI has corroborated this in its ongoing tracking of politically extremist groups (Tucker & Fox, 2020). The authors focused solely on the United States for their political ideologies and selection of Twitter accounts. They used publicly available data from the hate group tracking database maintained by the Southern Poverty Law Center to narrow the list of Twitter accounts correlated to right-wing extremism to 25 active hate groups. For “Antifa,” the authors relied on manual Twitter searches to find popular “official and local chapters of the movement.” This is where the confounding variable of the erroneous definition of “Antifa” has an impact and potentially skewed the study results. Because this manual search netted them only 16 accounts, they filled out the number to 25 by sourcing “friends of” the 16 “Antifa” accounts, vetting them using k-core decomposition of the friendship network. At this point another confounding variable edges into play: the authors of the Twitter study (Alizadeh et al., 2019) narrowed their list using methodology that displayed a lack of understanding of the unique mechanics of Twitter use and Twitter users, removing accounts that were more likely to be human and keeping accounts with characteristics more typical of bots, fake accounts, or spam.
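For readers unfamiliar with k-core decomposition: it prunes a network down to the subgraph in which every remaining node has at least k connections to other remaining nodes. Here is a minimal sketch of the idea using networkx; the toy friendship network and account names are invented for illustration, not data from the study.

```python
# Sketch of k-core vetting of a "friends of" network. The accounts and
# edges below are hypothetical stand-ins, not the study's actual data.
import networkx as nx

# Toy undirected friendship network among candidate accounts
G = nx.Graph()
G.add_edges_from([
    ("acct_a", "acct_b"), ("acct_a", "acct_c"), ("acct_b", "acct_c"),
    ("acct_c", "acct_d"), ("acct_d", "acct_e"),  # loosely attached accounts
])

# The k-core keeps only accounts with at least k connections *within*
# the retained subgraph, iteratively discarding peripheral accounts.
core = nx.k_core(G, k=2)
print(sorted(core.nodes()))  # ['acct_a', 'acct_b', 'acct_c']
```

Accounts "acct_d" and "acct_e" drop out because, once the periphery is peeled away, they no longer have two connections inside the remaining subgraph. Note that this filter rewards dense mutual connectivity, which is exactly the property the paragraph above suggests can favor coordinated bot or spam clusters over ordinary human accounts.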
To test this particular variable under peer review, I would suggest re-running the study with different criteria for Twitter user selection and comparing the results against the original as one path to external validation.
The authors then designed a control group composed of accounts that followed the five most liberal and five most conservative US Senators, based on their 2018 DW-NOMINATE scores. Mutual followers of Senators from both ideologies were excluded. This is another facet of the confounding variable of user selection criteria mentioned above: human users of Twitter are more likely to follow accounts from various points of view, whereas ‘bot’ and spam accounts are often found following a set of users that could be considered myopic in focus (Fitton et al., 2009). Accounts following fewer than three Senators from each category were also excluded. Once these criteria were set, the authors used the two master lists to build their participant base for each political ideology. This netted them 50,000 accounts from each ideology, which they further narrowed to 5,000 left-wing and 5,000 right-wing participants. The authors also preprocessed the participants’ tweets in a way similar to how they screened the participants themselves, attempting to weed out noise such as retweets (the resharing of someone else’s post on Twitter), URLs, username mentions, and other punctuation. They controlled for temporal variation by limiting the tweet set to the most recent three months. The authors then parsed the tweet sample through the Linguistic Inquiry and Word Count (LIWC) lexicon to measure words related to the variables of certainty, anxiety, and positive and negative emotions. The study mentioned that “the first author” did additional work to verify the accuracy of the LIWC lexicon for extremist language, using a 7-point Likert-type scale on 100 tweets from the sample, but did not explain this in detail, nor why they chose a Likert scale, or why only the first author participated in this portion of the study.
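To make the preprocessing and word-counting steps concrete, here is a rough sketch of that kind of pipeline: stripping retweets, URLs, mentions, and punctuation, then scoring cleaned tweets against a category word list in the spirit of LIWC. The "certainty" word list below is invented for illustration; the actual LIWC dictionaries are proprietary, and this is my reading of the steps described, not the authors’ code.

```python
import re

def preprocess(tweet: str) -> str:
    """Strip URLs, @mentions, and punctuation from a tweet, mirroring
    the cleanup steps described; retweets are filtered out separately."""
    tweet = re.sub(r"https?://\S+", "", tweet)   # remove URLs
    tweet = re.sub(r"@\w+", "", tweet)           # remove username mentions
    tweet = re.sub(r"[^\w\s]", "", tweet)        # remove punctuation
    return tweet.lower().strip()

def category_score(text: str, lexicon: set) -> int:
    """Count words matching a category word list (LIWC-style)."""
    return sum(1 for word in text.split() if word in lexicon)

tweets = [
    "RT @someone: can't believe this!",                   # retweet: dropped
    "I am certain this is true https://t.co/x @friend!!",  # kept, cleaned
]
cleaned = [preprocess(t) for t in tweets if not t.startswith("RT ")]

# Hypothetical "certainty" word list, standing in for a LIWC category
certainty_words = {"certain", "always", "never", "definitely"}
scores = [category_score(t, certainty_words) for t in cleaned]
print(cleaned, scores)
```

In a real pipeline each account’s scores would then be aggregated over its three-month tweet window before the group-level statistics are run.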
Once the tweets were parsed through the linguistic measures for extremist speech favoring either left-wing or right-wing political ideologies, they needed to be analyzed using the Moral Foundations Theory word lists. As before, one of the authors vetted the words in the tweets using a Likert-style scale before comparing them to the Moral Foundations Theory list for signs of extremism. Further confounding variables not yet mentioned here included: tweet frequency of each account, number of mentions, activity rate, number of followers, length of time an account had been active, whether or not the account had a profile picture, and how often it retweeted others or was retweeted. The authors used Covariate Balancing Propensity Scores to account for these variables.
The authors used ANOVA (analysis of variance) to examine the correlation of political orientation and political extremity with measures of certainty, anxiety, and positive and negative emotions, as well as the five moral foundations of Moral Foundations Theory, across their groups. They then uniformly sampled 25 users at random from each group for further analysis with Tukey’s HSD (an “honestly significant difference” test used to discover which group means differ significantly from the others).
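The shape of that analysis can be sketched in a few lines: a one-way ANOVA asks whether any group means differ at all, and Tukey’s HSD then identifies which specific pairs differ. The scores below are fabricated toy numbers standing in for a per-account measure like a "purity" word score; this illustrates the statistical technique, not the study’s data.

```python
# Sketch of the group comparison described: one-way ANOVA across ideology
# groups, then Tukey's HSD on the pairwise differences. All numbers are
# fabricated for illustration.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
far_right = rng.normal(0.8, 0.1, 25)   # e.g., hypothetical "purity" scores
far_left  = rng.normal(0.4, 0.1, 25)
control   = rng.normal(0.3, 0.1, 25)

# One-way ANOVA: is there any difference among the group means?
f_stat, p_value = stats.f_oneway(far_right, far_left, control)
print(f"ANOVA F={f_stat:.2f}, p={p_value:.4g}")

# Tukey's HSD: which specific pairs of groups differ?
scores = np.concatenate([far_right, far_left, control])
groups = ["far_right"] * 25 + ["far_left"] * 25 + ["control"] * 25
print(pairwise_tukeyhsd(scores, groups))
```

With 25 sampled users per group, as in the study, the HSD table reports, for each pair of groups, the mean difference and whether it clears the honestly-significant-difference threshold.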
According to the results summary in Table 22 (Alizadeh et al., 2019, p. 24), right-wing extremists scored high on obedience to authority, ingroup loyalty, and ideas about “purity,” which supports my hypothesis that exposure to fear (in this case via extremist posts on Twitter) increases authoritarian tendencies. Extremists on both sides of the political spectrum scored high on negative emotion in the ANOVA analysis; however, those results could have been skewed by angry words used in bickering and complaining on social media that were not necessarily related to politics. Further research might test this in a secondary study to confirm that this bias is removed from the results. One surprising result, that right-wing extremists were happier, more certain in their views, and less anxious than left-leaning users, seemed to prove the old adage that “ignorance is bliss.” It would be valuable to test this further in a follow-on study measuring political savvy (not just ideology, but also knowledge of political matters and their impact) against happiness to see whether that old adage really holds. Relatedly, in the mortality salience study from 2016 (Sterling, Jost, & Shrout), the authors recommended continuing their research with a focus on mortality salience and system justification theory working together to influence political ideology, a connection they uncovered while attempting to prove a different hypothesis.
The authors of the Twitter study (Alizadeh et al., 2019) mention System Justification Theory, saying their findings lend credence to that theory, but they were measuring for Moral Foundations Theory, not System Justification Theory, and their results there were less concrete. In a later paragraph of the discussion, the authors seem to say their findings support Moral Foundations Theory; more precisely, they say their findings are in general agreement with the theory.
Can we conclude from these studies that fear leads to more right-wing and authoritarian political ideology and increased political aggression?
I don’t think we can conclude with certainty, using only these four studies, that fear and/or political language creating internalized fear will always lead to more right-wing and authoritarian political ideology, though one study, the 2017 mortality salience study (Cohen et al.), did conclude that in the specific case of the 2016 election, fear and threat pushed people to a more right-leaning stance in favor of Trump. I do think we can conclude that there is enough connection between data points to explore these ideas more deeply with different sample population criteria, refined hypotheses, and different methodology. I also recommend a study that includes all three theories from across these studies (Moral Foundations Theory, System Justification Theory, and Terror Management Theory), as they all seemed to interconnect around the ideas of fear management and conservative political leanings, though not conclusively. Another loose end is the second half of my hypothesis: that increased conservative rhetoric and ideology would lead to increased political aggression. Though there are certainly real-world examples of this that would be important to explore in a new study, particularly the rhetoric of hate surrounding President Trump’s political rallies and its potential correlation to real-world violence against out-groups, there was not enough in any of these four studies to reliably support that part of my hypothesis.
Alizadeh, M., Weber, I., Cioffi-Revilla, C., Fortunato, S., & Macy, M. (2019). Psychology and morality of political extremists: Evidence from Twitter language analysis of alt-right and Antifa. EPJ Data Science.
van Prooijen, J., & van Vugt, M. (2018). Conspiracy theories: Evolved functions and psychological mechanisms. Perspectives on Psychological Science, 13(6), 770-788.
Sterling, J., Jost, J. T., & Shrout, P. E. (2016). Mortality salience, system justification, and candidate evaluations in the 2012 U.S. Presidential election. PLOS ONE.
Cohen, F., Solomon, S., & Kaplin, D. (2017). You’re hired! Mortality salience increases Americans’ support for Donald Trump. Analyses of Social Issues and Public Policy, 17(1), 339-357.
Fitton, L., Poston, L., & Gruen, M. (2009). Twitter for Dummies (1st ed.). John Wiley & Sons.
Tucker, E., & Fox, B. (2020). FBI director says antifa is an ideology, not an organization. AP News. Retrieved 30 November 2020, from https://apnews.com/article/donald-trump-race-and-ethnicity-archive-bdd3b6078e9efadcfcd0be4b65f2362e