Discourse Strategies of Fake News in the Anti-vax Campaign*
Stefania Maci
DOI: https://dx.doi.org/10.7358/lcm-2019-001-maci
ABSTRACT
Controversial debates over vaccination have been occurring for more than a century. As the debate has moved onto social media, the issue has developed further. For anti-vaccine campaigners, Twitter and Facebook give them a voice and massively amplify their message. Yet, precisely because social media advertisements and news feeds work on the basis of algorithms that show people news similar to what they have read before, anti-vax supporters tend to read, and to always believe, the same type of news, be it fake or real. In other words, anti-vax supporters cannot discern real from fake news, as they do not realize that scientific fake news is the result of a decontextualization of medical sources. By drawing on Critical Discourse Analysis (CDA), corpus linguistics and socio-semiotic multimodality, this paper aims at analyzing the discursive dynamics and strategies of fake news related to the anti-vax campaign in order to unveil the cognitive, social and institutional constructs of misinformation.
Keywords: anti-vaccine movement; autism; discourse strategies; fake news; medical discourse; multimodality; popularization; social media; sociosemiotic analysis; vaccine.
1. INTRODUCTION
Vaccination was introduced into the medical community as early as the 19th century thanks to the work of Edward Jenner, the father of immunology. Information on vaccination immediately began to be disseminated in society, but with the introduction of the Vaccination Acts (1849-1898) in the UK resistance to vaccination arose, as the legislation was regarded as a violation of civil liberty. The law was therefore amended, “allowing parents who did not believe vaccination was efficacious or safe to obtain a certificate of exemption” and introducing the concept of the “conscientious objector” (Wolfe and Sharp 2002, 431). Along similar lines, in the 19th century in the US, despite several serious smallpox outbreaks, any attempt to enforce vaccination laws was vigorously opposed by anti-vaccination movements.
Anti-vaccination movements were revived in the 1990s, when The Lancet published a study by Wakefield et al. (1998). The authors claimed to have demonstrated a correlation between the measles, mumps and rubella (MMR) vaccine and autism. The study was, however, strongly biased and its findings were flawed, as demonstrated by Taylor et al. (1999, 2029) and Miller et al. (1999, 950), who showed that Wakefield et al. (1998, 637) had selected the data they needed to suggest a vaccine-autism correlation, despite Wakefield’s (1999, 949) counterclaims rejecting other scholars’ accusations. Wakefield et al.’s (1998) study was not only shown to be strongly distorted; a report by the journalist Brian Deer also proved that the study had been funded by litigants opposing vaccine manufacturers (Hussain et al. 2018). This led The Lancet (Editors 2010) to retract Wakefield et al.’s (1998) article in 2010. Although further and more recent medical studies have confirmed the absence of any correlation between the MMR vaccine and autism and have instead indicated that autism seems to have a genetic predisposition (cf., for instance, Garcia 2019), anti-vaccine movements have grown ever stronger, to such an extent that there has been a worldwide drop in vaccination. Apparently, laypeople’s support for Wakefield et al.’s (1998) study and their resistance to vaccination are to be sought in the access to medical information granted by the Net, which has dramatically changed the dynamics of doctor-patient communication (Hussain et al. 2018). Indeed, while in the past the repository of medical information lay in the hands of medical specialists only, the advent of information technology and easy access to any type of medical information have seen “the establishment of shared decision-making between patients and healthcare physicians” (Hussain et al. 2018, e2919).
While sharing this responsibility may help patients to cope better with their health problems (Hunt and Koteyko 2005, 446), in the case of vaccination the process of medical popularization carried out by anti-vaccine activists has facilitated the dissemination of disinformation.
Many studies have analyzed the dissemination of knowledge (cf., for instance, Calsamiglia and Ferrero 2003; Calsamiglia and van Dijk 2004; Garzone 2006; Marko 2015, 2017; Marko and Wimmer 2018), health literacy and healthcare (as in Muñoz-Miquel 2012; Filippone et al. 2013; Briones 2015), also paying attention to the role digital media play in disseminating medical knowledge (Vicentini 2012; Grego and Vicentini 2015; Luzón 2015; Turnbull 2015a, 2015b; Amann and Rubinelli 2017; Potts and Semino 2017; Semino 2017). Furthermore, many scholars have examined different aspects of popularization in the spread of medical knowledge (Bondi 2014; Garzone 2014; Gotti 2014; Maci 2014a, 2014b). An exploration of how people can use online channels and social media to communicate has recently been carried out by scholars who have developed Digital Discourse Studies (cf., for instance, Myers 2010; Thurlow and Mroczek 2011; Zappavigna 2012; Gee 2015; Herring and Androutsopoulos 2015) and Social Media Critical Discourse Studies (Khosravinik 2018). Myers (2015), in particular, focuses on the use of Twitter in medical communication to demonstrate how professionals construct their persona in a 140-character tweet by combining a variety of modes (personal, professional or institutional) according to the interactants. However, to the best of our knowledge, no empirical research has been conducted on the use of Twitter as a channel for spreading (mis)information about or against a specific medical topic – vaccination – by laypeople. Thus, this paper will investigate how anti-vaccine activists use Twitter to spread fake news, in order to detect their ideological dynamics and discourse strategies in relation to the anti-vax campaign on Twitter, with the intention of revealing the cognitive, social and institutional constructs of misinformation.
For this purpose, an investigation will be carried out on a small corpus of 16,768 tweets (75,960 running words) collected over ten days in October 2018, shortly after the deaths of several children caused by an upsurge of measles in New York City.
In order to carry out this investigation, the paper will draw on CDA accompanied by a socio-semiotic reading of the collected tweets, developed in the following sections. Section two provides the background and an overview of how misinformation is constructed and spread on online platforms. Section three presents the methodological approach, followed, in section four, by an analysis and interpretation of the data collected from Twitter. Section five offers a conclusion. The results suggest that #anti-vax discourse appears to ground itself on moral and scientific bases at the cognitive, social and institutional levels, thus granting itself authority.
2. THE PRODUCTION OF MISINFORMATION AND THE DIFFUSION OF FAKE NEWS
2.1. Specialized (mis)information
Nowadays, audiences demand ever more specialized information from the media industry, to such an extent that demand exceeds what the media can supply (AGCOM 2018). Newspapers and magazines therefore try to meet this demand by publishing specialized information written by journalists who lack the necessary specialized competences or specific training (ibid., 21-22), which makes it easy for false information to spread. Within this context, it is also easier to scatter misinformation, because there are not enough specialists who can argue against what is wrong (AGCOM 2018). This is particularly problematic, as anti-vaxxers easily persuade the audience by instilling fear and by using blaming language and other rhetorical devices for their manipulative purposes.
2.2. The dissemination of fake news on online platforms and the ‘informed’ citizen
As noted above, misinformation tends to spread where the information system fails, in particular where professionals lack sufficient specialized training, especially in online contexts where updates must be made immediately – which compromises the adequacy of the information required. As a consequence, citizens lose trust in the information system (AGCOM 2018, 75). When people lack information, they search for or accept information that is in line with their beliefs, and defend it. What is more, as claimed by AGCOM (2018, 76), people’s tendency to distrust the information system leads them to oppose objective and proven facts supported by scientific and official sources and to embrace misconceptions. In this context, people turn to online platforms, particularly social media, which they consider the main and objective sources of news, and so misinformation seems to have found a new channel (Tandoc, Lim, and Ling 2017, 138). All this has contributed to creating the ‘informed’ citizen. Unfortunately, these platforms disseminate misinformation and fake news on polarizing topics, which become objects of viral propagation through social media (AGCOM 2019, 6). Not only is such fake news viral, it is also delivered by algorithms that steer people toward news similar to what they have read before. This creates a vicious circle: anti-vax supporters tend always to read and believe the same type of (fake) news offered by such algorithms, which confirm (in a self-fulfilling way) what they already believe and render them unable to discern real from fake news (Balmas 2014; Gili 2018). This leads to risky consequences, i.e. exposure to unchecked and unreliable information in ready-made narratives.
2.3. The production of misinformation
The distinctive elements characterizing misinformation include (a) the subjects involved in the creation, production and distribution of content; and (b) the content itself.
Misinformation that is ‘distributed’ online can be characterized by six main aspects...