Disinformation and Manipulation in Digital Media
eBook - ePub

Disinformation and Manipulation in Digital Media

Information Pathologies

Eileen Culloty, Jane Suiter

  1. 112 pages
  2. English
  3. ePUB (available on the app)
  4. Available on iOS and Android

About This Book

Drawing on research from multiple disciplines and international case studies, this book provides a comprehensive and up-to-date understanding of online disinformation and its potential countermeasures.

Disinformation and Manipulation in Digital Media presents a model of the disinformation process which incorporates four cross-cutting dimensions or themes: bad actors, platforms, audiences, and countermeasures. The dynamics of each dimension are analysed alongside a diverse range of international case studies drawn from different information domains including politics, health, and society. In elucidating the interrelationship between the four dimensions of online disinformation and their manifestation in different international contexts, the book demonstrates that online disinformation is a complex problem with multiple, overlapping causes and no easy solutions. The book's conclusion contextualises the problem of disinformation within broader social and political trends and discusses the relevance of radical innovations in democratic participation to counteract the post-truth environment.

This up-to-date and thorough analysis of the disinformation landscape will be of interest to students and scholars in the fields of journalism, communications, politics, and policy as well as policymakers, technologists, and media practitioners.

This research received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 825227.


Information

Publisher
Routledge
Year
2021
ISBN
9781000356670
Edition
1
Category
Journalism

1 Introduction

Information pathologies

Across the world, all spheres of life have become subject to false information, conspiracy theories, and propaganda. These information pathologies are implicated in the global resurgence of vaccine-preventable diseases, the subversion of national politics, and the amplification of social divisions. In 2018, the United Nations Human Rights Council cited Facebook as a ‘determining’ factor in the ethnic cleansing of Myanmar’s Rohingya population. A year later, the Oxford Internet Institute found evidence of efforts to manipulate public opinion in 70 countries (Bradshaw and Howard 2019). More recently, the Covid-19 pandemic brought an onslaught of conflicting reports, hoaxes, and conspiracy theories. The World Health Organisation (WHO) called it an ‘infodemic’: an overabundance of accurate and inaccurate claims that left many people confused about what to believe. In this context, it is unsurprising that a sense of crisis has become entrenched among policymakers, scholars, technologists, and others (see Farkas and Schou 2019).
While the need to develop countermeasures for disinformation is urgent, it is also challenging on many fronts. First, there are significant conceptual difficulties surrounding definitions of the problem. Second, there are practical impediments to developing fair and consistent moderation principles for enormous volumes of online content. Third, any proposed restriction on freedom of expression is necessarily accompanied by legal, ethical, and democratic reservations. Fourth, communication technologies are constantly evolving, which makes it difficult to design countermeasures that will be effective for the future. Fifth, and perhaps most crucially, there are major gaps in our understanding of the problem owing to the nascency of the research area and the lack of access to the platforms’ data. As a result, there is broad agreement that something needs to be done, but there is far less clarity about what that should be.
Undoubtedly, our current digital age is predisposed to a ‘shock of the new’ whereby digital media phenomena can seem more radical than they are because we untether them from their historical precedents. Taking a long view of human history, there is nothing new about disinformation. To take one example, ‘The Protocols of the Elders of Zion’ emerged from Russia in 1903 and, in the guise of a leaked document, appeared to reveal a Jewish plot for global domination. It gained international traction through the endorsement of major public figures, including the US industrialist Henry Ford, and through news media coverage and the distribution of pamphlets. As with disinformation generally, it is difficult to delineate the direct effects of the document, but two important lessons can be drawn from this case: successful disinformation amplifies existing prejudices and relies on structures of communication power and influence.
There is much to be gained by adopting a historical understanding of disinformation (see Cortada and Aspray 2019). Nevertheless, while cognisant of historical continuities, we argue that online disinformation represents a fundamental change. The affordances of digital platforms – with their design features, business models, and content policies – distinguish contemporary disinformation from its historical precursors. Digital media have unprecedented consequences in terms of the scale and speed at which disinformation is dispersed as well as the range of content types and platforms in which it is manifest. While the motivations that lie behind the production and consumption of disinformation may not have changed substantially over time, the rapid evolution of digital platforms has created new opportunities for bad actors while leaving regulators struggling to keep pace.
All this is predicated on the wider ‘platformization’ of economic, political, and social life (Plantin and Punathambekar 2019). Entire sectors have become institutionally dependent on the major online platforms. The news media is an important case in point. By dominating how people access information, the platforms became integral to news publishers’ distribution strategies (Cornia and Sehl 2018). However, the relationship was fundamentally asymmetrical. News publishers were subject to unpredictable changes in platform policies, such as changes to recommendation algorithms, and were largely unable to monetise the content they created. Meanwhile the platforms, Google and Facebook in particular, came to dominate online advertising, largely thanks to their ability to collect data from users who enjoyed free access to content. As these conditions contributed to a dramatic decline in the news media’s advertising revenue, finding ways to support high-quality journalism is a major consideration within the broader effort to counteract online disinformation.
Of course, the challenges faced by journalism are just one contributing factor to the proliferation of online disinformation. The key aim of this book is to provide an overarching context for understanding this multifaceted and evolving problem. In what follows, we present our model of the online disinformation process and its potential mitigation.

The components of online disinformation

Reduced to its basic constituents, online disinformation, when it is successful, is a process that involves different actors and consecutive stages (see Figure 1.1). We model online disinformation in terms of the bad actors who create and push manipulative content, the platforms that enable the distribution and promotion of this content, and the audiences who give it meaning and impact through their willingness to engage with it. Of course, any given scenario of online disinformation is more complex than this basic model suggests and we elucidate this complexity in the succeeding chapters by interrogating each component of the process. Nevertheless, we suggest the value of this model is that it allows us to simultaneously map and assess various countermeasures as efforts to intervene in different stages of the online disinformation process. In so doing, we emphasise the need for a multi-pronged approach and the concluding chapter takes this further to argue that countermeasures are likely to be ineffective unless they are accompanied by broader efforts to address deep-seated issues relating to public trust and democratic legitimacy.
Figure 1.1: The online disinformation process
The first stage in the process involves the so-called bad actors who create and push online disinformation. Bad actors may be defined collectively for their common intention to deceive or manipulate the public, but it is important to recognise that the nature of bad actors is multifarious. To date, much of the scholarly and journalistic attention has focused on state-backed bad actors in the political domain; primarily on Russia’s Internet Research Agency. As outlined in Chapter 2, we are also interested in the broader range of bad actors who are intent on misinforming the public or subverting public debate. A nuanced understanding of bad actors is complicated by the fact that much of what we know is derived from leaks and investigative journalism. Moreover, robust investigations – whether academic, journalistic, or parliamentary – have been hampered by a lack of useful data from the platforms (Boffey 2019). Nevertheless, we suggest that a broad understanding of bad actors may be derived by assessing: who they are or represent (e.g. states, corporations, social movements); their primary motivations (e.g. political, financial, ideological); and, of course, their tactics (e.g. creating deceptive content, influencing media agendas). The answers to these questions are typically inferred from the digital traces left online by bad actors; that is, by analysing disinformation content and how it has propagated through online networks. This brings us to the second component of our model – the platforms – as the strategies and tactics of bad actors take shape in line with the affordances of digital platforms.
The infrastructures of the platforms facilitate disinformation and incentivise low-quality content in many ways. As noted above, platform advertising models have had a detrimental impact on professional news. They also allow bad actors to monetise their disinformation. In addition, recommendation algorithms appear to have ‘filter bubble effects’ that amplify existing biases and potentially push people towards more extreme positions (Hussein et al. 2020). Recommendation algorithms aim to provide users with relevant content by grouping them according to their shared interests. This approach is relatively benign when those interests centre on sports and hobbies, but the implications are severe when those interests are defined by conspiracy theories and hate. More generally, the platforms’ engagement metrics – likes, shares, and followers – incentivise attention-grabbing content including clickbait journalism and hoaxes. These metrics can be manipulated by bad actors who piggyback on trending content and use false accounts and automated bots to inflate the popularity of content (Shao et al. 2018).
Nevertheless, receptive audiences are arguably the most important component of the process. After all, disinformation only becomes a problem when it finds a receptive audience that is willing, for whatever reasons, to believe, endorse, or share it. Understanding what makes audiences receptive to disinformation, and in what circumstances, is therefore crucial. Many researchers are trying to answer this question, and what they find is a complex overlap of factors relating to biased reasoning and the triggering of negative emotions such as fear and anger. These tendencies are amplified on social media, where our attention is perpetually distracted. Moreover, quite apart from any bias on the part of the individual, repeated exposure to disinformation can increase perceptions of credibility over time (De keersmaecker et al. 2020; Fazio et al. 2015). Thus, reducing exposure to disinformation and providing support to help audiences evaluate content have been at the forefront of efforts to mitigate disinformation.
There are ongoing debates about how to counteract online disinformation without undermining freedom of expression. Since 2016, a wide range of technological, audience-focused, and legal and regulatory interventions have been proposed (see Funke and Flamini 2020). Technological interventions aim to advance the ability to detect and monitor disinformation. For their part, the platforms have variously taken action to reduce the visibility of certain content, but face calls for more radical action to improve transparency and accountability. Within the media and educational sectors, there has been a rapid growth in verification and fact-checking services and a renewed focus on media and information literacy. Legal and regulatory interventions are perhaps the most controversial, ranging from new laws prohibiting the spread of false information to proposals for the regulation of the platforms. Authoritarian states, and democratic states that are ‘backsliding’ into authoritarianism, are both exploiting concerns about disinformation to silence critics and increase their control over the media. For example, Hungary recently introduced emergency Covid-19 measures that permit prison terms for publicising disinformation (Walker 2020). These and similar bills are widely criticised for their potentially chilling impact on freedom of expression, and such cases accentuate the need for international leadership to protect fundamental rights and freedoms.

Conceptual approach

This book adopts an international and multi-disciplinary perspective on online disinformation and its mitigation. As this research area grows, important empirical insights are emerging from multiple disciplines including communication studies, computer science, cognitive science, information psychology, and policy studies. At the same time, technologists and investigative journalists are deepening our understanding of the problem, and a range of actors are developing new initiatives and countermeasures. While grounded primarily in communication studies, we draw on developments in all of these areas to provide a comprehensive and up-to-date understanding of the disinformation environment.
Throughout the book, we utilise a selection of international case studies that represent different information domains including politics, health, and social relations. While there are valuable studies of disinformation within specific countries (primarily the US) and thematic areas (primarily politics), we present a wider perspective in order to elucidate the dynamics of the disinformation process. Context is vital. The architectures, interfaces, moderation mechanisms, and participatory behaviours of social media platforms are neither static nor universal (see Karpf 2019; Munger 2019). Rather, they are temporally situated and patterns of audience engagement are relative to their media, political, and social contexts. It follows that the dynamics of online disinformation are highly variable and understanding this variability is essential for assessing the threats and developing effective countermeasures. In elucidating the interrelationship between the four key components of the online disinformation process and their manifestation in different international contexts, we emphasise that online disinformation is a complex problem with multiple, overlapping causes and no easy solutions.
Throughout the book, we use the term online disinformation rather than the more popular term ‘fake news’. The latter is a specific subset of disinformation, and the term is already polluted through its invocation as a term of abuse for the news media. Nevertheless, we note that current definitions of the problem are broad, encompassing disinformation, ‘fake news’, manipulation, propaganda, fabrication, and satire (see Tandoc et al. 2018). In part, this definitional confusion is a consequence of the variety of forms and genres in which disinformation is manifest. It may appear as news articles, memes, or tweets, and its substantive content can range from the complete fabrication of facts to their distortion or decontextualisation (Wardle and Derakhshan 2017). We take the view that it is not necessarily helpful to think in strict terms of true and false or fake and real. Disinformation is often multi-layered, containing a mix of verified, dubious, and false statements. Moreover, in many cases, the distinction between disinformation and ideological opinion may be difficult to define because ‘political truth is never neutral, objective or absolute’ (Coleman 2018: 157). Ultimately, we suggest the threat of disinformation has less to do with individual claims than the cumulative assault on trust and evidence-based deliberation.

Book outline

Following this introduction, successive chapters focus on each element of our disinformation process model: bad actors, platforms, audiences, and countermeasures. The second chapter examines the bad actors who produce and distribute disinformation. With a specific focus on disinformation about politics, climate change, and immigration, we examine different types of actors, their motivations, and the tactics through which they seek influence. Ultimately, we suggest that fo...

Table of Contents

  1. Cover
  2. Half Title
  3. Series Information
  4. Title Page
  5. Copyright Page
  6. Contents
  7. 1 Introduction: Information pathologies
  8. 2 Bad actors
  9. 3 Platforms
  10. 4 Audiences
  11. 5 Countermeasures
  12. 6 Conclusion: Post-truth communication
  13. Index