Chapter 1
Political micro-targeting in an era of big data analytics
An overview of the regulatory issue
Janice Richardson, Normann Witzleb and Moira Paterson
In March 2018, The New York Times and The Observer revealed that the data analytics company Cambridge Analytica (CA) had obtained data from more than 50 million Facebook profiles in its quest to develop techniques for predicting and shaping the behaviour of individual American voters.1 The data was harvested through a Facebook app containing a personality quiz, which was developed by a contracted researcher, Aleksandr Kogan. Around 320,000 users who downloaded the quiz not only provided their own personal information but also granted the app permission to access the personal information of many millions of other users who were their friends. Neither the individuals who completed the quiz nor their Facebook friends were aware that their information was being harvested, and commercially and possibly politically exploited.
These events, which Facebook acknowledged to constitute a 'major breach of trust',2 have revealed significant weaknesses in the way the social media giant manages and protects the vast troves of personal information it holds on its users. The incident also dramatically increased the concerns that had first arisen in relation to the role of data analytics in the Trump election and Brexit referendum results.3 Prior to its demise, Cambridge Analytica boasted that it had collected up to 4,000 personal data points on people, including 240 million Americans.4 It claimed to have developed models for sophisticated 'psychographic' profiling that allowed political parties to target potential voters with specifically tailored advertisements. A related issue that arose in this context was the use of 'fake news' and, specifically, the use of so-called 'bots'5 to flood social networks with false information in order to influence election results.
1 Carole Cadwalladr and Emma Graham-Harrison, 'Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach' The Guardian (17 March 2018) www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election, accessed 19 February 2019; Matthew Rosenberg, Nicholas Confessore and Carole Cadwalladr, 'How Trump Consultants Exploited the Facebook Data of Millions' The New York Times (17 March 2018) www.nytimes.com/2018/03/17/us/politics/cambridge-analytica-trump-campaign.html, accessed 19 February 2019. Facebook has since revised this figure upwards to up to 87 million users: Cecilia Kang and Sheera Frenkel, 'Facebook Says Cambridge Analytica Harvested Data of Up to 87 Million Users' The New York Times (4 April 2018) www.nytimes.com/2018/04/04/technology/mark-zuckerberg-testify-congress.html, accessed 19 February 2019.
2 David Ingram, 'Zuckerberg apologizes for Facebook mistakes with user data, vows curbs' Reuters (21 March 2018) www.reuters.com/article/us-facebook-cambridge-analytica/zuckerberg-apologizes-for-facebook-mistakes-with-user-data-vows-curbs-idUSKBN1GX0OG, accessed 19 February 2019.
3 See e.g. Jamie Doward and Alice Gibbs, 'Did Cambridge Analytica influence the Brexit vote and the US election?' The Guardian (4 March 2017) www.theguardian.com/politics/2017/mar/04/nigel-oakes-cambridge-analytica-what-role-brexit-trump, accessed 19 February 2019.
4 David Carroll, 'Cambridge Analytica Is Dead, Long Live Our Data: Were data crimes perpetrated against U.S. voters? We are about to know a lot more' Boston Review (24 May 2018) http://bostonreview.net/law-justice-david-carroll-cambridge-analytica, accessed 19 February 2019.
5 Otherwise known as internet robots, i.e. software applications that run automated tasks, including, in the case of chat bots/social media bots, simulations of human activity.
The topic of disinformation and the Cambridge Analytica scandal were the subject of an 18-month-long, ground-breaking enquiry by the United Kingdom House of Commons Digital, Culture, Media and Sport (DCMS) Committee. The DCMS Committee relied upon voluminous evidence from witnesses, submissions and internal documents, and collaborated with other domestic and international agencies and organisations. Its Final Report6 found that Facebook intentionally and knowingly violated UK data privacy and competition laws, and recommended greater regulatory control of social media companies through a compulsory Code of Ethics to be enforced by an independent regulator, reforms to electoral communications laws and improved transparency.
The Cambridge Analytica scandal also prompted the UK Information Commissioner's Office (ICO) to launch an investigation into whether Cambridge Analytica or Facebook had committed any breaches of privacy legislation. As part of its investigation, it examined the practices of 30 organisations in the context of political campaigning with a view to developing recommendations on enhanced disclosure rules for political advertising.7 The ICO published two reports. Its first report8 contains a detailed account of its findings concerning how social media platforms were used for micro-targeting by UK political parties and a set of policy recommendations in respect of the transparent and lawful use of data analytics in future political campaigns. The second, a report to the UK Parliament that was also considered by the DCMS Committee,9 summarised the results of the ICO's investigations and the regulatory enforcement actions that it has taken in response to its findings. As part of its enforcement action, the ICO fined Facebook £500,00010 for failing to sufficiently protect the privacy of its users before, during and after the unlawful processing of their data. In the US, Facebook agreed to pay a record US$5 billion fine and to upgrade its privacy procedures and protections, in order to settle a long-running privacy investigation by the Federal Trade Commission and resolve its involvement in the Cambridge Analytica data scandal.11
6 UK Parliament, Digital, Culture, Media and Sport Committee, Disinformation and 'fake news': Final Report (HC 2017–19, 1791). See also UK Parliament, Digital, Culture, Media and Sport Committee, Disinformation and 'fake news': Interim Report (HC 2017–19, 363).
7 UK Information Commissioner's Office, 'Investigation into the use of data analytics in political campaigns: A report to Parliament' (6 November 2018) https://ico.org.uk/media/action-weve-taken/2260271/investigation-into-the-use-of-data-analytics-in-political-campaigns-final-20181105.pdf, accessed 19 February 2019, 5–6.
8 Information Commissioner's Office, 'Democracy disrupted? Personal information and political influence' (11 July 2018) https://ico.org.uk/media/2259369/democracy-disrupted-110718.pdf, accessed 19 February 2019.
9 Information Commissioner's Office (n 7).
10 This was the maximum allowable under the former Data Protection Act 1998 (UK).
11 David McLaughlin and Daniel Stoller, 'Facebook $5 Billion U.S. Privacy Settlement Approved by FTC' Bloomberg (13 July 2019) www.bloomberg.com/news/articles/2019-07-12/ftc-approves-facebook-privacy-settlement-worth-about-5-billion, accessed 29 July 2019.
Taken together, these developments shine a stark light on emerging data-driven practices that present a major shift in the operation of the democratic process. Big data analytics represents a new frontier in the way in which information is processed and used to affect individuals. While the use of elector databases by political parties is not a new development,12 it is important to understand the transformative effect of big data analytics and artificial intelligence. What is significant about the new 'big data' phenomenon is not just the size, variety and complexity of the datasets available for analysis but also that new tools harness the power of artificial intelligence to find 'small patterns'13 and provide new insights concerning data subjects. While much recent research has focused on the potentially discriminatory effects of decisions based on big data analytics14 and the issue of consumer manipulation,15 an aspect that is now emerging as at least equally important is the extent to which big data analytics makes voters more vulnerable to illegitimate manipulation, thereby undermining the democratic process.
There have recently been major advances in assessing psychological traits derived from the digital footprints of large groups of individuals, and putting them to use for mass persuasion.16 The ubiquity of social media is central to many of these practices. The digital footprints of social media users are utilised not only as a data source for their psychographic assessment but also to influence the behaviours of users. Given the increased public reliance on social media as a source of ideas and information, these platforms are increasingly used for behavioural targeting with messages that are devised to resonate with populations that share distinct personal characteristics or persuasions. With growing awareness of the opportunities offered by social media, political campaigns increasingly make use of them to combine data-driven voter research with personalised political advertising, a practice known as political micro-targeting. Through micro-targeting, a political party can identify the individual voters whom it is most likely to convince and match its message to the specific interests and vulnerabilities of these voters. Micro-targeting activities may be accompanied by other practices, including the use of bots to spread ideas and information, including fake information.
12 See e.g. the Resolution on the Use of Personal Data for Political Communication (adopted at the 27th International Conference on Privacy and Personal Data Protection, Montreux, 14â16 September 2005).
13 Luciano Floridi, 'The Search for Small Patterns in Big Data' (2012) 59 The Philosophers' Magazine 17–18.
14 See e.g. Jenifer Winter, 'Algorithmic Discrimination: Big Data Analytics and the Future of the Internet' in Jenifer Winter and Ryota Ono (eds), The Future Internet: Alternative Visions (Springer International Publishing 2015) 125.
15 See e.g. Max N Helveston, 'Consumer Protection in the Age of Big Data' (2016) 93(4) Washington University Law Review 859; Nir Kshetri, 'Big data's impact on privacy, security and consumer welfare' (2014) 38(11) Telecommunications Policy 1134.
16 Sandra C Matz et al., 'Psychological Targeting as an Effective Approach to Digital Mass Persuasion' (2017) 114(48) Proceedings of the National Academy of Sciences 12714–12719.
These developments have increased the efficiency of political campaigning but raise a number of issues that go to the heart of democratic systems of government: To what extent do micro-targeting practices, in particular when they are not made transparent, involve unacceptable manipulation? Does the crafting of personalised messages exacerbate the issue of 'filter bubbles' and does this undermine some of the inherently collective processes underpinning democratic governance?
The profiling practices that underlie these campaigns also raise profound privacy issues. To what extent is it acceptable for personal information disclosed for other purposes to be repurposed and used for political gain? Are our regulatory systems sufficiently robust to address this issue? Is there some need for broader group privacy protection, given that much of modern data analytics relies on classification and the analysis of defined groups for the purpose of making predictions about them or seeking to influence their behaviour?
The vast accumulations of data on voters and politicians in the computer networks of political parties have also made them increasingly attractive targets for malicious hacking. Over the last few years, there have been reports from a number of countries that their parliamentary IT infrastructure, or the computer systems of political parties, had been attacked and compromised.17 While it is notoriously difficult to confirm the origin of such attacks, there are many indications that sophisticated state actors have been involved.18 These attacks have led to increased attempts to secure these networks against cyber threats. The connection between privacy and cybersafety lies in the requirement under data protection laws to keep personal data secure.
17 See e.g. Rohan Pearce, 'Government says "state actor" hacked Australian political parties' networks' Computer World (18 February 2019) www.computerworld.com.au/article/657853/government-says-state-actor-hacked-australian-political-parties-networks, accessed 21 July 2019; Danny Palmer, 'Cyber-espionage warning: Russian hacking groups step up attacks ahead of European elections' ZDNet (21 March 2019) www.zdnet.com/article/cyber-espionage-warning-russian-hacking-groups-step-up-attacks-ahead-of-european-elections, accessed 21 July 2019.
18 See e.g. Brett Worthington, 'Scott Morrison reveals foreign government hackers targeted Liberal, Labor and National parties in attack on Parliament's servers' ABC News (18 February 2019) www.abc.net.au/news/2019-02-18/prime-minister-scott-morrison-cyber-attack-hackers/10821170, accessed 21 July 2019.
Many of the arising issues are complex and interrelated, and it is not possible to deal with them all in a single volume. In particular, this book does not consider in detail the broader challenges posed by the new big data environment to data protection regimes that focus on data relating to identifiable individuals. The new big data environment challenges the focus on individual privacy because it facilitates the targeting of individuals on the basis of analyses of group data gleaned from studies of the groups to which an individual belongs.19 It also challenges the assumption that there is a sharp line between identifiable and non-identifiable information, given that modern data analytics increasingly facilitates the re-identification of data that has been ostensibly de-identified.20
The focus of this book is the exploration of issues raised by the use of ...