1 The 'classic' model of self-regulation on the Internet
You claim there are problems among us that you need to solve. You use this claim as an excuse to invade our precincts. Many of these problems don't exist. Where there are real conflicts, where there are wrongs, we will identify them and address them by our means. We are forming our own Social Contract. This governance will arise according to the conditions of our world, not yours. [. . .]
We are creating a world where anyone, anywhere may express his or her beliefs, no matter how singular, without fear of being coerced into silence or conformity.
Our identities have no bodies, so, unlike you, we cannot obtain order by physical coercion. We believe that from ethics, enlightened self-interest, and the commonweal, our governance will emerge.
John Perry Barlow, A Declaration of the Independence of Cyberspace, 19961
In this chapter, we outline the basis for Internet self-regulation, explaining the origins and development of US Internet content regulation, the industry structures and content liabilities that have been enshrined in law, and the manner in which private enforcement of liability developed in the governmental vacuum of the late 1990s. We then analyse the European Union's response to the prevailing Internet content self-regulatory paradigm of the United States, and the development of what became known as co-regulation: self-regulation with a legislative backstop or 'lurking threat'.
Internet design and libertarian non-regulation
Some pioneers, such as John Perry Barlow, claimed that the Internet is a global phenomenon beyond nation-state control, in which unregulated any-to-any communication is possible. Ten years on, when the majority of EU citizens have accessed the Internet and the boundaries between 'virtual' and real life are blurring, comparing 'cyberspace' to outer space negates the democratic importance of preventing harm occurring on the Internet. The preface to the 'Internet Commons Treaty' of 2004 recognises this, even while continuing to proclaim the non-governmental libertarian ideal:2 'The Internet seems to have lost the special status that led most governments [sic] to "hands off" policies during the 90s.'
The Internet was largely a US government creation, ARPANET, with an architecture originally intended to survive a thermonuclear strike. Developed by university science departments, and later in European universities, it became a cultural artefact and is now a key driver of economic integration across national boundaries. The British inventor of the World Wide Web (WWW), Tim Berners-Lee, has explained that the openness of the WWW reflects:
a vision encompassing the decentralized, organic growth of ideas, technology, and society. The vision I have for the Web is about anything being potentially connected with anything. It is a vision that provides us with new freedom, and allows us to grow faster than we ever could when we were fettered by the hierarchical classification systems into which we bound ourselves.3
Lawrence Lessig explains what that architectural principle4 means in practice:
This end-to-end design frees innovation from the past. It's an architecture that makes it hard for a legacy business to control how the market will evolve. You could call it distributed creativity, but that would make it sound as if the network was producing the creativity. It's the other way around. End-to-end makes it possible to tap into the creativity that is already distributed everywhere.5
The 'end-to-end' principle is hard-wired into the Internet's architecture by the technical standards and protocols that govern the engineering of the Internet. In this narrow engineering sense, much of the Internet is self-regulated, for instance by:
- W3C (World Wide Web Consortium), a US–EU consortium of private and public universities and researchers, including corporate researchers;
- the similarly constituted IETF (Internet Engineering Task Force);
- ICANN (Internet Corporation for Assigned Names and Numbers).
Gould demonstrates that the consensual model of standard setting which sufficed in the development of the Internet, a legacy model which is still effective in the more technical policy arena, is increasingly placed under strain by the advanced consumer adoption of the Internet.6 The legacy of such technical self-regulation is that minimal direct government interference has been seen.7 The self-regulatory bodies are international in character and were begun as non-commercial self-regulatory organisations.8 The end-to-end principle dictates that any content control be embedded in code by the content creator, and filtered by browser software installed and controlled by the end-user.
Running throughout this book are some fundamental questions: does technical architecture prevent the realisation of public policy goals in content regulation? Can 'public' control be re-exerted over media content, not just on the Internet but also in media as diverse as video games, feature films, mobile phone content, print and traditional broadcasting? In the following section we consider the events that led the US Internet community to its technologically led libertarian position.
Internet content, codes of conduct and technical self-regulation
Concerns regarding inappropriate and potentially harmful content on the Internet are as old as the public Internet itself.9 Once the general public was first allowed to use the Internet, in 1992, rapid consumer adoption led to a need for rule-making. This began to surface in public policy debate around 1994, when Vint Cerf10 classified three types of Internet regulation: technical constraints, legal constraints and moral constraints. He stated that, 'In reality, all of these tools are commonly applied to channel behavioural choices.' He explained that it was the conditions of use of public service Internet service providers – universities and research institutes – including codes of conduct (CoCs), that regulated online behaviour from the Internet's invention. After the opening of the Internet, CoCs, inherited from the public service past, continued to be the default approach. Cerf emphasised the need to set up incentive structures for self-regulation: 'guidelines for conduct have to be constructed and motivated in part on the basis of self-interest'.
In these early years, a pattern of negotiation between self-regulatory bodies for the Internet and government began to emerge. Price and Verhulst assert the limits of both government and private action in this sphere, and the interdependence of the two – there is little purity in self-regulation, as there is usually a lurking government threat to intervene where market actors prove unable to agree.11
An early threat to libertarianism emerged in 1994, with proposed US child protection legislation against illegal and harmful material on the Internet, the Communications Decency Act.12 In 1996, this particular content law was struck down under the strict standards of the US Constitution's First Amendment in the landmark American Civil Liberties Union v. Reno case. In a 1995 response, the World Wide Web Consortium had begun to develop the Platform for Internet Content Selection (PICS),13 the basis of filtering that was immediately incorporated into browser software and used to classify web pages by the major Internet service provider (ISP) portals in the United States – and by default worldwide. The idea was simple: to engineer websites and user software to enable control of content at the device – the end of the network – rather than by the ISP or another intermediary.
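In practice, a PICS label was embedded by the content creator in a page's HTML head, where browser-level filters could read it. A minimal illustrative fragment follows; the rating service URL and category values shown are schematic examples in the style of the RSACi scheme, not drawn from the source:

```html
<!-- Illustrative only: a PICS-1.1 label declared by the page author.
     The rating service URL and the category letters (n = nudity,
     s = sex, v = violence, l = language) follow the RSACi scheme;
     the values here are schematic. -->
<meta http-equiv="PICS-Label"
      content='(PICS-1.1 "http://www.rsac.org/ratingsv01.html"
                l r (n 0 s 0 v 0 l 0))'>
```

A browser configured with parental controls could then compare these self-declared ratings against user-set thresholds and block the page at the device, consistent with the end-to-end principle described above.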
In addition to the administrative sanctions, generally informal, triggered by the original CoCs of Internet users described by Vint Cerf, the Internet also developed its own form of lynch-mob to enforce norms of behaviour. Sanctions included the spread of viruses which incapacitate recipients' PCs via e-mail, hacking into government or corporate sites, or reputation damage (consider the eBay auction site's user ratings as a benign example). Clearly these lack legitimacy. Formal democratic decision-making for the global issues which Internet governance raises is extremely immature, as Froomkin demonstrates in his assessment of ICANN processes.14 Citizen demands for protection and security create a classic global public goods issue, which governments are now addressing.15 Environmental, labour and financial market analysts will find these reflections unsurprising examples of both the limitations of global governance and the rapid maturing and increasing complexity of 'civil society'.
Public policy towards Internet content liability
'Visionaries' such as Barlow were not the only advocates of a self-regulatory structure for new media during the first years of the rapid consumer adoption of the WWW. The Clinton Presidency launched two major self-regulatory initiatives for the digital media sector: a US Presidential Advisory Committee on digital television, and another on privacy in electronic commerce.16 The Council of Europe and the European Commission issued a series of reports and recommendations promoting Internet self-regulation during the same period.17 The intervening years have seen the emergence of a fertile ecology of rule-making, regulatory competition, alternative dispute resolution and a complex interaction between state, co- and self-regulatory practices in the media sectors. And this complex and changing regulatory ecology has been further challenged by two major trends: convergence between previously distinct technologies such as telecommunications, broadcasting, games, the Internet and the press; and convergence between different national and regional regimes of self- and co-regulation.
To view all rule-making on the Internet in terms of a Manichean divide between state and 'freedom' may give an edge to Barlow's social critique, but it is unhelpful for our purpose here, which is to analyse and monitor the emerging structures of self- and co-regulation that apply to the Internet and other convergent media sectors. Because the Internet is critically important for communication (and, in turn, for culture and commerce), broader issues of trust and externalities will lead to legitimate demands for regulation. It will be clear that imagery of state 'invasion' of a self-governing Internet is misleading. Internet development does entail some public policy issues that engage the institutions of democratic governance, and it is only within the legal framework for Internet liability that Internet users and service providers can enjoy the relative freedoms to self-regulate that Barlow invokes. Our emerging structures of media regulation develop through competition, in response to a demand for regulation that is in constant negotiation between private and public institutions.
The first decade of WWW content rule-making has been the subject of little systematic European legal analysis. This study therefore examines the background to self-regulation in terms of the earliest rules for Internet conduct, and then explicates some of the key aspects of the demand for regulation in terms both of the economics of information and communication services and of apparent consumer harms associated with the Internet in particular, a...