Decoding Liberation

The Promise of Free and Open Source Software
About This Book

Software is more than a set of instructions for computers: it enables (and disables) political imperatives and policies. Nowhere is the potential for radical social and political change more apparent than in the practice and movement known as "free software." Free software makes the knowledge and innovation of its creators publicly available. This liberation of code—celebrated in free software's explicatory slogan "Think free speech, not free beer"—is the foundation, for example, of the Linux phenomenon.

Decoding Liberation provides a synoptic perspective on the relationships between free software and freedom. Focusing on five main themes—the emancipatory potential of technology, social liberties, the facilitation of creativity, the objectivity of computing as scientific practice, and the role of software in a cyborg world—the authors ask: What are the freedoms of free software, and how are they manifested? This book is essential reading for anyone interested in understanding how free software promises to transform not only technology but society as well.

Decoding Liberation, by Samir Chopra and Scott D. Dexter, is available in PDF and ePUB formats.

Information

Publisher: Routledge
Year: 2008
ISBN: 9781135864866
Edition: 1

1
Free Software and Political Economy

“For us, open source is capitalism and a business opportunity at its very best.”
—Jonathan Schwartz, president and chief operating officer, Sun Microsystems (Galli 2005)
“The narrative of the programmer is not that of the worker who is gradually given control; it is that of the craftsperson from whom control and autonomy were taken away.”
—Steven Weber (Weber 2004, 25)
Free software, in its modern incarnation, was founded largely on an ideology of “freedom, community, and principle,” with little regard for the profit motive (Stallman 1999, 70). Yet today FOSS makes headlines daily1 as corporations relying on “open source development” demonstrate remarkable financial success (Vaughan-Nichols 2005; Brown 2006); those that formerly hewed to closed-source development and distribution now open significant parts of their code repositories (Farrell 2006) and develop partnerships with open source firms (Haskins 2006; Krill 2006); and free and open source software–licensing schemes become challenges to be tackled by legal departments of software firms keen to draw on FOSS’s unique resources (Lindquist 2006).
FOSS incorporates a complex of political ideologies, economic practices, and models of governance that sustain it and uphold its production of value. It is a political economy whose cultural logic cannot be understood without reference to the history of computing and software development, the arc of which traverses some of the most vigorous technological innovation in history, a political economy heavily reliant on the marriage of industry and science. This historical context provides a framework both for placing FOSS’s salient characteristics and ideologies in dialogue with theories of political economy and for examining the import of the distinction between the free and open source movements. In particular, political economy provides a useful lens through which to understand the ways in which free and open source software invoke and revise traditional notions of property and production. While the principles of the free software movement were originally orthogonal to proprietary software and its attendant capitalist culture, the recent Open Source Initiative seeks greater resonance between them. Examining the process of this convergence enables an understanding of how capitalism is able to coexist with, and possibly co-opt, putative challenges to its dominance of the industrial and technological realms.

A Brief History of Computing and Software Development

The term “software” encompasses many modalities of conveying instruction to a computing device, marking out a continuum between programming the computer and using it, from microcode and firmware — “close to the metal” — to the mouse clicks of a gamer. Our contemporary interactions with computers, whether punching digits into cell phones or writing books on word processors, would be unrecognizable to users of fifty years ago. Then, computers were conceived as highly specialized machines for tasks beyond human capacity; visions of widespread computer use facilitating human creativity were purest speculation. This conception was changed by successive waves of user movements that radically reconfigured notions of computing and its presence in everyday life.
The postwar history of computing is marked by steady technological advance punctuated by critical social and economic interventions. In the 1950s, IBM introduced computing to American business; in the late 1950s, at MIT, the hacker movement established the first cultural counterweight to corporate computing; in the 1970s, personal computers demonstrated a new breadth of applications, attracting a more diverse population of users; and in the 1990s, the Internet became a truly global information network, sparking intense speculation about its political, cultural, and, indeed, metaphysical implications.
The history of software is intermingled with that of hardware; innovations in software had little effect on the practice of computing unless they were in sync with innovations in hardware.2 In particular, much of computing’s early technical history concerns hardware innovations that removed bottlenecks in computational performance. Software and programming emerged as a separate concern parallel to these developments in hardware. During the Second World War, J. Presper Eckert and John W. Mauchly designed and built the Electronic Numerical Integrator and Computer (ENIAC) at the University of Pennsylvania for the purpose of calculating artillery ballistic trajectories for the U.S. Army (Weik 1961). The first electronic general-purpose computer, the ENIAC, which became operational in 1946, would soon replace Herman Hollerith’s tabulating and calculating punch-card machines, then made popular by IBM. But it was Eckert and Mauchly’s next invention, the Electronic Discrete Variable Automatic Computer (EDVAC), designed with the significant collaboration of the prodigious mathematician John von Neumann, that represented true innovation (Kempf 1961). It was one of the first machines designed according to the “stored program” principle developed by von Neumann in 1945. Before stored-program computing, “programming” the computer amounted to a radical reconstruction of the machine, essentially hardwiring the program instructions directly into the machine’s internals. But with the stored-program principle, which allowed programs to be represented and manipulated as data, humans could interact with the machine purely through software without needing to change its hardware: as the physical hardware became a platform on which any program could be run, the computer became a truly general-purpose machine.
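The stored-program principle can be made concrete with a small illustrative sketch (ours, not the book's): a toy machine whose instructions occupy the same memory cells as its data, so that a program is itself just data that can be loaded, inspected, or modified without touching the hardware.

```python
# A minimal sketch of the stored-program principle: instructions and data
# share one memory, so "loading a program" is just writing data into memory.

def run(memory):
    """Interpret a tiny three-cell instruction format held in shared memory."""
    pc = 0  # program counter: an address into the same memory as the data
    while True:
        op, a, b = memory[pc], memory[pc + 1], memory[pc + 2]
        if op == "LOAD":      # write the literal a into cell b
            memory[b] = a
        elif op == "ADD":     # add the contents of cell a into cell b
            memory[b] += memory[a]
        elif op == "HALT":
            return memory
        pc += 3

# One memory image holds both the program (cells 0-11) and its data (cells 12-13).
image = [
    "LOAD", 2, 12,   # cell 12 <- 2
    "LOAD", 3, 13,   # cell 13 <- 3
    "ADD", 12, 13,   # cell 13 <- cell 13 + cell 12
    "HALT", 0, 0,
    0, 0,            # cells 12 and 13: workspace, in the same store as the code
]
result = run(image)
print(result[13])  # -> 5
```

Before this principle, changing the program meant rewiring the machine; here, it means only changing the contents of `image`.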
The term “programming a computer” originated around this time, though the relatively mundane “setting up” was a more common term (Ceruzzi 2003, 20). The task of programming in its early days bore little resemblance to today’s textual manipulations and drag-and-drop visual development environments. The electromechanical programmable calculator Harvard Mark I, designed by Howard Aiken, accepted instructions in the form of a row of punched holes on paper tape; Grace Murray Hopper, who joined the Mark I project in 1944, was among history’s first programmers. The Mark I was not a stored-program computer, so repeated sequences of instructions had to be individually coded on successive tape segments. Later, Aiken would modify the Mark I’s circuitry by hardwiring some of the more commonly used sequences of instructions. Based on this successful technique, Aiken’s design for the Mark III incorporated a device, similar to Konrad Zuse’s “Programmator,” that enabled the translation of programmer commands into the numerical codes understood by the Mark III processor.
With the advent of the stored-program computer, general-purpose computers became their own programmators. The first software libraries consisted of program sequences stored on tape for subsequent repeated use in user programs. This facilitated the construction, from reusable components, of special-purpose programs to solve specific problems. To some observers, this project promised to do for programming what Henry Ford had done for automobile manufacturing: to systematize production, based on components that could be used in many different programs (Mahoney 1990). Von Neumann’s sorting program for the EDVAC was probably the first program for a stored-program computer; Frances Holberton wrote similar sorting routines for UNIVAC data tapes.
In 1946, the University of Pennsylvania, striving to eliminate all commercial interests from the university, pressured Eckert and Mauchly to relinquish patent royalties from their inventions. The pair resigned, taking the computer industry with them, and founded the company that developed the UNIVAC. In the early days of commercial computing, software innovations were typically directed toward making usable machines for the small user community that constituted the embryonic computer market. The U.S. Census Bureau was one of Eckert and Mauchly’s first customers; Eckert and Mauchly’s machines found application in inventory management, logistics, and election prediction. In an early acknowledgement that computing cultures, such as they were, were different in universities, Eckert and Mauchly’s first corporate customer, General Electric, had to convince its stockholders that it had not come under the sway of “longhaired academics” (Ceruzzi 2003, 33).
In May 1952, IBM ended UNIVAC’s monopoly, joining the business of making and marketing computers by introducing the IBM 701. Thus began the epic cycle, part and parcel of the computing industry’s mythology, of dominant players upstaged by nimble upstarts. By 1956, IBM had shot past UNIVAC on the basis of its technical superiority and refined marketing techniques, with other corporate players such as Honeywell, GE, and RCA fueling the competition. IBM had become the dominant player in the computing sector by 1960,3 to the extent that the Justice Department, suspecting antitrust violations, began to show an interest in the company’s business practices4 that would culminate in the longest running antitrust case of all time, starting in 1969. As curiosity grew about its internal governance techniques, IBM was investigated by the media as well; the magazine Datamation, for example, extensively covered IBM’s internal affairs. Not all this media attention was uncritical: IBM was accused of being a poor technical innovator, of taking over projects brought to maturity by others, of lagging behind others in technical advances (Ceruzzi 2003). These charges would echo in the 1990s as Microsoft achieved similar dominance over the software world.
IBM’s classic mainframes, such as the 7090, were first installed in the late 1950s. Used in scientific, technical, and military applications, they came complete with lab-coated attendants, whirring tape drives, sterile white-walled data centers, and highly restricted access for both machine operators (responsible, for example, for the logistics of loading programs) and programmers. While the real control lay in the hands of the programmers, few were ever allowed in the computer room. Given the high costs of computing time, no programmer ever interacted with a machine directly. All jobs were run in batches, groups of jobs clustered together to minimize idle processor time. Programmers developed their work on decks of punched cards that were transferred to a reel of tape mounted by the operators onto a tape drive connected to the mainframe. Many programmers never saw the machines that ran their programs. The operators performed the tasks of mounting and unmounting tapes, reading status information and delivering output. Contrary to appearances, the work was not intellectually challenging: indeed, most operator tasks were later relegated to collections of software known as “operating systems.” In case of system malfunctions, however, operators were able to load programs by flipping switches to alter the bits of machine registers directly. Though tedious, this manipulation gave the operators an intimate relationship with the machine.
The advent of transistorized circuits brought increased power efficiency and circuit miniaturization; computers became smaller and cheaper, making their way to universities. While these machines were intended for use in data processing courses — that is, for strictly pedagogical purposes — at university computer centers students began a pattern of “playing with computers.” Users were envisioning applications for computers that went beyond their intended purposes. By 1960, the structures of commercial and academic computing were much the same, organized around computing centers that separated programmers from the machines they programmed. The drawbacks of the batch-processing model were first noticed in universities when the discipline of computer programming was initially taught: batch processing greatly constrained the amount of iterative feedback available to the student programmers. Thus the needs of the university, and the breadth of its user base, began to manifest themselves in academic requirements for computing. In a foreshadowing of contemporary university–corporate relations, IBM offered its 650 system to universities at massively discounted rates under the condition that they would offer computing courses. This strategy marked the beginning of direct corporate influence on the curricula of academic computing departments, one of the many not-so-benign relationships between university and corporation that contributed to the industrialization of the sciences (Noble 1979).

Programming Languages

The first programming languages, called “assembly languages,” simply provided mnemonics for machine instructions, with “macros” corresponding to longer sequences of instructions; large libraries of such macros were maintained by each user site. These languages used named variables for numeric memory addresses to enhance human readability; “assembler” programs managed these variable names and oversaw memory allocation. Given their utility to programmers, it is no surprise that the first assembler for the IBM 704 was developed in collaboration with the SHARE user group (see below). Computing pioneers had not foreseen the widespread nature of the activity of programming: it developed in response to user needs and was largely driven by them.
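As an illustration (our sketch, not the authors'), a toy assembler shows what these early tools did: expand macros into longer instruction sequences, translate mnemonics into numeric opcodes, and allocate memory addresses for named variables.

```python
# A toy assembler sketch. Opcode numbers, macro names, and the address range
# are invented for illustration; real assemblers of the era were site-specific.

OPCODES = {"LOAD": 1, "ADD": 2, "STORE": 3, "HALT": 0}

# A "macro" names a commonly used sequence of instructions, as in the
# macro libraries maintained at each user site.
MACROS = {"INCR x": ["LOAD x", "ADD one", "STORE x"]}

def assemble(lines):
    """Translate mnemonic source lines into (opcode, address) pairs."""
    symbols = {}
    def addr(name):
        # Allocate one memory cell per named variable, starting at cell 100.
        return symbols.setdefault(name, 100 + len(symbols))

    # Pass 1: expand macros into their underlying instruction sequences.
    expanded = []
    for line in lines:
        expanded.extend(MACROS.get(line, [line]))

    # Pass 2: replace mnemonics with opcodes and names with addresses.
    code = []
    for line in expanded:
        parts = line.split()
        operand = addr(parts[1]) if len(parts) > 1 else 0
        code.append((OPCODES[parts[0]], operand))
    return code

program = ["LOAD one", "INCR x", "HALT"]
machine_code = assemble(program)
print(machine_code)  # -> [(1, 100), (1, 101), (2, 100), (3, 101), (0, 0)]
```

The programmer writes five lines of readable mnemonics; the assembler, not the human, keeps track of which numeric cell holds `one` and which holds `x`.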
Even early in the development of the programming discipline, the cohort of programmers was regarded as a priesthood, largely because of the apparent obscurity of the code they manipulated. This obscurity was a primary motivation for J. H. Laning and N. Zierler’s development of the first “compiler,” a program that automatically translated programming commands entered by users into machine code. But this compiler was significantly slower than other methods for performing such translations. The continuing desire for efficient automatic translation led to the development of high-level programming languages, which also promised to dispel some of the mystery surrounding the practice of programming.
This phase of software’s history was one of heavy experimentation in programming-language design. The design of FORTRAN (FORmula TRANSlation) — developed in 1957 for the IBM 704 — made it possible to control a computer without direct knowledge of its internal mechanisms. COBOL (COmmon Business Oriented Language), a self-documenting language with descriptive variable names, was designed to make programming more accessible to a wider user base. More languages were developed — ALGOL (ALGOrithmic Language) and SNOBOL (StriNg Oriented symBOlic Language) among them — each promising ease of use combined with technical power. These developments defined the trajectory of programming languages toward greater abstraction at the programmer level. While programmers still had to master the syntax of these languages, their design allowed programmers to focus more exclusively on the logic of their programs.
In 1954, a cooperative effort to build a compiler for the IBM 701 was organized by four of IBM’s customers (Douglas Aircraft, North American Aviation, Ramo-Wooldridge, and The RAND Corporation) and IBM. Known as the Project for the Advancement of Coding Techniques (PACT), this group wrote a working compiler that went on to be released for the IBM 704 as well (Kim 2006). In 1955, a group of IBM 701 users located in Los Angeles, each faced with the unappealing prospect of upgrading their installations to the new 704, banded together in a similar belief that sharing skills and experiences was better than going it alone. The group, called Society to Help Alleviate Redundant Effort (SHARE), grew rapidly — to sixty-two member organizations in the first year — and developed an impressive library of routines that each member could use. The founding of SHARE — today still a vibrant association of IBM users with over twenty thousand members5 — was a blessing for IBM, as it accelerated the acceptance of its equipment and likely helped sales of the 704. As it grew, the group developed strong opinions about IBM’s technical agenda; IBM had little choice but to acknowledge SHARE’s direct influence on its decisions. SHARE also contributed significant software for IBM products, ranging from libraries for scientific computing to the SHARE Operating System (SOS).
It became increasingly common for the collaborative effort among corporations and users to produce vital technical components such as operating systems, as important to system usability as high-level programming languages: the FORTRAN Monitor System for the IBM 7090 was developed by a user group, while, in 1956, another group at the GM Research Laboratories developed routines for memory handling and allocation. In 1959, Bernie Galler, Bob Graham, and Bruce Arden at the University of Michigan, in order to meet the pedagogical needs of the student population, developed the Michigan Algorithmic Decoder (MAD), a programming language used in the development of RUNOFF, the first widely used text-processing system. In the fledgling world of computing, user cooperation and sharing was necessary; thus, the utility of collaborative work in managing the complexity of a technology was established early.
Though IBM developed system software for its legendary 360 series, it was only made usable through such user efforts. So onerous were the difficulties experienced by IBM in producing the 360 that software engineer Frederick Brooks was inspired to set out a systematic approach to software design based on division of labor in the programming process. Championed first by Harlan Mills in a sequence of articles written in the late 1960s (Mills 1983), these ideas about software design were presented by Brooks in his groundbreaking 1975 text on software engineering, The Mythical Man-Month (Brooks 1995). Mills and Brooks, acknowledging that the software industry was engaged in a “manufacturing process” like no other, laid out principles by which the labor of creating source code might be divided among groups of programmers to facilitate the efficient development of high-quality code. This industrial move introduced division of labor to emphasize efficiency: from the beginning, industrialization was pushed onto computer science, with long-term implications for the practice of the science.6 The complexity of software that Brooks described was recognized throughout the industry by 1968, when NATO’s Science Committee convened the first conference on software engineering at Garmisch-Partenkirchen, Germany. Concurrently, pioneering computer scientist Edsger Dijkstra published his influential letter, “Go To Statement Considered Harmful” (Dijkstra 1968), in an attempt to move programming to a more theoretical basis on which his new paradigm of “structured programming” could rest.
The year 1968 also saw a significant discussion unfold on the pages of the Communications of the Association of Computing Machinery (CACM), the flagship journal of the primary society for computing professionals. In a policy paper published by the Rockford Research Institute, Calvin Mooers had argued for trademark protection for his TRAC language to prevent its modification by users. University of Michigan professor Bernie Galler responded in a letter to the CACM, arguing that the best and most successful programming languages benefited from the input of users who could change them, noting in particular the success of SNOBOL, which he suggested had “benefited from ‘meritorious extensions’ by ‘irrepressible young people’ at universities” (Galler 1968). Mooers responded:
The visible and recognized TRAC trademark informs this public . . . that the language or computer capability identified by this trademark adheres authentically and exactly to a carefully drawn Rockford Research standard. . . . An adequate basis for proprietary software development and marketing is urgently needed particularly in view of the doubtful capabilities of copyright, patent or “trade secret” methods when applied to software. (Mooers 1968)
While most computer science professionals acknowledged the need for some protection in order to maintain compatibility among different versions of a language, Galler’s views had been borne out by the successful examples of collaborative development by the SHARE and MAD user groups. Significantly, Mooers’s communiqué had noted the inapplicability of extant legal protections to software, which would continue to be a point of contention as the software industry grew. As it turned out, Galler’s analysis was correct, and the trademarked TRAC language never became popular.
Pressure from the U.S. government, and IBM’s competitors, soon led to the phenomenon of “unbundling,” a significant step toward the commodification of software. In 1968, responding to IBM’s domination of the market, Control Data Corporation filed an antitrust suit against IBM. Anticipating a similar suit by the Department of Justice, IBM began to sell software and support services separately from its mainframes (though it preferred to lease its machines rather than sell them, maintaining nominal control over its products). IBM’s Customer Information Control System (CICS) was its first software “product”; IBM’s competitors were now able to sell software to customers who used IBM’s hardware.
Digital Equipment Corporation (DEC) adopted a different business model with the PDP-1 minicomputer, the first model in what would become the very ...

Table of contents

  1. Cover Page
  2. Title Page
  3. Copyright Page
  4. Dedication
  5. Acknowledgments
  6. Introduction
  7. 1 Free Software and Political Economy
  8. 2 The Ethics of Free Software
  9. 3 Free Software and the Aesthetics of Code
  10. 4 Free Software and the Scientific Practice of Computer Science
  11. 5 Free Software and the Political Philosophy of the Cyborg World
  12. Notes
  13. Bibliography