1

When Students Taught the Computer

IN 1958, Tom Kurtz wanted to run a computer program. He woke early on a Tuesday morning and drove five or so miles from his home in Hanover, New Hampshire, to the train station in White River Junction, Vermont. He brought with him a steel box. At the station, Kurtz boarded the 6:20 train to Boston and settled in for the three-hour ride, during which he would read to pass the time. On his arrival in Boston, he took a cab to MIT’s campus in Cambridge. Finally reaching the computer center at MIT, he opened the steel box. It contained hundreds of cardboard cards measuring about three inches by eight inches. One set of those cards, precisely ordered and held together with a rubber band, constituted his computer program. Other sets were programs created by colleagues at Dartmouth College, where he was a professor in the mathematics department. It was thanks to Dartmouth’s participation in the New England Computation Center at MIT that they had access to an IBM 704 mainframe computer. After Kurtz handed the stacks of cards over to an employee at the center, he had several hours to wait. On some occasions when he made this trip to Cambridge, he met with colleagues at MIT or nearby Harvard; other times he simply strolled around the city. Late in the afternoon, he returned to the computer center to pick up the cards, along with the precious printouts of each program’s results. Reviewing them on the evening train back to White River Junction, Kurtz saw that the results for his program runs contained error reports—yet again. Finally back at home in Hanover at the end of a long day, he was already thinking of how he might revise his program in the coming days, replace some cards with newly punched ones, and go through the process all over again two weeks later.1
A decade later, in 1968, Greg Dobbs, a student at Dartmouth College, wanted to run a computer program. He stepped out of his dormitory, Butterfield Hall, and walked a few hundred yards north to Webster Avenue, enjoying the September sunshine. He turned right on Webster and walked just a block to the new Kiewit Computation Center. At night, he could see Kiewit’s lights from his dorm room window. As he made his way to one of the few empty teletype terminals, he recognized some of his friends and classmates among the thirty or so students sitting at teletypewriters. He went through the habitual steps of the login routine, beginning by typing HELLO and pressing RETURN, and settled in for a game of FOOTBALL against the computer, typing his commands and receiving responses within seconds. He, like 80 percent of his student peers and 40 percent of Dartmouth faculty, embraced this new personal and social computing.2
In the early 1960s, computers were remote, inaccessible, and unfamiliar to Americans. The approximately six thousand computer installations around the nation clustered in military, business, and research-focused university settings. Individual access to computing in 1958 had been so rare, and so valuable, that Kurtz was willing to devote an entire day to gain the benefit of a few minutes of it. Within a decade, however, Kurtz and his colleague John Kemeny, together with a group of their students at Dartmouth, had transformed computing by creating an interactive network that all students and faculty, not just those working in the sciences or engineering, could use. This chapter argues that Kurtz, Kemeny, and their student assistants put the user first in the design and implementation of their network, thereby creating computing for the people. Their focus on simplicity for the user, instead of efficiency for the computer, combined with their commitment to accessible computing for the whole student body, set them apart from the mainstream of academic, industrial, and military computing.

The Problems with Mainframes

Computers were far from quotidian in 1958. In the Cold War context of the 1950s, the American military developed computing for defense against the Soviet Union with projects such as the extensive Semi-Automatic Ground Environment (SAGE) system to protect against Russian airborne attacks. Less than a year after the Soviet Union’s 1957 launch of its Sputnik satellite alarmed Americans, President Dwight Eisenhower requested from Congress a staggering $1.37 billion “to speed missile development and expand air defenses,” of which $29 million was for SAGE.3
This news conveyed that computers were essential to American protection—powerful and significant, but also remote and intimidating. During this post–World War II decade, American businesses ramped up both their production and their usage of computers. Remington Rand installed some of the earliest electronic, digital computers sold commercially in the United States—at the Census Bureau in 1951 and at General Electric (GE) in 1954. During that time, IBM competed with Remington Rand for leadership in the computer manufacturing field, but together they had only nine installations by the end of 1953.4 Although computers proliferated in military, commercial, and university spaces—with several thousand in use by 1960—they functioned behind the scenes. They were used, for example, to maintain consistent oil output at Texaco’s refinery in Port Arthur, Texas; to process checks for Bank of America; and to manage orders and inventories for Bethlehem Steel. In short, computers remained invisible to most Americans. Even when Kurtz visited the MIT Computation Center, he did not interact with the computer there.
Kurtz’s MIT experience was emblematic of programming in the era of mainframe computers. These machines were large and therefore demanded large spaces. The IBM 704 Data Processing System Kurtz used at MIT would have easily dominated a typical eighty-square-foot office.5 The mainframes commonly received input from punched cards like the ones Kurtz carried. A hole punched in the card at a particular location communicated a letter, number, or symbol to the computer, and each card featured several rows of punches. A computer operator loaded the cards into the computer to run the program. The computer communicated its results through more punched cards or magnetic tape or, most commonly, printouts.6 In addition to being large, the mainframes were also very fast and very expensive. MIT’s IBM 704 performed four thousand operations per second.7 In 1962, GE priced one of its average mainframe computers, the GE-225, and its auxiliary equipment at nearly $240,000—close to $2 million in 2018 dollars.8 Thus, any institution that had purchased or leased a mainframe aimed to keep it running as much as possible, to maximize its return on investment.
A carefully ordered set of punched cards often represented the culmination of the programming process. A mathematician like Kurtz first handwrote a program, either on scrap paper or in a special programming notebook. The notebook featured demarcated columns where the program author could write in commands and data that would be understood by the computer. The programmer could also make notes on what each step of the program was meant to accomplish. The columns were visual cues for converting handwritten notes to punched cards. In some cases, program authors punched their own cards using a keypunch machine. By 1958, Dartmouth had installed IBM keypunch equipment for its accounting operations, so Kurtz and his colleagues punched their own cards.9 In larger programming operations, the program author submitted handwritten programming notebook pages to a keypunch operator who would then punch the cards. Kurtz would have spent hours working out a complex program. After he translated his program onto punched cards and ran it, additional hours or days would be needed to address any errors—to debug the program.10
Numerous errors crept into this programming process. A misplaced period—a simple dot—written into the code and punched in the card could dramatically alter the results of a program. A hole punched in the wrong location on a card could create an error. Indicating division instead of addition for a particular programming function could wreak havoc with a program. If Kurtz produced a computer program to perform a series of mathematical operations, and at some point the program told the computer to divide by zero (an operation not anticipated by Kurtz), that would have been an error. Typos, punch errors, misplaced punctuation—all of these confounded programmers, as did the challenges of communicating with the computer via complicated programming languages.
If Kurtz had been the sole user of the computer while he programmed, the very fast and very expensive computer would spend only seconds, maybe minutes, actually running his program. And if Kurtz had been the sole user, the minutes during which he loaded his punched cards and waited for the computer to print his results would have been minutes during which the computer’s central processor was not active—costly minutes lost to inactivity. Thus, at the New England Computation Center at MIT and at other university computer centers during the late 1950s and through the 1960s, computer managers focused on how to most effectively use the scarce and expensive resource of computer processing.11
As a solution, computer operators organized groups of individual programs to be run together, one group after the next, with as little computer downtime as possible, to maximize computer utilization. These groups of programs were known as batches, and this method of using mainframe computers became known as batch processing. Batch processing kept the computer humming, but it left programmers waiting hours or days for results. A GE consultant offered this description in 1963:
If we follow a particular job [program] through this procedure, we find that the job is still waiting for its turn at all of these various manual input-output operations. It waits for keypunching, it waits for the batch to be collected to be put on the computer, it waits until the computer finishes processing all of the other jobs in the batch, it waits for the batch to be printed, it waits for someone to de-collate the combined output of the many jobs, and then it waits to be mailed or sent back to the man who had the problem run.12
Nonetheless, the various scientists, engineers, and business managers who relied on batch processing knew that a computer operating this way would still yield solutions faster than no computer at all. “So everyone puts up with this computer accessibility bottleneck,” the consultant concluded, “and wishes it wasn’t there.”13
Batch processing frustrated Kurtz and his colleagues in the mathematics department at Dartmouth. Dartmouth College is located in Hanover, New Hampshire, not far from the Connecticut River, which forms the natural border between Vermont to the west and New Hampshire to the east. Appalachian Trail through-hikers trek through Hanover on their journey from Vermont’s Green Mountains to New Hampshire’s White Mountains. One observer described the “endless procession of mountain lakes” nestled in the region’s lush green hills as a “year-round vacationland.”14 The Dartmouth community embraced the region’s recreational possibilities through its student-established Outing Club. In 1956, just a year before MIT formally dedicated its Computation Center,...