Chapter 1
LABS, LIVES, TECHNOSCIENCE
The Engineering Quadrangle (E-Quad) at Princeton University is located at the eastern edge of campus. Constructed in the 1960s, the building's design marks a shift away from the collegiate gothic style that reigns supreme farther to the west. The effect, therefore, of approaching E-Quad's façade from the center of campus is a sense of crossing the threshold to modernity. Up the steps, inside, Ron Weiss could most often be found in his office in the Department of Electrical Engineering, where he had been employed since 2001.
Ron's lab, located on the same floor as his office, occupied two discrete rooms at the end of a long corridor. Both rooms were impeccably neat, owing to the gruff but supportive presence of the lab manager, Mike, whose full-time job was to oversee lab nitty-gritty, restock, and make sure that everything was in working order. Mike was stationed in the larger and older lab, which contained lots of bench space, computers, machines, instruments, desks, flasks, bottles, and refrigerators, as well as a special room for the Zeiss microscope, one of the most expensive pieces of equipment in the lab's possession and the one that required the most resource-sharing coordination between individuals who signed up to use it at all hours of the night. The other lab space, located across the hall, was newer and smaller, and contained all of its own basic equipment, as well as a cold room and a cell culture and virus room with two fume hoods.
Ron directed these laboratory spaces, but the majority of his time was not spent in them. The primary inhabitants of these rooms were undergraduates, graduate students, and postdocs, who brought with them different levels of expertise in a variety of disciplines. Since Princeton had no bioengineering track or program, the more permanent members of Ron's lab were funneled in through either molecular biology or engineering fields. As is often the case in academic laboratories, graduate students early in their careers rotated among a few labs for short test periods, gauging their interest in the research and their compatibility with the setting before settling on the lab in which they would pursue their graduate degrees. A number of graduate students had chosen Ron's lab as their permanent home. The lab also housed four postdocs, representing four different fields: neuroscience, molecular biology, biophysics, and protein chemistry. The biologically trained students mostly lacked familiarity with circuit abstractions and computational models, whereas the engineering students often needed training in wet lab work. Everyone was an amateur at something.
At Princeton, Ron was the only adherent of the parts-based approach to synthetic biology. He was in his late thirties, which placed him on the younger end of a generation of practitioners in the budding field to be running their own labs. Like the older crop of mid-career scientists and engineers redirecting their research efforts and retooling their labs to do synthetic biology, Ron necessarily built an assortment of competencies in markedly unstructured settings. This was one of the defining features of Ron's description of his own career path (which I recount below). This feature resonates more generally with the way the parts-based approach to synthetic biology, pursued mostly by trained engineers, has taken shape among practitioners who not only knew very little about biological systems at the outset but also lacked the more organic know-how tied to the daily care and manipulation of living things in the lab.
In her classic study of physicists, Sharon Traweek tracks the way the experimental particle physics community reproduces itself through the training of novices.1 She identifies the patterns through which education and inculcation occur, and by which particle physicists learn the criteria for a successful career. The images she conveys of community, stability, and gendered reproduction can be discerned only within a sufficiently entrenched discipline. In contrast, in an unstable and ambiguously bounded field like synthetic biology, idiosyncratic individual paths figure prominently, especially for members of the first generation of practitioners, like Ron and Michael, whose training necessarily took place within, or between, the reproductive mechanisms of established disciplines. Such paths embed different concepts and logics within the synthetic organisms made in different labs. They also culminate in different normative frameworks for assessing what counts as a project worth pursuing, or a question worth asking, or a life worth making.
I interviewed Ron fairly regularly during my time in his lab. On one occasion, when I asked Ron whether he identified as a synthetic biologist, he demurred, noting that the label sounded funny. No, he explained, he identified first and foremost as an engineer.
"I've always been interested in computers," he explained. He had inherited the interest in computers from his father, who had worked for IBM in Israel before moving the family to the United States, when Ron was fourteen years old, to take a job at a software company in Texas.
Ron spent much of his childhood and adolescence programming. When he enrolled as an undergraduate at Brandeis University, he knew he wanted to pursue computer science. In his senior year at Brandeis, he applied to graduate school and, in the early 1990s, joined MIT's prestigious Department of Electrical Engineering and Computer Science as a doctoral student.
At MIT, he began his studies with a focus on digital media and information retrieval, but he had not found research in these areas to be "life-fulfilling" work. Then came amorphous computing: the term was coined in a 1996 white paper, "Amorphous Computing Manifesto," whose lead authors were all faculty in MIT's Department of Electrical Engineering and Computer Science.2 The term "amorphous computing" describes computational systems made up of very large numbers of processors that interact locally and that possess limited individual computational ability. Researchers studying amorphous computing draw inspiration from biological systems, which provide models and examples of how huge numbers of irregular and computationally limited bits can coordinate behavior or produce global patterns. It was therefore through amorphous computing that Ron first encountered research at the interface of programming and biology. Yet the step from amorphous computing to synthetic biology was still a fairly drastic one. Ron explained it to me as follows:
I was looking at biology as a way to get inspired for how to program computers, and I was doing all kinds of simulation and things like that. There was a point when I came to the conclusion that rather than wanting to look at biology as a way to get inspired for how to program computers, I actually want to reverse the arrow and say, how can I look at computing and understand how to program biology. And at that point I teamed up with the person who ended up being my main adviser, Tom Knight, who's actually one of the visionaries in synthetic biology. He's one of the people who started the field. And I helped him set up a wet lab in the computer science building. From then on I was working on bioengineering.3
Ron was not alone in praising his mentor. In a write-up about synthetic biology from 2005 that appeared in Wired magazine, Oliver Morton called Knight "an MIT institution."4 A computer engineer, Knight spent much of his life in and around MIT's Computer Science and Artificial Intelligence Laboratory. He is often credited with having been in some sense the "father" of synthetic biology, an analogy that draws together both his role in elaborating the basic idea of how biology might become the substrate of choice for a veritable engineering field and his efforts to spur institutional and infrastructural developments that would bring such a field to fruition. Knight, for example, was one of the individuals responsible for launching the International Genetically Engineered Machine Competition (iGEM), an event worthy of a brief descriptive detour, since Ron's lab was peopled by its participants and engrossed in its pursuit for a good portion of the year.
iGEM is a remarkable synthetic biology fête where institutionally affiliated teams of undergraduates tackle synthetic biology projects under the guidance of faculty, postdocs, and graduate students. Celebrated as a major site of infrastructure building for the parts-based approach, iGEM is rife with peculiarly late-twentieth- and early-twenty-first-century technological and institutional arrangements. Corporate sponsorships for individual teams are de rigueur. Logos, public and private, are emblazoned on team T-shirts and posters. The competition was launched in the early 2000s alongside a template for putting together standard biological parts called BioBricks. iGEM teams built, characterized, and circulated these parts, growing a material and informational library for bacterial synthetic biology.5 Ron had assisted in the early stages and had led a team for Princeton every year since the competition's inception.
iGEM was launched as an attempt to replicate some of the magic of a previous era in MIT computer science, one defined by Lynn Conway's legendary large-scale integration class, which is considered by many to have revolutionized electronics. Conway, a computer architect from Xerox PARC, in collaboration with Caltech's Carver Mead, developed a new chip-making method called VLSI (very-large-scale integration), which separated the design process from manufacturing. Conway's course, first taught in the late 1970s, was a response to the strategic secrecy of early semiconductor companies that used the technology for limited military and industrial applications. It offered a hands-on experience for participants, owing largely to grant support from DARPA that allowed students to have their chip designs assembled at a chip foundry in California. The hands-on ethic, the ideals of openness and free(ish) circulation, and the recruitment of energetic and naïve youngsters as the catalysts for a technological revolution were all features Knight and a few others sought to re-create. Even DARPA did its part at the outset, paying the cost of DNA synthesis.
Before iGEM, while still a graduate student at MIT, Ron helped Knight get started. First, Ron, Knight, and one other graduate student who was aiding in the efforts had to learn how to work with cells. They were completely unfamiliar with wet lab work. They started by taking some undergraduate courses in biology, reading papers and books, and talking to people. Mainly, Ron recalled, "We just picked it up as we went along … and just tried things. It was probably not the most efficient way to learn, but it's kind of the MIT way." The institution, Ron claimed, was supportive of innovative work and propagated an ethos of self-reliance (a characterization to which I return in chapter 2). "At MIT they are very open. They like crazy ideas. Doing something nontraditional is something that people enjoy. You pretty much assume that you can figure it out by yourself and you don't need anybody else."
Having gotten the wet lab going, Ron set about running experiments. As he recalled, the contrast between spending one's days programming and doing wet lab work was a stark one. Whereas in computer science, "if you make a mistake you can go back and fix it, in biology, it doesn't work that way." Mistakes now meant having to repeat specific protocols or entire experiments. One had to be diligent and attentive, Ron explained, remembering the frustration of those months, and the doubts that accompanied the daily lab work: "I was thinking to myself, maybe I shouldn't have switched to do all this biology stuff."
Ron soon encountered the first real hurdle when he tried to build a plasmid. Plasmids are circular pieces of DNA that replicate independently of chromosomes and that allow experimenters to manipulate genes. Plasmid building is notoriously tedious and prone to human error, increasing the temporal and energetic start-up costs of research drastically. On the surface, plasmid building requires that practitioners follow a fairly straightforward step-by-step process. Yet discussions of the fickle outcomes of this process were often fringed with an air of mystery. Some practitioners had consistently better luck for unknown reasons. Indeed, as Ron recalled, it took him six whole months to confirm his first plasmid. A week later, a molecular biologist the lab had hired for assistance arrived on the scene. In recollecting her effect on their work, Ron implicitly recanted some of his enthusiasm for sacrificing efficiency in the name of self-reliance. He realized the value of this new lab member quickly: "She was a biologist. An actual biologist. Which is something we should have done week one. We should have had an actual biologist."
That first plasmid was put to use in a digital logic circuit made of cells, which served as a major part of Ron's doctoral thesis. The first generation of circuits, like the ones Ron built, was used to exemplify the utility of engineering abstractions for the construction of novel biological systems. Thus, these early, rudimentary circuits achieved fairly modest technological goals that worked to validate the approach while also cinching a set of engineering abstractions to a new substrate. Referring to this first generation of circuits, a collaborator of Ron's in molecular biology at Princeton once remarked that it was neat that engineers had taught a dog how to talk. He quickly added that it was time to start caring about what the dog was actually saying.
Ron spent nine years at MIT. He didn't want to leave, but his family had grown, and "it was time for a real job." He went on the academic job market, applying, mostly, to computer science departments. Princeton's was in fact the lone electrical engineering department in the mix. His interview went well, despite some perceived resistance from the molecular biologists who came to vet his job talk and who, Ron sensed, unlike engineers, didn't quite see the point of the work. Nonetheless, Ron was offered the job at Princeton and accepted it. A subsequent steady stream of well-placed publications secured his tenure case and his standing among practitioners of the parts-based approach.
Michael Hecht and the members of his lab didn't talk about circuits or wiring. None of them had backgrounds in engineering. Yet Michael's research emerged from a tradition of work that provided epistemic validation for contemporary attempts to found a biological discipline aimed at synthesis.
In 1828, Friedrich Wöhler famously synthesized urea, an organic compound secreted in urine, shattering the divide between the organic and the inorganic. At the time, scientists had managed to cross the divide in only one direction, transforming organic molecules into inorganic ones through various treatments. But the inability to perform the opposite operation had bolstered the notion that some vital force divided the animate from the inanimate. In the decades that followed Wöhler's discovery, chemists began not only to assemble an array of organic molecules but also to synthesize new compounds similar to those found in nature, contributing to modern theories of chemical structure and reactivity, while also leaving their mark on many aspects of human life, from medicine to agriculture and beyond.6 The notion that synthesis and analysis could be conjoined was therefore not a novel insight for a chemist, nor was the deployment of this particular pairing at the boundaries of the living.
In his mid-fifties, with booming voice and jovial charisma, Michael had spent much of his career as a professor in Princeton's chemistry department. When I began my fieldwork, his lab and office were located in Hoyt, a 1970s extension to the gothic Frick Chemistry Laboratory at Princeton, which was conveniently located one door down from the anthropology department on Washington Road. (Since I recount the story of the lab's move to the new ho...