Philosophy and Computer Science
eBook - ePub

  1. 224 pages
  2. English
  3. ePUB (mobile friendly)

About This Book

Colburn (computer science, U. of Minnesota-Duluth) has a doctorate in philosophy and an advanced degree in computer science; he's worked as a philosophy professor, a computer programmer, and a research scientist in artificial intelligence. Here he discusses the philosophical foundations of artificial intelligence; the new encounter of science and philosophy (logic, models of the mind and of reasoning, epistemology); and the philosophy of computer science (touching on math, abstraction, software, and ontology).

Information

Publisher: Routledge
Year: 2015
ISBN: 9781317462828

– 1 –

Introduction

“Philosophy and computer science … isn’t that a rather odd combination?” Such is the typical cocktail-party response when learning of my academic training in the discipline Socrates called “the love of wisdom” and my subsequent immersion in the world of bytes, programs, systems analysis, and government contracts. And such might be the reaction to the title of this book. But despite its cloistered reputation and its literary, as opposed to technological, image, the tradition of philosophical investigation, as all of us who have been seduced by it know, has no turf limits. While few but the truly prepared venture into philosophy’s hardcore “inner circle” of epistemology, metaphysics, (meta)ethics, and logic, literally anything is fair philosophical game in the outer circle in which most of us exist. And so we have the various “philosophies of”: the philosophy of science, of art, of language, of education. Some of these even have names befitting their integration into vital areas of modern society, for example, medical ethics and environmental ethics, which we can say are shorter names for the philosophies of ethical decisions in medicine and ecology. One of the aims of this book is to make an early contribution to a nascent philosophy of computer science.
Which is not to say that there has not been a vast amount of work done that can be described as the cross-disciplinary encounter of philosophy with computer science. Despite the typical cocktail-party reaction to this combination, the solutions to many problems in computer science have benefited from what we might call applied philosophy. For example, hardware logic gate design would not be possible without Boolean algebra, developed by the nineteenth-century mathematician George Boole, whose work helped lay the foundation for modern logic. Later work in logic, particularly Gottlob Frege’s development of the predicate calculus, has been drawn upon extensively by researchers in software engineering who desire a formal language for computer program semantics. The predicate calculus is also the formal model used by many of those who implement automated reasoning systems for mechanical theorem proving. These theorem-proving techniques have even formed the basis for a style of general-purpose computer programming called logic programming.
Furthermore, the application of philosophical methods to computer science is not limited to those in logic. The study of ethics, for example, has found broad application to computer-related issues of privacy, security, and law. While these issues are not regarded as germane to the science of computing per se, they have arisen directly as a result of the drastic changes society has undergone due to the ubiquity and power of computers. In 1990, a major U.S. software vendor attempted to openly market a large mailing list compiled from public sources, but was forced to withdraw it when the public outcry over invasion of privacy became too great. While the scaling back of the U.S. Strategic Defense Initiative in the 1980s could be seen as a response to technical feasibility questions, a major underlying moral concern was whether a nation ought to entrust its security, to such a large extent, to machines. And now, with the pervading influence of the World Wide Web, society has been forced to confront issues regarding freedom and decency in the digital world.
Within the field of law, many sticky ethical questions related to computers have arisen: Is unauthorized use of a computer from the privacy of one’s own home, without damaging any files or programs (i.e., hacking), the same as breaking and entering? Can authors of programs that are experts in medicine or law be sued for malpractice? Should computer programs be copyrightable, or should they be free, like air? Should programmed trading be allowed on the stock exchange? Answers to the last two questions, and others like them, would have significant effects on the conduct of our economy. None of these questions could have been predicted a mere few decades ago. Today, it would be difficult to find a college curriculum that did not include, in either the computer science or the philosophy department, a course entitled “Computers and Society,” “Values and Technology,” or the like.
But our inquiry in this book goes beyond the application of philosophical method to specific issues like those just mentioned. It seeks a new encounter between philosophy and science by examining the ways they can change one another in the context of one of science’s newest disciplines. This type of inquiry is in addition to the traditional concern of philosophy of science, which, in its analyses of concepts like explanation, theory, and the ontological status of inferred entities, is typically unaffected by the content of particular scientific discoveries. This insular nature of philosophical content and method is being challenged by work in the area of computer science known as artificial intelligence (AI), particularly in the traditional philosophical areas of logic, philosophy of mind, and epistemology.
Although even the definition of the field of AI is fraught with philosophical debate, genuine philosophical questions come to the fore as researchers attempt to model human intelligence in computer programs: What is the structure of human knowledge (so that we may represent it in computer memory)? What is the process of human thought (so that we may model reasoning, learning, and creativity in computer programs)? Interestingly, while AI researchers must ask the same sorts of cognitive questions as philosophers do, they usually agree with the pervasive assumption, stated by Hobbes in the seventeenth century, that “cognition is computation,” a point of view certainly not shared by all philosophers. One of the fascinating aspects of AI is the interest it holds for both computer scientists and philosophers. As a subfield of computer science, it is a young discipline, but the questions it raises have been the objects of philosophical investigation for centuries. There is no dearth of writing on this confluence of concerns from seemingly disparate disciplines, but Part I of this book, “Philosophical Foundations of Artificial Intelligence,” provides a fresh treatment of their relationship by returning to historical philosophical problems and looking at them in the light of how they would set the stage for an age when people would begin pronouncing certain computer programs “intelligent.”
This retro-treatment of historical philosophy is an interesting exercise, because it allows us to imagine an epochal sweep of philosophical musings through the ages, in which concepts of mind and reasoning are first rooted in the formal or the divine, then become powers of humanity’s own individuality, and finally are manifest in humanity’s own artifacts. However one feels about the inexorability of this sweep, one thing is clear: The construction of models of mind and reasoning has today forced many philosophers out of the cloistered confines of their a priori worlds.
One reason for this emergence is consistent with the traditional role of philosophy as a guiding beacon, a giver rather than receiver of wisdom in its encounter with science. AI underwent a resurgence in the 1980s that was primarily the result of its switching focus from systems for merely automated reasoning to so-called knowledge-based systems. Prior to this, the most important theoretical tool for the AI researcher was logic, and it was thought that by automating formal and well-understood patterns of inference, one could emulate human intelligent behavior in a computer program. Insofar as logic had been the province of philosophers and mathematicians, it was obvious that their previous work had a bearing on AI. However, many AI researchers began to believe that the role of reasoning in machine intelligence had been overemphasized at the expense of knowledge. Early AI programs were very good at proving theorems in first-order predicate logic, but such programs proved hugely inefficient when used to implement nontrivial systems for reasoning in specific areas. It became obvious that more effort spent on acquiring and digitally representing knowledge in a specific area, combined with even a minimal reasoning mechanism, would pay off with programs that more accurately emulated human expertise in that area, and the first truly successful applications in AI became known as expert systems. Such programs were said to be knowledge-based because much of the intense effort in their development centered on the representation and manipulation of specific knowledge, as opposed to the efficient modeling of mechanisms of pure reason.
Thus AI researchers became interested in the concept of knowledge as well as logic, and it seemed reasonable to suppose that they could learn something from philosophers, who have been thinking about knowledge for a long time. The most important area of AI directly related to epistemology became known as knowledge representation. But it was clear that, to truly emulate intelligent behavior, not only models of knowledge representation but also models of coming to know were necessary. In other words, AI programs had to be able to learn. So there are several important aspects of knowledge with which AI researchers and practitioners must be concerned.
This question of how philosophy can help us do AI is couched in the language of interdisciplinary cooperation, in which one discipline perhaps serendipitously benefits another by offering relevant work already done or an insightful outlook previously unseen. That this relationship is even possible between AI and philosophy is due to the overlap of subject matter: philosophy is concerned with issues of human knowledge and reasoning, and AI is concerned with modeling human knowledge and reasoning.
But philosophy, in its general role of critically evaluating beliefs, is more than merely a potential partner with AI. Perhaps the most visible role philosophy has played in this relationship is that of watchdog, in which it delineates the limits, and sometimes even attempts to destroy the foundations, of AI.
These critiques proceed by taking to task the claim that computation is even an appropriate model for human thought or consciousness in the first place. Their primary focus is not logic or knowledge, but mind. Here the primary question is whether philosophy can tell AI what it can do. Many philosophers believe the answer to this question is yes, but they are largely ignored by AI researchers and practitioners, because the latter’s focus is not mind but logic and knowledge. They may ignore philosophy at their peril, but it is becoming clear that a swing in the opposite direction has occurred, to the point where it is claimed that technological advances, especially those in computer science, can shed light on traditional philosophical problems.
To claim this was unthinkable within many philosophical circles just two decades ago, and there are still those who will steadfastly resist countenancing the possibility. But since certain questions traditionally thought to be philosophical—such as: How do we come to know things? What is the structure of knowledge? What is the nature of the mind?—are now being asked by AI and cognitive science researchers as well, it is inevitable that these researchers will offer answers in the technical terms with which they are familiar. In short, the rapid growth in the speed and complexity of computing machines is tempting people to put forth models of the human mind in terms of computer science. But why are computer science models so tempting? To answer this, it helps to discuss something about the relation between science and philosophy in general.
Science and philosophy are often distinguished by pointing out that science seeks explanation while philosophy seeks justification. To ask what causes the tides is a scientific question, while to ask what would constitute adequate grounds for believing that I see the moon is a philosophical one. “What reason do you have for believing X?” is a typical question asked by philosophers, and the nature of X determines the kind of philosophy undertaken. For example, “What reason do you have for believing that mercy killing is wrong?” is a question for normative ethics, “What reason do you have for believing in the existence of disembodied minds?” is a question for philosophy of mind, and “What reason do you have for believing that this argument is valid?” is a question for philosophical logic. So philosophy has been characterized as the critical evaluation of beliefs through the analysis of concepts in a given area of inquiry.
Of course, science is also concerned with critically evaluating beliefs and analyzing concepts. But when one looks at the kinds of things X is in the questions above, one notices that none of them lend themselves to empirical study. One need not witness actual cases of mercy killing to come to a conclusion about whether it ought to be done. By definition, a disembodied mind is one that cannot be substantiated through physical observation. And the validity of an argument form is not determined by looking for instances of the form in the world. So philosophy has also been characterized as a nonempirical, or a priori, discipline, in distinct contrast with science.
Computer science, being a science, would seem to be distinguished from philosophy just as any other science is. But computer science is unique among the sciences in the types of models it creates. In seeking explanations, science often constructs models to test hypotheses for explaining phenomena. For example, it might be hypothesized that the phenomenon of the northern lights is caused by the interaction of solar atoms with the earth’s magnetic field. To test this hypothesis, a model of the earth and its magnetic field could be created in a laboratory, complete with appropriate magnets and gaseous elements. Then, if, under the right conditions, luminosity is observed, the hypothesis may be said to be confirmed. This model, in the form of experimental apparatus, is of course a physical object, like many models built and manipulated in any of the natural sciences. The models built and manipulated in computer science, however, are not physical at all.
Computer science is a science concerned with the study of computational processes. A computational process is distinguished from, say, a chemical or electrical process, in that it is studied “in ways that ignore its physical nature.”1 For example, the process by which a card player arranges cards in her hand, and the process by which a computer sorts names in a customer list, though they share nothing in common physically, may nevertheless embody the same computational process. They may, for example, both proceed by scanning the items to be arranged one by one, determining the proper place of each scanned item relative to the items already scanned, and inserting it into that place, perhaps necessitating the moving of previously scanned items to make room. This process (known as an insertion sort in computer science terms) can be precisely described in a formal language without talking about playing cards or semiconducting elements. When so described, one has a computational model of the process in the form of a computer program. This model can be tested, in a way analogous to how a hypothesis is tested in the natural sciences, by executing the program and observing its behavior. It can also be reasoned about abstractly, so that we may answer questions about it, such as: Are there other processes that will have the same effect but achieve it more efficiently? Building computational models and answering these kinds of questions form a large part of what computer scientists do.
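To make the insertion sort concrete, here is a minimal sketch in Python (an illustration added for this edition, not part of Colburn’s text) that carries out exactly the process just described: scan the items one by one, find each scanned item’s proper place among those already scanned, and shift previously scanned items to make room.

    def insertion_sort(items):
        # Scan the items to be arranged one by one.
        for i in range(1, len(items)):
            current = items[i]            # the newly scanned item
            j = i - 1
            # Determine its proper place relative to the items already
            # scanned, shifting previously scanned items to make room.
            while j >= 0 and items[j] > current:
                items[j + 1] = items[j]
                j -= 1
            items[j + 1] = current        # insert it into that place
        return items

    # The same computational process, whether the items are names in a
    # customer list or ranks in a card player's hand:
    print(insertion_sort(["Ng", "Alvarez", "Kowalski", "Boole"]))
    print(insertion_sort([7, 2, 10, 4]))

Nothing in the sketch mentions playing cards or semiconducting elements; the process is specified, executed, and reasoned about entirely at the computational level, which is precisely the point.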
The explosive growth in the number of computer applications in the last several decades has shown that the kinds of real world processes amenable to modeling by computer are limitless. Not only have traditional activities, like record keeping, investing, publishing, and banking, been simply converted to control by computational models, but whole new kinds of activity have been created that would not be possible without such models. These are the by-now-familiar “virtual” activities we describe in the language of cyberspace: e-mail, chat rooms, Web surfing, on-line shopping, Internet gaming, and so on. But long before computers came to dominate everyday life, computational models were employed to describe processes of a special sort, which have existed as long as modern Homo sapiens has existed. These are the processes associated with human reasoning and knowledge organization, and computational models of them are the concern of AI.
The study of the nature of human reasoning and knowledge, in the form of logic and epistemology, has, of course, been a focus of western philosophy since Plato and Aristotle. However, not until the latter part of the twentieth century and the advent of the digital computer did it become possible to actually build models of reasoning that contained alleged representations of human knowledge. Before that time, if you wanted to study human reasoning or the structure of human knowledge, you remained for the most part in the a priori world of philosophy, utilizing perhaps a datum or two from psychology. With computers, however, it became possible to test one’s epistemological theory if the theory was realizable in a computer model. It therefore became reasonable to at least ask: Can AI, as an empirical discipline concerned with building and observing models of human cognitive behavior, help us do philosophy?...

Table of contents

  1. Cover
  2. Half Title
  3. Title Page
  4. Copyright Page
  5. Dedication
  6. Table of Contents
  7. Series Preface
  8. Acknowledgments
  9. 1 Introduction
  10. Part I Philosophical Foundations of Artificial Intelligence
  11. Part II The New Encounter of Science and Philosophy
  12. Part III The Philosophy of Computer Science
  13. Notes
  14. Bibliography
  15. Index