Renewing Philosophy

About This Book

Hilary Putnam, one of America's most distinguished philosophers, surveys an astonishingly wide range of issues and proposes a new, clear-cut approach to philosophical questions—a renewal of philosophy. He contests the view that only science offers an appropriate model for philosophical inquiry. His discussion of topics from artificial intelligence to natural selection, and of reductive philosophical views derived from these models, identifies the insuperable problems encountered when philosophy ignores the normative or attempts to reduce it to something else.

1

The Project of Artificial Intelligence

Traditionally Gifford Lectures have dealt with questions connected with religion. In recent years, although reference to religion has never been wholly absent, they have sometimes been given by scientists and philosophers of science, and have dealt with the latest knowledge in cosmology, elementary particle physics, and so on. No doubt the change reflects a change in the culture, and particularly in the philosophical culture. But these facts about the Gifford Lectures—their historical concern with religion and their more recent concern with science—both speak to me. As a practicing Jew, I am someone for whom the religious dimension of life has become increasingly important, although it is not a dimension that I know how to philosophize about except by indirection; and the study of science has loomed large in my life. In fact, when I first began to teach philosophy, back in the early 1950s, I thought of myself as a philosopher of science (although I included philosophy of language and philosophy of mind in my generous interpretation of the phrase “philosophy of science”). Those who know my writings from that period may wonder how I reconciled my religious streak, which existed to some extent even back then, and my general scientific materialist worldview at that time. The answer is that I didn’t reconcile them. I was a thoroughgoing atheist, and I was a believer. I simply kept these two parts of myself separate.
In the main, however, it was the scientific materialist that was dominant in me in the fifties and sixties. I believed that everything there is can be explained and described by a single theory. Of course we shall never know that theory in detail, and even about the general principles we shall always be somewhat in error. But I believed that we can see in present-day science what the general outlines of such a theory must look like. In particular, I believed that the best metaphysics is physics, or, more precisely, that the best metaphysics is what the positivists called “unified science”, science pictured as based on and unified by the application of the laws of fundamental physics. In our time, Bernard Williams has claimed that we have at least a sketch of an “absolute conception of the world” in present-day physics.1 Many analytic philosophers today subscribe to such a view, and for a philosopher who subscribes to it the task of philosophy becomes largely one of commenting on and speculating about the progress of science, especially as it bears or seems to bear on the various traditional problems of philosophy.
When I was young, a very different conception of philosophy was represented by the work of John Dewey. Dewey held that the idea of a single theory that explains everything has been a disaster in the history of philosophy. Science itself, Dewey once pointed out, has never consisted of a single unified theory, nor have the various theories which existed at any one time ever been wholly consistent. While we should not stop trying to make our theories consistent—Dewey did not regard inconsistency as a virtue—in philosophy we should abandon the dream of a single absolute conception of the world, he thought. Instead of seeking a final theory—whether it calls itself an “absolute conception of the world” or not—that would explain everything, we should see philosophy as a reflection on how human beings can resolve the various sorts of “problematical situations” that they encounter, whether in science, in ethics, in politics, in education, or wherever. My own philosophical evolution has been from a view like Bernard Williams’ to a view much more like John Dewey’s. In this book I want to explain and, to the extent possible in the space available, to justify this change in my philosophical attitude.
In the first three chapters, I begin with a look at some of the ways in which philosophers have suggested that modern cognitive science explains the the link between language and the world. This chapter deals with Artificial Intelligence. Chapter 2 will discuss the idea that evolutionary theory is the key to the mysteries of intentionality (i.e., of truth and reference), while Chapter 3 will discuss the claim made by the philosopher Jerry Fodor that one can define reference in terms of causal/counterfactual notions. In particular, I want to suggest that we can and should accept the idea that cognitive psychology does not simply reduce to brain science cum computer science, in the way that so many people (including most practitioners of “cognitive science”) expect it to.
I just spoke of a particular picture of what the scientific worldview is, the view that science ultimately reduces to physics, or at least is unified by the world picture of physics. The idea of the mind as a sort of “reckoning machine” goes back to the birth of that “scientific worldview” in the seventeenth and eighteenth centuries. For example, Hobbes suggested that thinking is appropriately called “reckoning”, because it really is a manipulation of signs according to rules (analogous to calculating rules), and La Mettrie scandalized his time with the claim that man is just a machine (L’Homme Machine).2 These ideas were, not surprisingly, associated with materialism. And the question which anyone who touches on the topic of Artificial Intelligence is asked again and again is “Do you think that a computing machine could have intelligence, consciousness, and so on, in the way that human beings do?” Sometimes the question is meant as “could it in principle” and sometimes as “could it really, in practice” (to my mind, the far more interesting question).
The story of the computer, and of Alan Turing’s role in the conception of the modern computer, has often been told. In the thirties, Turing formulated the notion of computability3 in terms which connect directly with computers (which had not yet been invented). In fact, the modern digital computer is a realization of the idea of a “universal Turing machine”. A couple of decades later materialists like my former self came to claim that “the mind is a Turing machine”. It is interesting to ask why this seemed so evident to me (and still seems evident to many philosophers of mind).
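Since the argument that follows turns on what a Turing machine is, a minimal sketch may help. The Python fragment below is purely illustrative and is not drawn from Putnam's text; the example transition table (a hypothetical machine that appends a “1” to a block of “1”s, that is, computes n to n + 1 in unary) is invented for the occasion.

```python
# Minimal Turing machine simulator (illustrative sketch only, not from Putnam's text).
# A machine is a finite transition table:
#   (state, scanned symbol) -> (symbol to write, head move "L"/"R", next state)

def run_turing_machine(table, tape, state="start", blank="_", max_steps=1000):
    cells = dict(enumerate(tape))          # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        write, move, state = table[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# Hypothetical example table: append one "1" to a block of "1"s,
# i.e. compute n -> n + 1 in unary notation.
increment = {
    ("start", "1"): ("1", "R", "start"),   # scan right across the 1s
    ("start", "_"): ("1", "R", "halt"),    # write a 1 on the first blank, then stop
}

print(run_turing_machine(increment, "111"))   # prints "1111"
```

A universal machine, in Turing's sense, is simply a machine of this kind whose table lets it read another machine's table from its own tape and carry out that machine's steps; the stored-program digital computer is a physical realization of that idea.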
If the whole human body is a physical system obeying the laws of Newtonian physics, and if any such system, up to and including the whole physical universe, is at least metaphorically a machine, then the whole human body is at least metaphorically a machine. And materialists believe that a human being is just a living human body. So, as long as they assume that quantum mechanics cannot be relevant to the philosophy of mind (as I did when I made this suggestion),4 materialists are committed to the view that a human being is—at least metaphorically—a machine. It is understandable that the notion of a Turing machine might be seen as just a way of making this materialist idea precise. Understandable, but hardly well thought out.
The problem is the following: a “machine” in the sense of a physical system obeying the laws of Newtonian physics need not be a Turing machine. (In defense of my former views, I should say that this was not known in the early 1960s when I proposed my so-called functionalist account of mind.) For a Turing machine can compute a function only if that function belongs to a certain class of functions, the so-called general recursive functions. But it has been proved that there exist possible physical systems whose time evolution is not describable by a recursive function, even when the initial condition of the system is so describable. (The wave equation of classical physics has been shown to give rise to examples.) In less technical language, what this means is that there exist physically possible analogue devices which can “compute” non-recursive functions.5 Even if such devices cannot actually be prepared by a physicist (and Georg Kreisel has pointed out that no theorem has been proved excluding the preparation of such a device),6 it does not follow that they do not occur in nature. Moreover, there is no reason at all why the real numbers describing the condition at a specified time of a naturally occurring physical system should be “recursive”. So, for more than one reason, a naturally occurring physical system might well have a trajectory which “computed” a non-recursive function.
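The technical point can be made explicit; the following gloss, and the choice of example, are an editorial reconstruction rather than Putnam's own formulation. A Turing machine can compute a number-theoretic function only if that function is general recursive, but nothing in the mathematics of classical physics confines a system's time evolution to recursive functions. The standard illustration of the parenthetical remark about the wave equation is the Pour-El and Richards construction (presumably the kind of result note 5 has in view), in which computable initial data for

\[
u_{tt} = \Delta u \ \text{ on } \mathbb{R}^3, \qquad u(x,0) = f(x), \qquad u_t(x,0) = 0,
\]

with \(f\) computable, evolve into a solution \(u(\cdot,1)\) at time \(t = 1\) that is not computable. A naturally occurring analogue system whose state followed such a solution would, in the loose sense intended here, “compute” a non-recursive function.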
You may wonder, then, why I assumed that a human being could be, at least as a reasonable idealization, regarded as a Turing machine. One reason was that the following bit of reasoning occurred to me. A human being cannot live forever. A human being is finite in space and time. And the words and actions—the “outputs”, in computer jargon—of a human being, insofar as they are perceivable by the unaided senses of other human beings (and we might plausibly assume that this is the level of accuracy aimed at in cognitive psychology) can be described by physical parameters which are specified to only a certain macroscopic level of accuracy. But this means that the “outputs” can be predicted during the finite time the human lives by a sufficiently good approximation to the actual continuous trajectory, and such a “sufficiently good approximation” can be a recursive function. (Any function can be approximated to any fixed level of accuracy by a recursive function over any finite time interval.) Since we may assume that the possible values of the boundary parameters are also restricted to a finite range, a finite set of such recursive functions will give the behavior of the human being under all possible conditions in the specified range to the desired accuracy. (Since the laws of motion are continuous, the boundary conditions need only be known to within some appropriate Δ in order to predict the trajectory of the system to within the specified accuracy.) But if that is the case, the “outputs”—what the human says and does—can be predicted by a Turing machine. (In fact, the Turing machine only has to compute the values of whichever recursive function in the finite set corresponds to the values that the boundary conditions have taken on, and such a Turing machine could, in principle, simulate the behavior in question as well as predict it.)
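The approximation step that carries the weight here can be put slightly more formally; this is an editorial reconstruction of the reasoning just sketched, not Putnam's own notation. Suppose the outputs over the finite lifetime form a continuous trajectory \(x(t)\) on an interval \([0, T]\), and that they matter only up to some macroscopic accuracy \(\varepsilon > 0\). Then, as the text says, there is a recursive function \(g\), for instance a step function taking rational values on a finite partition of \([0, T]\), with

\[
\sup_{0 \le t \le T} \lVert x(t) - g(t) \rVert < \varepsilon ,
\]

and since the boundary conditions are assumed to range over finitely many cells of size \(\Delta\), a finite family \(g_1, \dots, g_k\) of such recursive functions covers every admissible case. A single Turing machine that stores this finite family and computes the member corresponding to the actual boundary conditions then predicts, and can simulate, the outputs to the desired accuracy.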
This argument proves too much and too little, however. On the one hand, it proves that every physical system whose behavior we want to know only up to some specified level of accuracy and whose “lifetime” is finite can be simulated by an automaton! But it does not prove that such a simulation is in any sense a perspicuous representation of the behavior of the system. When an airplane is flying through the air at less than supersonic speeds, it is perspicuous to represent the air as a continuous fluid, and not as an automaton. On the other hand, it proves too little from the point of view of those who want to say that the real value of computational models is that they show what our “competence” is in idealization from such limitations as the finiteness of our memory or our lifetimes. According to such thinkers,7 if we were able to live forever, and were allowed access to a potentially infinite memory storage, still all our linguistic behavior could be simulated by an automaton. We are best “idealized” as Turing machines, such thinkers say, when what is at stake is not our actual “performance” but our “competence”. Since the proof of the little theorem I just demonstrated depended essentially on assuming that we do not live forever and on assuming that the boundary conditions have a finite range (which excludes a potentially infinite external memory), it offers no comfort to such a point of view.
Again, it might be said that any non-recursivities either in our initial conditions or in our space-time trajectories could not be reliably detected and hence would have no “cognitive” significance. But it is one thing to claim that the particular non-recursive function a human might compute if the human (under a certain idealization) were allowed to live forever has no cognitive significance, and another to say that the whole infinite trajectory can therefore be approximated by a Turing machine. Needless to say, what follows the “therefore” in this last sentence does not follow logically from the antecedent! (Recall how in the “chaos” phenomena small perturbations become magnified in the course of time.)
In sum, it does not seem that there is any principled reason why we must be perspicuously representable as Turing machines, even assuming the truth of materialism. (Or any reason why we must be representable in this way at all—even nonperspicuously—under the idealization that we live forever and have potentially infinite external memories). That is all I shall say about the question whether we are (or can be represented as) Turing machines “in principle”.
On the other hand, the interesting question is precisely whether we are perspicuously representable as Turing machines, even if there are no a priori answers to be had to this question. And this is something that can be found out only by seeing if we can “simulate” human intelligence in practice. Accordingly, it is to this question that I now turn.

Induction and Artificial Intelligence

A central part of human intelligence is the ability to make inductive inferences, that is, to learn from experience. In the case of deductive logic, we have discovered a set of rules which satisfactorily formalize valid inference. In the case of inductive logic this has not so far proved possible, and it is worthwhile pausing to ask why.
In the first place, it is not clear just how large the scope of inductive logic is supposed to be. Some writers consider the “hypothetico-deductive method”—that is, the inference from the success of a theory’s predictions to the acceptability of the theory—the most important part of inductive logic, while others regard it as already belonging to a different subject. Of course, if by induction we mean “any method of valid inference which is not deductive”, then the scope of the topic of inductive logic will be simply enormous.
If the success of a large number of predictions—say, a thousand, or ten thousand—which are not themselves consequences of the auxiliary hypotheses alone always confirmed a theory, then the hypothetico-deductive inference, at least, would be easy to formalize. But problems arise at once. Some theories are accepted when the number of confirmed predictions is still very small—this was the case with the general theory of relativity, for example. To take care of such cases, we postulate that it is not only the number of confirmed predictions that matters, but also the elegance or simplicity of the theory: but can such quasi-aesthetic notions as “elegance” and “simplicity” really be formalized? Formal measures have indeed been proposed, but it cannot be said that they shed any light on real-life scientific inference. Moreover, a confirmed theory sometimes fits badly with background knowledge; in some cases, we conclude the theory cannot be true, while in others we conclude that the background knowledge should be modified; again, apart from imprecise talk about “simplicity”, it is hard to say what determines whether it is better, in a concrete case, to preserve background knowledge or to modify it. And even a theory which leads to a vast number of successful predictions may not be accepted if someone points out that a much simpler theory would lead to those predictions as well.
In view of these difficulties, some students of inductive logic would confine the scope of the subject to simpler inferences—typically, to the inference from the statistics in a sample drawn from a population to the statistics in the population. When the population consists of objects which exist at different times, including future times, the present sample is never going to be a random selection from the whole population, however; so the key case is this: I have a sample which is a random selection from the members of a population which exist now, here (on Earth, in Scotland, in the particular place where I have been able to gather samples, or wherever); what can I conclude about the properties of future members of the population (and of members in other places)?
If the sample is a sample of uranium atoms, and the future members are in the near as opposed to the cosmological future, then we are prepared to believe that the future members will resemble present members, on the average. If the sample is a sample of people, and the future members of the population are not in the very near future, then we are less likely to make this assumption, at least if culturally variable traits are in question. Here we are guided by background knowledge, of course. This has suggested to some inquirers that perhaps all there is to induction is the skilful use of background knowledge—we just “bootstrap” our way from what we know to additional knowledge. But then the cases ...

Table of contents

  1. Cover
  2. Title
  3. Copyright
  4. Contents
  5. Preface
  6. 1. The Project of Artificial Intelligence
  7. 2. Does Evolution Explain Representation?
  8. 3. A Theory of Reference
  9. 4. Materialism and Relativism
  10. 5. Bernard Williams and the Absolute Conception of the World
  11. 6. Irrealism and Deconstruction
  12. 7. Wittgenstein on Religious Belief
  13. 8. Wittgenstein on Reference and Relativism
  14. 9. A Reconsideration of Deweyan Democracy
  15. Notes
  16. Index