The Most Complex Machine
eBook - ePub

The Most Complex Machine

A Survey of Computers and Computing

  1. 464 pages
  2. English
  3. ePUB (mobile friendly)
  4. Available on iOS & Android

About This Book

This introduction to computers presents the fundamental ideas and principles on which modern computers are built. While used as a text for courses in computer appreciation as well as introductions to computer science, the book has found a wide audience among computer users who wish to understand the basis of the machines that form and transform our society.

Contents: What Computers Do • Teaching Silicon to Compute • Building a Computer • Theoretical Computers • Real Computers • Programming • Subroutines and Recursion • Real Programming Languages • Applications • Cooperating Computers • Graphics • Artificial Intelligence • Answers

The text is supplemented by a web site that gives access to other problems and projects.


Information

Author
David J. Eck
Year
2018
ISBN
9781351989329
Edition
1

Chapter 1

Introduction:
What Computers Do

WHAT COMPUTERS DO, of course, is compute. That is not the end of the story, though. The real question is, how can computers do all the remarkable things that they do, just by computing?
The essence of computing is the mechanical manipulation of symbols. When people compute, in the ordinary sense, the symbols are numbers, which are mechanically manipulated according to the rules of arithmetic. For example, a person who memorizes a fairly small set of rules and applies them correctly can multiply numbers of any size. The rules include basic facts about the sum and product of any two digits, along with a procedure that determines the steps to be carried out in doing the multiplication: “Write down the numbers, one beneath the other, with the rightmost digits lined up, and draw a line beneath them. Multiply the top number by the rightmost digit of the bottom number, writing the result under the line….”
This example reveals several important aspects of computation. First of all, it is very boring. There are rules to be followed. They tell you exactly what to do. No creativity. No fun. One small mistake and the answer will be wrong. (This is what we mean when we say that computation is mechanical.) It is no surprise that people find it so difficult to get through a large multiplication problem without error. Computers, on the other hand, have no such difficulty. They follow the rules without error and without complaint.
Second, computation is, in itself, meaningless. This is hard for people to understand, because people generally compute for a reason. A person who multiplies 16 by 127 is likely to be doing it to find out how much 16 light bulbs cost at $1.27 each or how many calories are in 127 potato chips if each one contains 16. But doing the multiplication involves following the rules, putting aside all thought of calories or light bulbs. It may be that the number “127” being multiplied represents 127 potato chips, but those chips are external and irrelevant to the computation. This is what we mean when we say that a computation manipulates symbols. A symbol is something that, while meaningless in itself, can stand in for some sort of external meaning. It is the nature of computation, however, that any external meaning is irrelevant to the computation. Again, this tends to make computation difficult for people, who deal naturally with meaning and find it hard to ignore it.
It is important to understand one other aspect of computation from the start. Although the term commonly refers to the arithmetical manipulation of numbers, it can refer to the manipulation of any sort of symbols according to definite, mechanical rules. For example, the editor who counts the number of words in a book, or who checks each word to see whether it can be found in an official dictionary, is computing in this sense. The symbols being manipulated in this case are words, and the editor’s activities are examples of the type of “word processing” that can be done more easily and more accurately by a computer, since counting words or looking them up in a dictionary can be done by applying simple rules that require no understanding of the words’ meanings.
All this is just the beginning of an explanation of what computation is, but it is enough to introduce the questions which will occupy us for the remainder of this book: How can a machine be built that can carry out complex computations? How can those computations accomplish things that seem to be much more than the mechanical manipulation of symbols? And what are the limits to what can be accomplished, just by computing?

1.1. Bits, Bytes, etc.

We can start with the question of what sort of symbols it is that computers manipulate. When people do arithmetic, the basic symbols are the digits 0 through 9. It is important to realize that the particular symbols used are arbitrary. It makes no difference, for example, if the symbol 2 is replaced by %, as long as the rules are also changed in an appropriate way (“6 times % is 1%”), and as long as you remember what % stands for when it comes time to interpret an answer (“%03% is an awful lot of calories!”).
For a computer, the basic symbols are the two digits 0 and 1. Since there are just two of them, zero and one in this context are called binary digits, which is almost always abbreviated to bits. Again, it makes no difference if we use different symbols. We might, for example, decide to represent the two possible bits as % and #, or by 7 and 3 for that matter. Since there are two binary digits, they tend to be represented as things that naturally come in pairs, such as true/false, on/off, and black/white. We will use whatever representation seems most natural in context.
Now, two symbols don’t seem to give us a lot to work with. For that matter, the ten symbols of ordinary arithmetic might seem a bit inadequate to cover the infinite range of numbers that can be represented. You know how this dilemma is resolved: any number of digits chosen from 0 through 9 can be strung together to give a compound symbol, such as 2032. By stringing together basic symbols, we can represent any number whatsoever. The same principle works when there are only two basic symbols. When we have more than two things to represent, we can turn to strings of bits, such as 10011, to provide us with as many different (compound) symbols as we need. If we are careful, we can represent anything that a computer might have to deal with.
1.1.1. Binary Numbers. Among the most important things computers deal with are numbers. As our first exercise in combining bits, we can construct representations for the most basic type of numbers, the counting numbers, or nonnegative integers. Using the digits zero through nine, we write these numbers as
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, …
This way of writing numbers is called the base ten or decimal representation, since ten digits are used and since the number ten plays such an important role. It is useful to visualize counting in the base ten by thinking of the way a car’s odometer keeps track of the number of miles traveled. In a brand new car, the odometer reads all zeros: 000000. Every time the car travels one mile, the rightmost digit increases by one, from 000000 to 000001, up to 000009. At this point you run out of digits; the rightmost digit goes back to zero and the digit next to it increases by one. This takes you up to 000099. At that point, you’ve run out of digits in both the first and second places, so the zero in the third place changes to one, giving 000100. (Ordinarily, of course, we don’t write the zeros on the left—for one thing, there would be the problem of just how many of them we should write!)
When we use just the two binary digits, 0 and 1, we are working in the base two or binary system. To avoid confusion, I will subscript any binary number with a 2, so you can tell the difference between 10110₂ (base two) and 10110 (base ten). To count in binary, you just need to imagine an odometer with only zeros and ones: start with 0₂, then 1₂—oops, ran out of digits, so the last digit becomes 0 and the next digit rolls over—10₂, 11₂—ran out of numbers in both places this time—100₂, 101₂, 110₂, 111₂, 1000₂, 1001₂, …. We obtain a translation between binary and decimal simply by matching up the numbers in each system as we count:
   0₂ = 0      100₂ = 4      1000₂ = 8
   1₂ = 1      101₂ = 5      1001₂ = 9
  10₂ = 2      110₂ = 6      1010₂ = 10
  11₂ = 3      111₂ = 7      1011₂ = 11
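The odometer-style counting described above can be sketched in a few lines of code. (This is my own illustration, not from the book; the function name increment is hypothetical.)

```python
def increment(bits):
    """Add one to a binary number, held as a list of 0/1 values with the
    rightmost (least significant) bit last, using the odometer rule."""
    i = len(bits) - 1
    while i >= 0 and bits[i] == 1:
        bits[i] = 0        # this place "ran out of digits": roll it back to 0
        i -= 1
    if i >= 0:
        bits[i] = 1        # the next place to the left increases by one
    else:
        bits.insert(0, 1)  # every place rolled over, so a new 1 appears on the left

# Count from 0 to 9, printing each number next to its binary form:
bits = [0]
for n in range(10):
    print(n, "".join(str(b) for b in bits))
    increment(bits)
```

Running this reproduces the correspondence in the table: 0 against 0₂, 1 against 1₂, 2 against 10₂, 3 against 11₂, and so on.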
This is not a very satisfactory way to find a translation for, say, the binary number 1001110101₂. However, a little analysis provides a more satisfactory mechanism. I will work through the details of this analysis not because the details are important, but because it shows how a simple idea can be developed into a more complicated, but more efficient, computational procedure. If you are not interested in this, you can safely skip ahead to Subsection 1.1.2.
The first step in our analysis is to find out how many binary numbers there are that are made up of k (or fewer) bits. For k = 1, there are only two such one-bit numbers, 0₂ and 1₂. For k = 2, there are four two-bit numbers: 00₂, 01₂, 10₂, and 11₂. (It will be useful to write some extra zeros on the left, so that each number in the list is a sequence of exactly k bits. If you leave off the leading zeros, you have a list of numbers with “k bits or fewer.”) Let’s consider the case k = 3 more carefully. A sequence of three bits must begin with a 0 or with a 1, so we can divide such sequences into two groups:
  000₂      100₂
  001₂      101₂
  010₂      110₂
  011₂      111₂
You can get the numbers in the first group by taking the list of all two-bit numbers (00, 01, 10, and 11) and tacking a zero onto the beginning of each. If you tack on a one instead, you get the second group.
Now, there are two groups of numbers here. Each group contains just as many members as there are two-bit numbers. This is just a way of saying that there are exactly two times as many three-bit numbers as there are two-bit numbers. By the same argument, there are exactly sixteen four-bit numbers—just twice as many as the number of three-bit numbers. We can list the four-bit numbers in two groups of eight numbers each as
  0000₂      1000₂
  0001₂      1001₂
  0010₂      1010₂
  0011₂      1011₂
  0100₂      1100₂
  0101₂      1101₂
  0110₂      1110₂
  0111₂      1111₂
This argument works for any number of digits. For any number k, there are twice as many (k + 1)-bit binary numbers as there are k-bit binary numbers. There are 2 one-bit numbers, 2 × 2 two-bit numbers, 2 × 2 × 2 three-bit numbers, and so on. In general there are 2ᵏ k-bit numbers, where 2ᵏ is the kth power of 2, that is, 2 × 2 × ··· × 2, with k factors of 2.
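The doubling argument is easy to check by brute force. This small sketch (mine, not the book's) simply generates every k-bit sequence and counts them:

```python
from itertools import product

# For each k, list all sequences of k bits and count how many there are.
for k in range(1, 6):
    k_bit_numbers = ["".join(seq) for seq in product("01", repeat=k)]
    print(k, len(k_bit_numbers))
```

The counts come out as 2, 4, 8, 16, 32: each is twice the one before, which is just 2ᵏ.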
Now, let’s return to the problem of trying to convert a binary number to the base ten. First, note that the binary number consisting of a one followed by k zeros represents the number 2ᵏ. You can see this by noting that, for example, in order to count up to 100₂ you have to count past the four two-bit numbers, so that 100₂ corresponds to 4—that is, to 2². (It corresponds to 4, rather than 5, because you start counting with zero; the first four numbers are 0, 1, 2, 3.) Similarly, to get up to 10000₂, you have to count past the 2⁴ four-bit numbers, so 10000₂ corresponds to 2⁴, or 16. (Note that the rule works even for k = 0 or k = 1, using the facts that 2¹ = 2 and 2⁰ = 1.)
The more complicated number 10110₂ corresponds to 2⁴ + 2² + 2¹. This can be justified directly by considering how you count up to 10110₂. You must count past 2⁴ numbers to get to 10000₂, then past another 2² to get to 10100₂ and finally past 2¹ more to get to 10110₂. Alternatively, you could anticipate the meaning of addition for binary numbers and write
  10110₂ = 10000₂ + 100₂ + 10₂
         = 2⁴ + 2² + 2¹
         = 16 + 4 + 2
         = 22
The general rule for converting a binary number to the base ten is to add up the powers of two corresponding to each 1 in the binary number. To find the appropriate power, simply count bits from the right, starting from 0 for the rightmost bit. As a final example, we can compute
  1001110101₂ = 2⁹ + 2⁶ + 2⁵ + 2⁴ + 2² + 2⁰
              = 512 + 64 + 32 + 16 + 4 + 1
              = 629
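The conversion rule translates directly into a short program. Here is a minimal sketch (the function name binary_to_decimal is my own, not from the text):

```python
def binary_to_decimal(bits):
    """Apply the rule from the text: add up 2**position for each 1 in the
    binary numeral, counting positions from the right, starting at 0."""
    total = 0
    for position, bit in enumerate(reversed(bits)):
        if bit == "1":
            total += 2 ** position
    return total

print(binary_to_decimal("10110"))       # 2**4 + 2**2 + 2**1 = 22
print(binary_to_decimal("1001110101"))  # 629
```

Note that the loop walks the numeral from its rightmost bit, so the position counter lines up with the power of two exactly as the rule prescribes.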
  2⁰ = 1        1₂
  2¹ = 2        10₂
  2² = 4        100₂
  2³ = 8        1000₂
  2⁴ = 16       10000₂
  2⁵ = 32       100000₂
  2⁶ = 64       1000000₂
  2⁷ = 128      10000000₂
  2⁸ = 256      100000000₂

Figure 1.1. Some powers of two and their representations as binary and as decimal numbers. Adding a zero onto the end of a base-two number multiplies that number by 2, just as adding a zero onto the end of a base-ten number multiplies it by 10.
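The caption's observation that appending a zero to a binary numeral doubles the number can be spot-checked with Python's built-in bin function (a quick sketch of mine, not from the book):

```python
# For a few values of n, append a "0" to n's binary numeral and
# check that the result reads back as 2 * n.
for n in range(1, 9):
    numeral = bin(n)[2:]             # bin(6) == "0b110", so strip the "0b" prefix
    doubled = int(numeral + "0", 2)  # append a zero, reinterpret in base two
    print(numeral, "->", numeral + "0", "=", doubled)
    assert doubled == 2 * n
```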
After all that, you might be wondering why computers use binary numbers instead of just sticking to the more familiar decimal numbers. In fact, it’s sort of an accident. Early mechanical calculating devices generally represented numbers with wheels or gears that could be in one of ten positions, one for each decimal digit. Such calculators worked directly in the base ten. Modern computers are made out of electronic components—for very good reasons involving speed and reliability. The most natural “positions” for a wire in a circuit are on and off, which correspond in a natural way to the two binary digits. Although it might be possible to represent ten digits by using ten different voltage levels in a wire, such a scheme would have two disadvantages: The inevitable inaccuracy in measuring a voltage would lead to a much higher probability of error than occurs when only the difference between on and off must be detected. And the complex circuits necessary to work with such a representation would be very difficult to design and build.
1.1.2. Text. If you are like most people, there is something that might be bothering you at this point. You might reasonably point out that you have been working quite happily with computers for years— typing papers, drawing pictures, or whatever—without ever having heard or thought of binary numbers. Although bits and binary numbers are an essential aspect of the internal workin...

Table of contents

  1. Cover Page
  2. Half Title
  3. Title Page
  4. Copyright Page
  5. Contents
  6. Preface
  7. Chapter 1. Introduction: What Computers Do
  8. Chapter 2. Teaching Silicon to Compute
  9. Chapter 3. Building a Computer
  10. Chapter 4. Theoretical Computers
  11. Chapter 5. Real Computers
  12. Chapter 6. Programming
  13. Chapter 7. Subroutines and Recursion
  14. Chapter 8. Real Programming Languages
  15. Chapter 9. Applications
  16. Chapter 10. Cooperating Computers
  17. Chapter 11. Graphics
  18. Chapter 12. Artificial Intelligence
  19. Answers
  20. Annotated Bibliography
  21. Index