Computer Science

Binary Arithmetic

Binary arithmetic is arithmetic performed on numbers written with only two digits, 0 and 1. It is the foundation of all digital systems and is used extensively in computer science. Binary arithmetic includes addition, subtraction, multiplication, and division of binary numbers.

Written by Perlego with AI-assistance

8 Key excerpts on "Binary Arithmetic"

  • Electronic Digital System Fundamentals
    • Dale R. Patrick, Stephen W. Fardo, Vigyan (Vigs) Chandra, Brian W. Fardo (Authors)
    • 2023 (Publication Date)
    • River Publishers (Publisher)
    6 Binary Addition and Subtraction
    6.1 Introduction
    Digital systems are called on to perform several arithmetic functions in their operations. These functions deal with numbers in a variety of different forms. As a rule, the circuitry of a digital system must deal with binary numbers. The internal circuitry must manipulate the data very quickly and accurately. The subject of digital arithmetic can be very complex when all of the methods of computation and the theory behind them are understood. Fortunately, this level of understanding is not required by technicians until they become experienced system programmers. In this unit, we concentrate on those basic principles needed to understand how digital systems perform basic arithmetic operations. We first discuss arithmetic operations on binary numbers using pencil and paper. We then examine logic circuits that perform these operations in a digital system. As a rule, most arithmetic operations can be achieved with integrated circuits. Special arithmetic ICs are readily available as off-the-shelf items. A person working with digital systems must be familiar with the operation of these devices to see how a system functions electronically.
    6.2 Addition
    Addition is a mathematical operation in which two or more numbers are combined to obtain a single equivalent quantity. The addition of integer A to integer B, or A + B, is the process of advancing integer A, B times. In practice, addition relies on memorizing a list of number combinations; it is extremely difficult to make a list of all the integer combinations needed to accomplish addition, but a list of the base-number combinations is very helpful in seeing how addition is accomplished. Addition is basically a counting operation. The counting method of addition is rather slow and somewhat awkward to handle when large numbers are involved
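
    As a quick illustration of the column-by-column, carry-based addition described above, here is a minimal Python sketch (our own illustration, not taken from the excerpt) that adds two binary numbers given as bit strings:

      # A sketch of pencil-and-paper binary addition: add column by column
      # from the right, writing the sum bit and carrying into the next column.
      def add_binary(a: str, b: str) -> str:
          width = max(len(a), len(b))
          a, b = a.zfill(width), b.zfill(width)
          carry = 0
          result = []
          for bit_a, bit_b in zip(reversed(a), reversed(b)):
              total = int(bit_a) + int(bit_b) + carry   # 0, 1, 2, or 3
              result.append(str(total % 2))             # sum bit for this column
              carry = total // 2                        # carry into the next column
          if carry:
              result.append('1')
          return ''.join(reversed(result))

      print(add_binary('1011', '0110'))  # '10001'  (11 + 6 = 17)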
  • Computer Principles and Design in Verilog HDL
    • Yamin Li (Author)
    • 2015 (Publication Date)
    • Wiley (Publisher)
    Chapter 3 Computer Arithmetic Algorithms and Implementations
    Computers can compute. This chapter describes arithmetic algorithms for binary numbers and their implementations in Verilog HDL (hardware description language). The algorithms include binary addition, subtraction, multiplication, division, and square root.

    3.1 Binary Integers

    This section introduces two types of binary representation: unsigned (absolute) numbers and 2's complement representation for signed integers.

    3.1.1 Binary and Hexadecimal Representations

    We are familiar with the decimal system. However, all the information in computer systems, including instructions and data, is represented in the binary numbering system. A bit (a contraction of “binary digit”) is the basic unit of information and has only two values: 0 or 1.
    Question: What does the following 32-bit binary code stand for?
    The correct answer is “don't know.” The exact meaning of the code depends on where it will be used. If it is treated as an integer, its value is 870,187,264. If it is a floating-point number, its value is 0.000000103378624771721661090850830078125. If it is an instruction executed by a MIPS CPU (microprocessor without interlocked pipeline stages central processing unit), then it is addi $30, $30, 256, an immediate addition instruction. It may be an IP address, data of an image or music, or something else.
    Binary code is long in written form and thus hard to remember. Hexadecimal (hex) notation is much shorter and easier to remember. As shown in Table 3.1, each 4-bit group of binary code is represented by one hex character. We need
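
    To make the two representations concrete, here is a minimal Python sketch (the bit pattern and values are illustrative, not the book's example) showing one 32-bit pattern read as hexadecimal, as an unsigned integer, and as a 2's complement signed integer:

      # One 32-bit pattern viewed three ways: hex, unsigned, and 2's complement.
      bits = '11111111111111111111111100000000'      # 32 bits (illustrative)

      value = int(bits, 2)                           # unsigned interpretation
      print(hex(value))                              # 0xffffff00  (one hex digit per 4 bits)
      print(value)                                   # 4294967040

      # 2's complement signed interpretation: subtract 2**32 when the sign bit is 1.
      signed = value - (1 << 32) if bits[0] == '1' else value
      print(signed)                                  # -256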
  • Modern Digital Design and Switching Theory
    • Eugene D. Fabricius (Author)
    • 2017 (Publication Date)
    • CRC Press (Publisher)
    CHAPTER 1
    Number Bases, Codes, and Binary Arithmetic
    1.1 INTRODUCTION
    A digital system processes discrete information using electronic circuits that respond to and operate in a finite number of states, usually two states, on and off, which can be represented by the binary digits 1 and 0. The information so processed may represent anything from arithmetic integers and letters of the alphabet to values of physical quantities such as mass, velocity, and acceleration, or current, voltage, and impedance. A digital system accepts digital input representing numbers, symbols, or physical quantities, processes this input information in some specific manner, and produces a digital output.
    Digital systems are used in all types of control systems due to their flexibility, accuracy, reliability, and low cost. Flexibility is due to the ease with which digital systems can be reprogrammed. Accuracy is limited only by the number of bits (BInary digiTs, consisting of 1s and 0s) one wishes to use in representing the digital quantities being processed. Reliability is due to the ability of digital circuits to correctly interpret logical 1s and 0s. For example, in transistor-transistor logic or TTL technology, a logical 1 is represented by a voltage in the range of roughly 2.5 to 5.0 V and a logical 0 by a voltage from 0 to about 1 V; minor fluctuations in voltage levels are not misinterpreted by the hardware.
    The cost of all digital chips has dropped dramatically in the past three decades. This is primarily due to the number of transistors that can be put on a single chip. This number has been doubling almost every year for three decades, from a single transistor on a chip in 1960 to several million transistors per chip in 1990.
    This chapter starts with a discussion of number bases and how to convert from any number base to any other. Next, the topics of binary addition and subtraction and then multiplication and division are covered. Following this, binary codes are discussed; specifically, the binary-coded decimal, excess-three, Gray, and error-detecting codes are covered. This leads into the concept of Hamming distance and the requirements for error-detecting and/or error-correcting codes. The American standard code for information interchange (ASCII) alphanumeric code is also introduced.
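
    As a companion to the discussion of number bases, the following is a small Python sketch (our own illustration, not the chapter's) of base conversion by repeated division and by positional weighting:

      # Convert a non-negative integer to any base 2..16 by repeated division,
      # and convert a digit string back using positional weights.
      DIGITS = '0123456789ABCDEF'

      def to_base(n: int, base: int) -> str:
          if n == 0:
              return '0'
          out = []
          while n > 0:
              n, r = divmod(n, base)          # remainder = next least significant digit
              out.append(DIGITS[r])
          return ''.join(reversed(out))

      def from_base(s: str, base: int) -> int:
          n = 0
          for ch in s:                        # most significant digit first
              n = n * base + DIGITS.index(ch)
          return n

      print(to_base(45, 2))       # '101101'
      print(to_base(45, 16))      # '2D'
      print(from_base('2D', 16))  # 45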
  • Numbers: Arithmetic and Computation
    • Asok Kumar Mallik, Amit Kumar Das (Authors)
    • 2022 (Publication Date)
    • CRC Press (Publisher)
    Chapter 6 Computer and programming fundamentals
    We are now in a position to appreciate the potential of different types of numbers and their applications in describing real-life problems and their solutions. It is also expected that you might be eager to use a computer for solving some problems. Computers are handy not only to save time, but also to relieve you from the drudgery of manual computation. Last but not least, computers may be a big help when you cannot obtain an analytical solution.
    Here we will learn about the following, with the final goal of solving some of the problems presented in this book without the rigorous analytical acumen required otherwise.
    • How the numbers and codes are represented in computers.
    • Logic operations and logic gates.
    • How arithmetic and logic operations are done in a computer.
    • A Programming language with a tutorial introduction to programming in a concise form.
    Well, it seems a tall order; believe us, it will be short and fun to learn.

    6.1 Advantages of binary representation of numbers

    In a computer, numbers are represented in binary. In fact, not only the numbers: the instructions and any other information are nothing other than 0’s and 1’s stored in the memory, the main resource of the computer. A switch, with its two states OFF and ON, is the simplest possible 2-state device that can be used to represent 0 and 1. So a few switches together, say 8, may be used to represent an 8-bit binary number (b₇b₆…b₀) from 00000000₂ to 11111111₂, i.e. 0 to 255 in decimal. Additionally, the ON and OFF states of the switches can be mapped to TRUE and FALSE in the logical domain. This is very important, as the computer relies on logic operations, as we shall see next.
    In short, if we chose to represent numbers in decimal form, the computing system would have to represent and identify 10 different states, 0 to 9, and carry out all arithmetic operations using 10-state devices. This leads to a complicated, costly and bulky solution. The world’s first electronic computer, ENIAC (1946), designed and installed at the University of Pennsylvania, USA, used decimal representation of numbers. Moreover, transistors and ICs had not yet been invented, and it was built using electronic valves. ENIAC weighed 27 tons, occupied 18,000 sq. ft. and consumed ONLY 150 kW of electricity.
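
    The switch picture above translates directly into code; the following Python sketch (an illustration of ours, not from the book) reads eight ON/OFF switches as the bits b₇ … b₀ of a byte:

      # Eight ON/OFF switches read as an 8-bit binary number b7 b6 ... b0.
      switches = [True, False, True, True, False, False, False, True]   # b7 .. b0

      value = 0
      for state in switches:      # most significant switch first
          value = value * 2 + int(state)

      print(value)                # 177
      print(bin(value))           # 0b10110001
      print(2 ** 8 - 1)           # 255, the largest value eight switches can hold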
  • Encyclopedia of Computer Science and Technology, Volume 2 - AN/FSQ-7 Computer to Bivalent Programming by Implicit Enumeration
    • Jack Belzer (Author)
    • 2020 (Publication Date)
    • CRC Press (Publisher)
    Computers of today use many ways to perform their arithmetic operations. It would be impossible to discuss all of the techniques that are employed. However, most of the techniques are variations and extensions of the basic concepts that will be presented.
    The heart of the arithmetic unit in a digital computer is the adder. The adder not only performs addition, but it is also used in the subtraction, multiplication, and division operations. An adder can be constructed from some rather simple electronic devices called gates. There are several types of gates, one of which is called an OR gate. Figure 1 shows a schematic sketch of a two-input OR gate, with two sequences of binary digits as its input, and a single sequence as its output. The input and output of the gate would be a train of electrical pulses over time. The presence of a pulse would indicate a 1; the absence of a pulse in a time frame would indicate a 0. From the figure it can be seen that the output from an OR gate will be 1 whenever one or both of the inputs to the gate is 1. The output from the gate will be 0 whenever both of the inputs are 0.
    Fig. 1. OR gate.
    Another type of gate is the AND gate shown in Fig. 2 . The output from a two-input AND gate is 1 whenever both of the inputs are 1 and a 0 whenever this is not true.
    Fig. 2. AND gate.
    The third type of gate to be considered is the inverter or NOT gate shown in Fig. 3 . The NOT gate has a single input connection. Whenever the input to a NOT gate is 1 the output will be 0, and whenever the input is 0 the output will be 1.
    A device for adding two binary digits must produce two output digits. One of the output digits will be the sum digit, and the other digit will be the carry. Table 1 shows all possible combinations of two input binary digits A and B, and the resulting digits which must be generated by the addition device in each case. For example, if A = 0, and B = 1, the sum digit S must be 1 and the carry digit C must be 0. Figure 4
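
    The gate behaviour and the sum/carry table described above can be reproduced in a few lines; the Python sketch below (our illustration, not the encyclopedia's) models the OR, AND, and NOT gates and combines them into a one-bit adder:

      # OR, AND and NOT gates on single bits, and a half adder built from them.
      def OR(a, b):  return a | b
      def AND(a, b): return a & b
      def NOT(a):    return 1 - a

      def half_adder(a, b):
          carry = AND(a, b)
          # sum = (A OR B) AND NOT(A AND B): 1 when exactly one input is 1
          total = AND(OR(a, b), NOT(carry))
          return total, carry

      for a in (0, 1):
          for b in (0, 1):
              s, c = half_adder(a, b)
              print(f"A={a} B={b} -> S={s} C={c}")
      # A=0 B=0 -> S=0 C=0
      # A=0 B=1 -> S=1 C=0
      # A=1 B=0 -> S=1 C=0
      # A=1 B=1 -> S=0 C=1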
  • Digital Electronics: A Primer - Introductory Logic Circuit Design
    • Mark Nixon (Author)
    • 2015 (Publication Date)
    • ICP (Publisher)
    These reduced equations are identical to the earlier combinational design. This shows not only that there is redundancy within the MSI solution, but also how mathematics can be used to reduce circuits. Note the duality between hardware and software: all design approaches have reached the same result.

    7.4   Concluding Comments and Further Reading

    This chapter has introduced the main number coding systems used in computers and binary arithmetic. Coding systems find much wider use in communications systems, to improve efficiency and for reasons of data security. The coding system always needs to be deciphered so that we can tell exactly what the code means. A hardware implementation will be preferred for speed, though the complexity of the coding technique might require a software solution, perhaps even one implemented using a parallel processing system to achieve the required speed. The examples included here have concentrated only on simple techniques for basic code conversion, but they form the basis for extension to more complex systems.
    The basis of computer arithmetic has been introduced. The most versatile system is floating-point arithmetic, and this can be implemented in software and/or hardware in computer systems to handle the arithmetic processes (hardware is preferred for speed). There are other programmable logic chips, which include XOR gates instead of OR gates (and are sometimes called arithmetic devices), that better suit the design of counters. There are also more sophisticated design techniques for arithmetic circuits, though these are beyond the scope of this text.
    If you would like more details concerning arithmetic circuits and their implementation, some books concentrate on computer arithmetic. They are older texts, but the principles don’t change. Parhami (2010), Vladutiu (2012), and Ercegovac and Lang (2003) are all good sources. Naturally, one can also turn to texts on computer architecture, of which there are many, since arithmetic is central to a computer’s organisation. A well-established and excellent text here is Patterson and Hennessy (2008).
  • Computer Arithmetic in Practice: Exercises and Programming
    • Sławomir Gryś (Author)
    • 2023 (Publication Date)
    • CRC Press (Publisher)
    Chapter 1 Basic Concepts of Computer Architecture
    DOI: 10.1201/9781003363286-1

    1.1 The 1-bit Logical and Arithmetical Operations

    Let's look at some dictionary definitions of computer, cited below:
    < Latin: computare>, electronic digital machine, an electronic device designed to process information (data) represented in digital form, controlled by a program stored in memory.
    [Encyclopedia online PWN 2023, https://encyklopedia.pwn.pl/ ]
    an electronic machine that is used for storing, organizing, and finding words, numbers, and pictures, for doing calculations, and for controlling other machines.
    [Cambridge Academic Content Dictionary 2023, https://dictionary.cambridge.org/dictionary/english/computer ]
    a programmable electronic device designed to accept data, perform prescribed mathematical and logical operations at high speed, and display the results of these operations.
    [https://www.dictionary.com/browse/operation ]
    Unfortunately, the above definitions ignore the outstanding achievements of many pioneers of the age of mechanical calculating machines (to cite just a few names: Schickard, Pascal, Leibniz, Stern, Jacquard, Babbage), linking the emergence of the computer with the development of electro-technology in the second half of the 20th century. Those interested in the history of the evolution of computing machines are encouraged to read [Augarten 1985, Ceruzzi 1998, McCartney 1999, Mollenhoff 1988, and Pollachek 1997]. As can be shown, all the computer functions mentioned in the definition (and thus the performance of calculations, which is the subject of this book) can be realized by a limited set of logical functions and data transfers from and to the computer's memory.
    Information in a computer is expressed by a set of distinguishable values, sometimes called states. In binary logic, these states are usually denoted by the symbols {0,1} or {L,H}. The symbols {0,1} are applied much more often because they are associated with the numbers commonly used to express numerical values, so they will also be used further in this book. The symbols L (low) and H (high) are used in digital electronics for describing the theory of logic circuits. In physical implementations, they translate into two levels of electric voltage, e.g. 0 V and 3.3 V, or of current, e.g. 4 mA and 20 mA (the so-called current loop). In most computer architectures and communication technologies, positive logic is used, where 1 is the distinguished state and identical to H.
  • Microcontroller Programming
    • Julio Sanchez, Maria P. Canton (Authors)
    • 2018 (Publication Date)
    • CRC Press (Publisher)
    Chapter 4 Digital Logic, Arithmetic, and Conversions
    This chapter is about the fundamental arithmetic and logical operations of digital machines. It serves as a background for developing processing routines which involve decisions, data filtering and processing, and number crunching. Here we discuss logical and arithmetic operations in general, that is, without reference to any individual processor. There are so many different hardware versions of microcontrollers that it is not feasible to develop an actual routine for each device. On the other hand, once the logic is understood, the actual coding is a simple process of finding a way of implementing it in a specific instruction set. The chapter also includes material related to data type conversions since these operations are closely related to the other material in this chapter.
    4.0 Microcontroller Logic and Arithmetic
    All microcontrollers contain instructions to perform arithmetic and logic transformations on binary or decimal operands. These instructions can be classified into three groups, illustrated in the short sketch after the list:
    1.  Logical instructions. Sometimes these are called Boolean operators. The group includes instructions with mnemonics such as AND, NOT, OR, and XOR. They perform the logical functions that correspond to their names.
    2.  Arithmetic instructions. Typically this group of instructions performs integer addition and subtraction. Occasionally, the instruction set includes multiplication and division. The operands can be signed or unsigned binary and binary coded decimal numbers.
    3.  Auxiliary and bit manipulation instructions. This group includes instructions to shift and rotate bits, to compare operands, to test, set, and reset individual binary digits, and to perform various auxiliary operations.
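
    For a flavour of what these three groups do to 8-bit operands, here is a small Python sketch (illustrative only, not tied to any specific instruction set):

      # Representative operations from each group, applied to 8-bit operands.
      a, b = 0b10110010, 0b01101100

      # 1. Logical instructions
      print(bin(a & b))               # AND
      print(bin(a | b))               # OR
      print(bin(a ^ b))               # XOR
      print(bin(~a & 0xFF))           # NOT (complement within 8 bits)

      # 2. Arithmetic instructions
      print((a + b) & 0xFF)           # 8-bit addition (result wraps modulo 256)
      print((a - b) & 0xFF)           # 8-bit subtraction (wraps modulo 256)

      # 3. Auxiliary and bit manipulation instructions
      print(bin((a << 1) & 0xFF))     # shift left one bit
      print(bin(a >> 1))              # shift right one bit
      print(bool(a & (1 << 5)))       # test bit 5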
    4.0.1    CPU Flags
    All microcontrollers are equipped with a special register that reflects the current processing status. This register, sometimes called the status register or the flags register, contains individual bits, usually called flags
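
    To show how such flags reflect an operation's result, here is a generic Python sketch (not modelled on any particular microcontroller's status register) of an 8-bit addition that updates carry, zero, and negative flags:

      # 8-bit addition with a simple status-flag model.
      def add8(a: int, b: int):
          raw = a + b
          result = raw & 0xFF                 # keep only the low 8 bits
          flags = {
              'C': raw > 0xFF,                # carry out of bit 7
              'Z': result == 0,               # result is zero
              'N': bool(result & 0x80),       # most significant (sign) bit set
          }
          return result, flags

      print(add8(200, 100))   # (44, {'C': True, 'Z': False, 'N': False})
      print(add8(0, 0))       # (0, {'C': False, 'Z': True, 'N': False})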
Index pages curate the most relevant extracts from our library of academic textbooks. They’ve been created using an in-house natural language model (NLM), each adding context and meaning to key research topics.