Computer Science

Von Neumann Architecture

Von Neumann Architecture is a computer design concept that separates memory from the processing unit while storing data and instructions in the same memory space. It features a central processing unit (CPU) that fetches, decodes, and executes instructions and reads and writes data held in that shared memory. It forms the basis for most modern computer systems.

Written by Perlego with AI-assistance

11 Key excerpts on "Von Neumann Architecture"

  • Communications Technology Handbook
    • Geoff Lewis(Author)
    • 2013(Publication Date)
    • Routledge
      (Publisher)
    The concept is named after John von Neumann, whose work in the 1940s had a lasting influence on computer development. The concept is based on stored program control (the program being stored within the computer), with answers arrived at by a series of small, uniquely defined steps. During the last decade there have been such massive improvements in architectures, electronic components, and programming languages that von Neumann machines are now capable of processing vast amounts of data at high speed. Even so, there are now many applications for which von Neumann machines are too slow, and as a consequence parallel processing has become an important feature of new computers. One such important concept is the Harvard architecture, in which data and instructions are moved around simultaneously or concurrently while the computer carries out some other function. This parallelism results in a very significant increase in computing power.
    Figure 6.1 Basic microcomputer system.
    The minimum computer system shown in Fig. 6.1 consists of a microprocessor or central processing unit to handle all the computing functions, together with memories (see also Memories, p. 172) to hold the instructions and data, and a device to manage communications with the outside world. The read-only memory holds the instructions for the basic operation of the microprocessor, while the random access memory holds the data and user programs. The input/output device is used for communication with the user terminals. Communication between the elements of this basic system is via three sets of parallel conductors, known as buses: one each to carry data and control signals, and a third to address the memory locations where data and instructions are to be found.
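    The three-bus organization just described can be made concrete in a few lines of code. Below is a minimal Python sketch (ours, not the handbook's; the Bus and Memory names are illustrative) in which the address bus selects a memory location, the data bus carries the value, and a single simplified control line signals read or write:

        class Bus:
            """Models the shared address/data/control conductors."""
            def __init__(self):
                self.address = 0   # address bus: which memory location
                self.data = 0      # data bus: the value in transit
                self.read = True   # control bus, simplified to one read/write line

        class Memory:
            def __init__(self, size=256):
                self.cells = [0] * size   # ROM and RAM flattened into one array here

            def cycle(self, bus):
                if bus.read:
                    bus.data = self.cells[bus.address]   # memory drives the data bus
                else:
                    self.cells[bus.address] = bus.data   # CPU drives the data bus

        bus, mem = Bus(), Memory()
        bus.address, bus.data, bus.read = 42, 7, False   # CPU writes 7 to location 42
        mem.cycle(bus)
        bus.read = True                                  # CPU reads it back
        mem.cycle(bus)
        assert bus.data == 7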
  • Microprocessor 1
    eBook - ePub

    Microprocessor 1

    Prolegomena - Calculation and Storage Functions - Models of Computation and Computer Architecture

    • Philippe Darche(Author)
    • 2020(Publication Date)
    • Wiley-ISTE
      (Publisher)
    3 Computation Model and Architecture: Illustration with the von Neumann Approach
    A user working today in front of his or her microcomputer workstation hardly suspects that he or she is in front of a machine whose operation is governed by principles described by the mathematician John von Neumann in the 1940s (Ceruzzi 2000). This remains the case when modern terms such as “superscalar architectures” and “multicore” or accelerating mechanisms like the pipeline, concepts discussed in the forthcoming Volume 2, are mentioned. Before studying the functioning of the microprocessor, we need to clarify the theoretical concepts of the computational model and computer architecture. The so-called von Neumann approach, which still governs the functioning of computers internally despite all the progress made since it was developed, is described by presenting the basic execution diagram for an instruction. This architecture has given rise to variations, which are also presented. Finally, the programmer needs an abstraction of the machine in order to simplify his or her work, which is called the “Instruction Set Architecture” (ISA). It is described before the basic definitions for this book, which complete this chapter.
    NOTE.– In this book, the term CU for “Central Unit” (or CPU for Central Processing Unit) is taken from the original word, that is, the unit which performs the computations, and not from the microcomputer itself. It most often describes the microprocessor also referred to as an MPU (MicroProcessor Unit) or μP for short, which is a modern integrated form of the CU. We are also adapting the level of discourse to the component’s scale. However, we do not include main memory, as do some definitions, which generally rely on the vocabulary of mainframes from the 1960s.

    3.1. Basic concepts

    Definitions of the fundamental concepts of the Model of Computation (MoC) and of architecture have evolved over time and vary from author to author (Reddi and Feustel 1976, Baer 1984). The same is true for associated terms such as “implementation” or “achievement”. Before presenting them, the concepts of program, control and data mechanisms and flows must be clarified.
  • Modern Computer Architecture and Organization
    • Jim Ledin, Dave Farley(Authors)
    • 2022(Publication Date)
    • Packt Publishing
      (Publisher)
    The Von Neumann Architecture was introduced by John von Neumann in 1945. This processor configuration consists of a control unit, an arithmetic logic unit, a register set, and a memory region containing program instructions and data. The key feature distinguishing the Von Neumann Architecture from the Harvard architecture is the use of a single area of memory for program instructions and the data acted upon by those instructions. It is conceptually straightforward for programmers, and relatively easy for circuit designers, to locate all the code and data a program requires in a single memory region.
    This diagram shows the elements of the Von Neumann Architecture:
    Figure 7.1: The Von Neumann Architecture
    Although the single-memory architectural approach simplified the design and construction of early generations of processors and computers, the use of shared program and data memory has presented some challenges related to system performance and, in recent years, security.
    Some of the more significant issues are as follows:
    • The von Neumann bottleneck : Using a single interface between the processor and the main memory for instruction and data access often requires multiple memory cycles to retrieve a processor instruction and access the data it requires. In the case of an immediate value stored next to its instruction opcode, there might be little or no bottleneck penalty because, at least in some cases, the immediate value gets loaded along with the opcode in a single memory access. Most programs, however, spend much of their time working with data stored in memory allocated separately from the program instructions. In this situation, multiple memory access operations are required to retrieve the opcode and any required data items.
      The use of cache memories for program instructions and data, discussed in detail in Chapter 8, Performance-Enhancing Techniques
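      The bottleneck lends itself to a back-of-the-envelope count. The short Python sketch below (a hypothetical three-instruction stream, not from the book) tallies trips over the single memory interface: an immediate operand arrives with its opcode in one access, while an operand stored elsewhere in memory costs an extra one:

        # Hypothetical mini-ISA: "_IMM" operands ride along with the opcode,
        # "_MEM" operands require a separate memory access.
        INSTRUCTIONS = [
            ("LOAD_IMM", 5),
            ("LOAD_MEM", 0x80),
            ("ADD_MEM", 0x81),
        ]

        def memory_accesses(program):
            total = 0
            for opcode, _ in program:
                total += 1                 # fetching the instruction itself
                if opcode.endswith("_MEM"):
                    total += 1             # extra cycle to fetch the operand
            return total

        print(memory_accesses(INSTRUCTIONS))   # 5 accesses for 3 instructions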
  • Artificial Intelligence
    eBook - ePub

    Artificial Intelligence

    A Philosophical Introduction

    Several writers in fact do so. In the opinion of philosopher Patricia Churchland ‘these differences [make] the metaphor that identifies the brain with a computer seem more than a trifle thin’; and biologist John Young bluntly asserts on the basis of such differences that ‘the computer analogy is grossly inadequate to describe the operation of the brain’.34 No such conclusion can be drawn, however. All that can properly be concluded is that the brain does not have what is known as a Von Neumann Architecture. Let me explain. When the ENIAC team broke up in 1946, John von Neumann set about reorganizing and extending the design of this pioneer machine. The result was a set of brilliant architectural proposals that have been the mainstay of computer design ever since.35 A computer built in accordance with von Neumann’s specifications is known simply as a von Neumann machine. Even today, Von Neumann Architecture remains virtually universal in the computer industry. A list of the central features of Von Neumann Architecture reads almost like a re-run of the points raised in the preceding section.
    1. A von Neumann machine is a sequential processor: it executes the instructions in its program one after another.36
    2. Each string of symbols is stored at a specific memory location.
    3. Access to a stored string always proceeds via the numerical address of the string’s location.
    4. There is a single ‘seat of control’, the central processing unit or CPU. The CPU contains a special register, the ‘instruction register’, into which instructions are fed one at a time (see section 4.3). For example, suppose the bit-code for ‘shift’ enters the instruction register. (As explained in section 4.4, the function of this instruction is to shift everything in a register one place to the right. The bit at the far right of the register ‘falls out’ and a zero is fed in at the far left.)
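    The shift instruction in that parenthetical example is easy to demonstrate. Here is a small Python illustration (ours, assuming an 8-bit register) in which every bit moves one place to the right, the rightmost bit falls out, and a zero is fed in at the far left:

        def shift_right(register, width=8):
            fallen_out = register & 1                        # the far-right bit 'falls out'
            shifted = (register >> 1) & ((1 << width) - 1)   # zero enters at the far left
            return shifted, fallen_out

        reg = 0b10110101
        reg, lost = shift_right(reg)
        print(f"{reg:08b} {lost}")   # prints: 01011010 1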
  • Computer Systems Architecture
    The von Neumann architecture (Figure 5.2) divides the system into several distinct components, so that future developments related to one component will have only minimal effects on the others. This type of modularity is now common in many other industries, but for computers it was made possible by von Neumann’s ideas. Prior to von Neumann, the computer’s logic was tightly integrated, and each change introduced affected other parts as well. By analogy with a different industry, it is as if a driver who replaced a car’s tires also had to modify the engine. When the modularity that stems from von Neumann’s ideas was introduced to computer systems, it reduced complexity and supported fast technological development. For example, increasing the memory capacity and capabilities was easily implemented without any change to other system components.
    FIGURE 5.1 Memory as part of the system.
    FIGURE 5.2 Von Neumann Architecture.
    One of the most important developments was to increase the capacity of computer memory, which manifested itself in the ability to better utilize the system. This improved utilization was achieved by enabling the central processing unit (CPU) to run more than one program at a time. This way, during the input and output (I/O) activities of one program, the CPU did not have to sit idle waiting for the I/O operations to complete. As several programs can reside in memory, even if one or more are waiting for input or output, the CPU can still be kept busy.
    One of the consequences of dividing the system into distinct components was the need to design a common mechanism for data transfers between the different components. The mechanism, which will be explained later in this book, is called a bus. As it relates to the memory activities, the bus is responsible for transferring instructions and data from the memory to the CPU as well as for transferring some of the data back to be stored in the memory.
    It should be noted that when there is just one bus, as is the case with the Von Neumann Architecture, the instructions and data have to compete for access to the bus. The standard solution is to implement a wider and faster bus that can support all requests without becoming a bottleneck (see the explanation in Chapter 7
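    Whether a single bus becomes the bottleneck is a matter of simple arithmetic. The sketch below uses invented numbers (they are not from the book) to compare the traffic a processor demands against what a bus of a given width and clock rate can supply:

        clock_hz        = 1e9        # assumed: 1e9 instructions issued per second
        bytes_per_instr = 4          # assumed: 4-byte instruction fetch each
        data_bytes      = 4 * 0.4    # assumed: 40% of instructions move 4 data bytes

        demand = clock_hz * (bytes_per_instr + data_bytes)   # bytes/s needed

        bus_width_bytes = 8          # a 64-bit bus
        bus_clock_hz    = 4e8        # bus transfers per second
        supply = bus_width_bytes * bus_clock_hz              # bytes/s available

        print(f"demand {demand/1e9:.1f} GB/s vs supply {supply/1e9:.1f} GB/s")
        # demand 5.6 GB/s vs supply 3.2 GB/s: this bus is the bottleneck, and
        # widening it to 16 bytes or doubling its clock would clear the backlog.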
  • High Performance Parallel Runtimes
    eBook - ePub

    High Performance Parallel Runtimes

    Design and Implementation

    We will then incrementally refine the design and make it more complex to extract more parallelism within the individual cores and achieve higher execution speeds. Please note that this kind of parallelism is usually transparent to the programmer: since the commercial failure of very long instruction word (VLIW) machines, the processor does not expose these kinds of features as part of the ISA. However, you will generally notice a measurable speedup when executing instructions on more modern machines that exploit instruction-level parallelism (ILP). As we will see in Section 3.2.2, some of the architectural concepts embodied in processors do have an effect when we consider multi-core machines and how they interact with memory. They might cause the reordering of memory operations, which can be observed from other cores and must be considered when programming a parallel system. However, let’s put this topic on the back burner for a bit and revisit it after we have understood the basics of processor architectures.
    3.1.1 Von Neumann Architecture and in-order execution
    Let’s start simply and build a basic execution engine for machine instructions. The prevalent designs of modern processors are all based on the ideas of the Von Neumann Architecture [19], [46], [150]. Before the Von Neumann Architecture, processors were usually fixed-function devices that could only execute a single hardwired task. A processor with the Von Neumann Architecture, by contrast, can execute streams of instructions stored in memory, and those streams can easily be changed to execute a different program.
    Figure 3.1 Von Neumann processor architecture.
    Figure 3.1 shows the main components of the Von Neumann Architecture. The central processing unit (CPU) is the engine that executes the program code and contains two conceptual components: the control unit and the arithmetic logic unit (ALU). The task of the control unit is to retrieve the instructions of the program code from the memory and decode them
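    The fetch-decode-execute cycle the excerpt goes on to describe can be condensed into a toy interpreter. The Python sketch below (an invented three-instruction ISA, purely for illustration) keeps instructions and data in the same address space and executes them strictly in order, with the control unit's fetch and decode at the top of the loop and the ALU's arithmetic inside the branches:

        # Instructions and data share one von Neumann address space.
        memory = {
            0: ("LOAD", 100),    # acc <- mem[100]
            1: ("ADD", 101),     # acc <- acc + mem[101]
            2: ("HALT", None),
            100: 40, 101: 2,     # data lives alongside the code
        }

        def run(mem):
            pc, acc = 0, 0
            while True:
                opcode, operand = mem[pc]    # fetch and decode (control unit)
                pc += 1                      # in-order: next instruction by default
                if opcode == "LOAD":
                    acc = mem[operand]
                elif opcode == "ADD":
                    acc = acc + mem[operand] # the ALU's contribution
                elif opcode == "HALT":
                    return acc

        print(run(memory))   # 42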
  • Computer Architecture and Security
    eBook - ePub

    Computer Architecture and Security

    Fundamentals of Designing Secure Computer Systems

    • Shuangbao Paul Wang, Robert S. Ledley(Authors)
    • 2012(Publication Date)
    • Wiley
      (Publisher)
    Hewlett-Packard (HP-Compaq, 2002) has been working on a new type of Secure Platform Architecture (SPA). It is a set of software interfaces built on top of HP's Itanium-based product line. SPA will enable operating systems and device drivers to run as unprivileged tasks and will allow services to be authenticated and identified. The problem with the SPA is that, as the company describes, it relies on a set of software interfaces to authenticate and identify tasks; once the system is compromised, the SPA can no longer function properly.
    Sean Smith and Steve Weingart (Smith and Weingart, 1999) developed a prototype using a high-performance, programmable secure coprocessor. It is a combined software, hardware, and cryptographic architecture (Suh et al., 2005). This architecture addressed several issues, especially how to secure programs running on coprocessors and how to recover the system. In terms of securing information and data, much work remains to be done.
    Recently, MIT researchers proposed secure processors that enable new applications by ensuring private and authentic program execution even in the face of physical attack.

    10.2 Single-Bus View of Neumann Architecture

    The Neumann architecture is the foundation of modern computer systems. It is a single-bus, stored-program computer architecture that consists of a CPU, memory, I/O, and storage. The CPU is composed of a control unit (CU) and an arithmetic logic unit (ALU) (von Neumann, 1945). Almost all modern computers are Neumann computers, characterized by a single system bus (control, data, address) with all circuits attached to it.

    10.2.1 John von Neumann Computer Architecture

    John von Neumann wrote “First Draft of a Report on the EDVAC” in which he outlined the architecture of a stored-program computer. He proposed a concept that has characterized mainstream computer architecture since 1945. Figure 10.1 shows the Neumann model.
    Figure 10.1 Block diagram of John von Neumann's computer architecture model
    A “system bus” representation of the Neumann model is shown in Figure 10.2
  • Computer Systems Performance Evaluation and Prediction
    • Paul Fortier, Howard Michel(Authors)
    • 2003(Publication Date)
    • Digital Press
      (Publisher)
    (see Figure 2.2). This continues until all required operands are in the appropriate registers of the ALU. Once all operands are in place, the control unit commands the ALU to perform the appropriate instruction—for example, multiplication, addition, or subtraction. If the instruction indicated that an input or output was required, the control element would transmit a word from the input unit to the memory or ALU, depending on the instruction. If an output instruction were decoded, the control unit would command the transmission of the appropriate memory word or register to the output channel indicated. These five elements comprise the fundamental building blocks used in the original von Neumann computer system and are found in most contemporary computer systems in some form or another.
    Figure 2.2 CPU memory access.
    In this chapter we will examine these fundamental building blocks and see how they are used to form a variety of computer architectures.

    2.2 Computer hardware architecture

    A computer system is comprised of the five building blocks previously described, as well as additional peripheral support devices, which aid in data movement and processing. These basic building blocks are used to form the general processing, control, storage, and input and output units that make up modern computer systems. Devices typically are organized in a manner that supports the application processing for which the computer system is intended—for example, if massive amounts of data need to be stored, then additional peripheral storage devices such as disks or tape units are required, along with their required controllers or data channels.
    A computer system’s architecture is constructed using basic building blocks, such as CPUs, memories, disks, I/O, and other devices as needed.
    To better describe the variations within architectures we will discuss some details briefly—for example, the arithmetic logic unit (ALU) and the control unit are merged together into a central processing unit or CPU. The CPU controls the flow of instructions and data in the computer system. Memories can be broken down into hierarchies based on nearness to the CPU and speed of access—for example, cache memory is small, extremely fast memory used for instructions and data actively executing and being used by the CPU and usually resides on the same board or chip as the CPU. The primary memory is slower, but it is also cheaper and contains more memory locations. It is used to store data and instructions that will be used during the execution of applications presently running on the CPU—for example, if you boot up your word processing program on your personal computer, the operating system will attempt to place the entire word processing program in primary memory. If there is insufficient space, the operating system will partition the program into segments and pull them in as needed.
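    The cache/primary-memory split described above can be illustrated in a few lines. The Python sketch below (a direct-mapped cache of eight lines; the sizes and names are our own assumptions, not the book's) shows the fast hit path and the slow miss path that falls through to main memory:

        CACHE_LINES = 8
        cache = {}          # line index -> (tag, value)
        hits = misses = 0

        def read(address, main_memory):
            global hits, misses
            line, tag = address % CACHE_LINES, address // CACHE_LINES
            if cache.get(line, (None, None))[0] == tag:
                hits += 1                    # fast path: already near the CPU
                return cache[line][1]
            misses += 1                      # slow path: fetch from primary memory
            value = main_memory[address]
            cache[line] = (tag, value)       # keep a copy for next time
            return value

        main = list(range(1000))
        for addr in [3, 3, 3, 11, 3]:        # 3 and 11 collide on the same line
            read(addr, main)
        print(hits, misses)                  # 2 hits, 3 misses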
  • Foundations of Computing
    eBook - ePub

    Foundations of Computing

    Essential for Computing Studies, Profession And Entrance Examinations - 5th Edition

    • Pradeep K. Sinha, Priti Sinha(Authors)
    • 2022(Publication Date)
    • BPB Publications
      (Publisher)
    Chapter 4

    Computer Architecture

    This chapter provides an overview of computer system architecture. It first introduces the basic functional units of a computer system and then provides detailed architectures of the processor and memory. Subsequent chapters provide details of other units (secondary storage and I/O devices). This chapter also describes the interconnection architectures used for connecting the processor, memory, and I/O units. Finally, it provides details of multiprocessor system architectures, which use multiple processors and memory units to form powerful computer systems.

    BASIC FUNCTIONS OF A COMPUTER

    All computer systems perform the following five basic functions for converting raw input data into useful information and presenting it to a user:
    1. Inputting. It is the process of entering data and instructions into a computer system.
    2. Storing. It is the process of saving data and instructions to make them readily available for initial or additional processing as and when required.
    3. Processing. Performing arithmetic operations (add, subtract, multiply, divide, etc.), or logical operations (comparisons like equal to, less than, greater than, etc.) on data to convert them into useful information is known as processing.
    4. Outputting. It is the process of producing useful information or results for a user, such as printed report or visual display.
    5. Controlling. Directing the manner and sequence in which the above operations are performed is known as controlling.

    BASIC COMPUTER ORGANIZATION

    Even though the size, shape, performance, reliability, and cost of computers have been changing over the last several years, the basic logical structure (based on stored program concept), as proposed by Von Neumann, has not changed.
    Figure 4.1
  • Computer Architecture
    eBook - ePub

    Computer Architecture

    Fundamentals and Principles of Computer Design, Second Edition

    • Joseph D. Dumas II(Author)
    • 2016(Publication Date)
    • CRC Press
      (Publisher)
    chapter one

    Introduction to computer architecture

    “Computer architecture” is not the use of computers to design buildings (although that is one of many useful applications of computers). Rather, computer architecture is the design of computer systems, including all of their major subsystems: the central processing unit (CPU), the memory system, and the input/output (I/O) system. In this introductory chapter, we take a brief look at the history of computers and consider some general topics applicable to the study of computer architectures. In subsequent chapters, we examine in more detail the function and design of specific parts of a typical modern computer system. If your goal is to be a designer of computer systems, this book provides an essential introduction to general design principles that can be expanded upon with more advanced study of particular topics. If (as is perhaps more likely) your career path involves programming, systems analysis or administration, technical management, or some other position in the computer or information technology field, this book provides you with the knowledge required to understand, compare, specify, select, and get the best performance out of computer systems for years to come. No one can be a true computer professional without at least a basic understanding of computer architecture concepts. So let’s get underway!

    1.1 What is computer architecture?

    Computer architecture is the design of computer systems, including all major subsystems, including the CPU and the memory and I/O systems. All of these parts play a major role in the operation and performance of the overall system, so we will spend some time studying each. CPU design starts with the design of the instruction set that the processor will execute and includes the design of the arithmetic and logic hardware that performs computations; the register set that holds operands for computations; the control unit that carries out the execution of instructions (using the other components to do the work); and the internal buses, or connections, that allow these components to communicate with each other. Memory system design uses a variety of components with differing characteristics to form an overall system (including main, or primary, memory and secondary memory) that is affordable while having sufficient storage capacity for the intended application and being fast enough to keep up with the CPU’s demand for instructions and data.
  • Special Computer Architectures for Pattern Processing
    • King-Sun Fu(Author)
    • 2018(Publication Date)
    • CRC Press
      (Publisher)
    Chapter 2 COMPUTER ARCHITECTURE — AN INTRODUCTION
    C. V. Ramamoorthy and Benjamin W. Wah
    TABLE OF CONTENTS
    I. Introduction
      A. Arithmetic Module
      B. Control Unit
      C. Memory
      D. I/O
      E. Communications
    II. Microprogramming
      A. Evolution
      B. Microinstruction Format
      C. Applications of Microprogramming
        1. Logical Design of the Control Unit
        2. System Emulation
        3. High Level Language Execution
    III. Stack Architecture
      A. Evaluating Arithmetic Expressions Using Stacks
      B. Subroutine Management Using Stacks
    IV. Parallel Computers
      A. Instruction Stream
      B. Interconnection Network
      C. Processor Capabilities
    V. Pipeline Computers
      A. Pipeline Characterization
        1. Levels of Pipelining
        2. Pipeline Configurations
      B. Performance Considerations
        1. Throughput Considerations
        2. Efficiency Considerations
      C. Design Issues for a General Sequential Pipelined Processor
        1. Buffering
        2. Busing Structure
        3. Branching
        4. Interrupt Handling
    VI. Computer Communication Networks
      A. Data Communication Hardware
      B. Network Architecture
        1. Topology
        2. Point-to-Point Network Technologies
    VII. The Future
    References
    I. INTRODUCTION
    The study of computer architecture is the study of the organization and interconnection of computer system components. These components include memories, buses, arithmetic units, microprocessors, input/output (I/O) units, etc. Using these components, the computer architect can design systems with various performance, reliability, cost, and other design criteria. The art of designing computer architecture is therefore the assembly of available computer components to form a system satisfying the users’ requirements. The design of computers has evolved greatly in speed and complexity since the design of the Analytical Engine by Charles Babbage nearly 150 years ago.27 Babbage’s design was very primitive and did not have any stored program. Since then, many people have improved and extended the design of computers; among them are Herman Hollerith, William S. Burroughs, Charles X