Computer Science

Virtual Memory

Virtual memory is a memory management technique that lets a computer behave as if it has more memory than is physically installed. It does this by temporarily transferring data from RAM to secondary storage, such as a hard disk, and bringing it back when needed. This allows programs to run as if more memory were available than the machine actually has.

Written by Perlego with AI-assistance

11 Key excerpts on "Virtual Memory"

  • Modern Computer Architecture and Organization

    Learn x86, ARM, and RISC-V architectures and the design of smartphones, PCs, and cloud servers

    Virtual Memory is a method of memory management that enables each application to operate in its own memory space, seemingly independent of any other applications that may be in the running state simultaneously on the same system. In a computer with Virtual Memory management, the operating system is responsible for the allocation of physical memory to system processes and to user applications. The memory management hardware and software translate memory requests originating in the application's Virtual Memory context to physical memory addresses.
    Apart from easing the process of developing and running concurrent applications, Virtual Memory also enables the allocation of a larger amount of physical memory than actually exists in the computer. This is possible through the use of secondary storage (typically a disk file) to temporarily hold copies of sections of memory removed from physical memory to allow a different program (or a different part of the same program) to run in the now-free memory.
    In modern general-purpose computers, memory sections are usually allocated and moved in multiples of a fixed-size chunk, called a page. Memory pages are typically 4 KB or larger. Moving memory pages to and from secondary storage in Virtual Memory systems is called page swapping. The file containing the swapped-out pages is the swap file.
    In a Virtual Memory system, neither application developers nor the code itself need to be concerned about how many other applications are running on the system or how full the physical memory may be getting. As the application allocates memory for data arrays and calls library routines (which requires the code for those routines to be loaded into memory), the operating system manages the use of physical memory and takes the steps necessary to ensure that each application receives memory upon request. Only in the unusual case of completely filling the available physical memory while also filling the swap file to its limit is the system forced to return a failure code in response to a memory allocation request.
    Virtual Memory provides several notable benefits besides making things easier for programmers:
    • Not only are applications able to ignore each other's presence, they are prevented from interfering with each other, accidentally or intentionally. The Virtual Memory management hardware is responsible for ensuring that each application can only access memory pages that have been assigned to it. Attempts to access another process's memory, or any other address outside its assigned memory space, result in an access violation
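The fixed-size page organization described in this excerpt can be sketched in a few lines. This is an illustrative Python fragment (not from the excerpted book), assuming the common 4 KB page size it mentions:

```python
# Illustrative sketch: splitting a virtual address into a page number
# and an offset, assuming the 4 KB (4096-byte) pages mentioned above.
PAGE_SIZE = 4096
OFFSET_BITS = 12          # 2**12 == 4096

def split_address(vaddr: int) -> tuple[int, int]:
    """Return (virtual page number, offset within the page)."""
    return vaddr >> OFFSET_BITS, vaddr & (PAGE_SIZE - 1)

page, offset = split_address(0x2ABC)   # page 2, offset 0xABC
```

Because the page size is a power of two, the split is a shift and a mask rather than a division, which is one reason page sizes are chosen this way.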
  • Software Defined Networking

    Design and Deployment

    • Patricia A. Morreale, James M. Anderson(Authors)
    • 2014(Publication Date)
    • CRC Press
      (Publisher)
    With the arrival of shared computing resources that supported multiple simultaneous users, memory constraints became more visible. Unlike today’s memory expansion approach, for which additional memory is available and affordable, memory was both expensive and limited at the birth of mainframes because early computers could not support memory expansion. This was a significant problem for computer designers.
    This problem was solved by “virtualizing” the computer’s memory. The Virtual Memory technology used permitted a computer with a limited amount of physical memory to use the local hard disk storage system along with physical memory to make it appear that the computer had more memory available for use by application software.
    Initially, Virtual Memory referred to the concept of using a computer’s hard disk to extend a computer’s physical memory. The idea was that programs running on the computer would not need to distinguish whether the memory was “real” memory (i.e., random access memory or RAM) or disk-based. The OS and hardware would determine the actual memory location. This solved the problem of trying to run programs that were larger than the computer’s memory.
    Later, Virtual Memory was used as a means of memory protection. Every program uses a range of addresses called the address space. The use of Virtual Memory for memory protection solved the problem of allowing multiple users to use the same computer at the same time. The use of Virtual Memory prevents programs from interfering with each other. If a user’s process tries to access an address that is not part of available address space, then an error occurs. The OS assumes control. The process is usually killed or terminated as a safeguard.
    A computer that has been programmed to make use of Virtual Memory has the additional task of managing its memory. The computer inspects RAM to determine which programs or data loaded into physical memory have not been used recently. Once such areas have been identified, the central processing unit (CPU) copies them from RAM to the computer’s hard drive. The RAM that this information had been using is then available for use by other applications.
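The "not used recently" inspection described above is, in essence, a least-recently-used (LRU) eviction policy. A minimal sketch, assuming an LRU policy and hypothetical names (real operating systems use approximations of LRU, not this exact structure):

```python
from collections import OrderedDict

# Sketch of LRU page eviction: track pages in access order and, when
# frames run out, swap out the page touched longest ago.
class LRUPages:
    def __init__(self, frames: int):
        self.frames = frames
        self.resident = OrderedDict()   # page -> contents, oldest first

    def access(self, page):
        if page in self.resident:
            self.resident.move_to_end(page)   # freshly used
            return None                       # hit: nothing evicted
        evicted = None
        if len(self.resident) >= self.frames:
            evicted, _ = self.resident.popitem(last=False)  # oldest out
        self.resident[page] = object()        # "load" into a free frame
        return evicted

mem = LRUPages(frames=2)
mem.access(1); mem.access(2)
evicted = mem.access(3)   # no free frame: page 1 is least recently used
```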
  • Encyclopedia of Computer Science and Technology

    Volume 14 - Very Large Data Base Systems to Zero-Memory and Markov Information Source

    • Jack Belzer, Albert G. Holzman, Allen Kent(Authors)
    • 2021(Publication Date)
    • CRC Press
      (Publisher)
    VIRTUAL MEMORY SYSTEMS

    INTRODUCTION

    The main or primary memory of a computer system is the memory in which the instructions and data of a program must reside in order to be accessed by the processor. The maximum amount of main memory a processor is capable of addressing is referred to as its physical address space. A direct relationship exists between the cost of a system and the size of its physical address space. The larger it is, the greater the cost due to the increased number of bits and, thus, the increased amount of circuitry required for addressing. Because addresses, or components of addresses, are stored as part of instructions, larger address spaces can necessitate larger memory word sizes or result in multiple word instructions. It is a challenging problem to minimize the cost of the system and still provide a reasonable physical address space.
    Another truism related to memories is the faster the memory, the higher its cost. Systems have been designed having several different memories of varying size and speed, i.e., a memory hierarchy, since the earliest days of electronic computing. The minimal hierarchy consists of two levels, a relatively small amount of fast primary memory and a larger amount of slower secondary memory. Several different strategies have evolved which distribute instructions and data among the various memories in the hierarchy with the goal of keeping those instructions and data most likely to be referenced in the fastest memory.
    In recent years the term "Virtual Memory" has come to be associated with specific systems, those that are paged and/or segmented. Memory mappings of various sorts and for various reasons, however, are an integral part of most computer systems. Any system having an address mapping can be thought of as having Virtual Memory. Indeed, primitive address transformations are the basis from which paged and/or segmented Virtual Memory systems were developed.
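The cost/size relationship described above follows directly from the width of an address: each additional address bit doubles the physical address space, and wider addresses mean more addressing circuitry and wider address fields inside instructions. A small worked example:

```python
# Worked example: the size of a physical address space as a function
# of address width.
def address_space_bytes(address_bits: int) -> int:
    return 2 ** address_bits

small = address_space_bytes(16)   # 16 address bits reach 64 KiB
large = address_space_bytes(32)   # 32 address bits reach 4 GiB
```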
  • C++ High Performance
    • Viktor Sehr, Bjorn Andrist(Authors)
    • 2018(Publication Date)
    • Packt Publishing
      (Publisher)
    If one process uses a lot of memory, the other processes will most likely be affected. But from a programmer's perspective, we usually don't have to bother about the memory that is being used by other processes. This isolation of memory is due to the fact that most operating systems today are Virtual Memory operating systems, which provide the illusion that a process has all the memory for itself. Each process has its own virtual address space.

    The virtual address space

    Addresses in the virtual address space that programmers see are mapped to physical addresses by the operating system and the memory management unit (MMU), which is a part of the processor. This mapping or translation happens each time we access a memory address. This extra layer of indirection makes it possible for the operating system to use physical memory for the parts of a process that are currently being used and back up the rest of the Virtual Memory on disk. In this sense, we can see the physical main memory as a cache for the Virtual Memory space, which resides on secondary storage. The areas of the secondary storage that are used for backing up memory pages are usually called swap space, swap file, or simply pagefile, depending on the operating system. Virtual Memory makes it possible for processes to have a virtual address space bigger than the physical address space, since Virtual Memory that is not in use does not have to occupy physical memory.

    Memory pages

    The most common way to implement Virtual Memory today is to divide the address space into fixed-size blocks called memory pages. When a process accesses memory at a virtual address, the operating system checks whether the memory page is backed by physical memory (a page frame). If the memory page is not mapped in main memory, a hardware exception occurs and the page is loaded from disk into memory. This type of hardware exception is called a page fault
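The page-fault path described in this excerpt can be sketched as follows. All names here are illustrative, and the "backing store" dictionary is a stand-in for the swap area, not a real OS interface:

```python
# Hedged sketch of the page-fault path: a load checks whether the page
# is resident and, on a miss, first "reads" it in from backing store.
PAGE_SIZE = 4096
backing_store = {0: b"\x00" * PAGE_SIZE, 1: b"\x01" * PAGE_SIZE}
resident_pages = {}        # virtual page number -> page frame contents
fault_count = 0

def load_byte(vaddr: int) -> int:
    global fault_count
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    if vpn not in resident_pages:            # page fault
        fault_count += 1
        resident_pages[vpn] = bytearray(backing_store[vpn])
    return resident_pages[vpn][offset]

first = load_byte(4096)    # faults page 1 in, then reads a byte
second = load_byte(4097)   # same page is now resident: no new fault
```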
  • Modern Computer Architecture and Organization
    • Jim Ledin, Dave Farley(Authors)
    • 2022(Publication Date)
    • Packt Publishing
      (Publisher)
    Processor and Memory Architectures, in some depth: Virtual Memory. Virtual Memory uses software, with supporting hardware, to create an environment in which each running application functions as if it has exclusive access to the entire computer, including all the memory it requires at the addresses it expects. This allows the virtual address ranges used by a program to be the same as those in use by other currently running processes.
    Systems using Virtual Memory create multiple sandboxed environments in which each application runs without interference from other applications, except in competition for shared system resources.
    In the virtualization context, a sandbox is an isolated environment in which code runs without interference from anything outside its boundaries, and which prevents code inside the sandbox from affecting resources external to it. This isolation between applications is rarely absolute, however. For example, even though a process in a Virtual Memory system cannot access another process’s memory, it may do something else, such as delete a file that is needed by a second process, which may cause problems for the other process.
    Our primary focus in this chapter will be on virtualization at the processor level, which allows one or more operating systems to run in a virtualized environment on a computer system. This virtual environment operates at an abstracted level relative to the system’s physical hardware.
    The next section will briefly describe the various categories of virtualization you are likely to encounter.

    Types of virtualization

    The term virtualization is applied in several different computing contexts, especially in larger network environments such as businesses, universities, government organizations, and cloud service providers. The definitions that follow will cover the most common types of virtualization you are likely to come across.

    Operating system virtualization

    A virtualized operating system runs under the control of a hypervisor. A hypervisor is a combination of software and hardware that is capable of instantiating and running virtual machines. A virtual machine is an emulation of an entire computer system. The prefix hyper in hypervisor refers to the fact that the hypervisor is more privileged than the supervisor mode of the operating systems running in its virtual machines. Another term for hypervisor is virtual machine monitor
  • Memory Systems

    Cache, DRAM, Disk

    • Bruce Jacob, David Wang, Spencer Ng(Authors)
    • 2010(Publication Date)
    • Morgan Kaufmann
      (Publisher)
    In this regard, it is the primary consumer of the memory system: its procedures, data structures, and protocols dictate how the components of the memory system are used by all software that runs on the computer. It therefore behooves the reader to know what the Virtual Memory system does and how it does it. This section provides a brief overview of the mechanics of Virtual Memory. More detailed treatments of the topic can also be found on-line in articles by the author [Jacob & Mudge 1998a–c].

    In general, programs today are written to run on no particular hardware configuration. They have no knowledge of the underlying memory system. Processes execute in imaginary address spaces that are mapped onto the memory system (including the DRAM system and disk system) by the operating system. Processes generate instruction fetches and loads and stores using imaginary or “virtual” names for their instructions and data. The ultimate home for the process’s address space is nonvolatile permanent store, usually a disk drive; this is where the process’s instructions and data come from and where all of its permanent changes go to. Every hardware memory structure between the CPU and the permanent store is a cache for the instructions and data in the process’s address space. This includes main memory—main memory is really nothing more than a cache for a process’s virtual address space. A cache operates on the principle that a small, fast storage device can hold the most important data found on a larger, slower storage device, effectively making the slower device look fast. The large storage area in this case is the process address space, which can range from kilobytes to gigabytes or more in size. Everything in the address space initially comes from the program file stored on disk or is created on demand and defined to be zero. This is illustrated in Figure Ov.18 (“Caching the process address space”).
  • Essentials of Computer Architecture
    VM) to refer to a mechanism that hides the details of the underlying physical memory to provide a more convenient memory environment. In essence, a Virtual Memory system creates an illusion — an address space and a memory access scheme that overcome limitations of the physical memory and physical addressing scheme. The definition may seem vague, but we need to encompass a wide variety of technologies and uses. The next sections will define the concept more precisely by giving examples of Virtual Memory systems that have been created and the technologies used to implement each. We will learn that the variety in Virtual Memory schemes arises because no single scheme is optimal in all cases.
    We have already seen an example of a memory system that fits our definition of Virtual Memory in Chapter 11: an intelligent memory controller that provides byte addressing with an underlying physical memory that uses word addressing. The implementation consists of a controller that allows a processor to specify requests using byte addressing. We further saw that choosing sizes to be powers of two avoids arithmetic computation and makes the translation of byte addresses to word addresses trivial.
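When the word size is a power of two, the byte-to-word translation described above reduces to a shift and a mask. A brief sketch, assuming a 4-byte word (the word size is an assumption for illustration):

```python
# Sketch of byte-to-word address translation with a power-of-two word
# size: no division is needed, only a shift and a mask.
WORD_BYTES = 4                         # assumed word size
SHIFT = WORD_BYTES.bit_length() - 1    # log2(4) == 2

def byte_to_word(byte_addr: int) -> tuple[int, int]:
    """Return (word address, byte offset within the word)."""
    return byte_addr >> SHIFT, byte_addr & (WORD_BYTES - 1)

word, byte_in_word = byte_to_word(13)   # word 3, byte 1
```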

    13.3 Memory Management Unit And Address Space

    Architects use the term Memory Management Unit (MMU) to describe an intelligent memory controller. An MMU creates a virtual address space for the processor. The addresses a processor uses are virtual addresses because the MMU translates each one into an address in the underlying physical memory. We classify the entire mechanism as a Virtual Memory system because it is not part of the underlying physical memory.
    Informally, to help distinguish Virtual Memory from physical memory, engineers use the adjective real to refer to a physical memory. For example, they might use the term real address to refer to a physical address, or the term real address space to refer to the set of addresses recognized by the physical memory.

    13.4 An Interface To Multiple Physical Memory Systems

    An MMU that can map from byte addresses to underlying word addresses can be extended to create more complex memory organizations. For example, Intel designed a network processor that used two types of physical memory: SRAM and DRAM. Recall that SRAM is faster than DRAM, but costs more, so the system had a smaller amount of SRAM (intended for items that were accessed frequently) and a large amount of DRAM (intended for items that were not accessed frequently). Furthermore, the SRAM physical memory was organized with four bytes per word and the DRAM physical memory was organized with eight bytes per word. Intel’s network processor used an embedded RISC processor that could access both memories. More important, the RISC processor used byte addressing. However, rather than using separate instructions or operand types to access the two memories, the Intel design followed a standard approach: it integrated both physical memories into a single virtual address space.
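One way to picture a single virtual address space spanning two physical memories is to route each address by range. The split point and word arithmetic below are assumptions for the sketch, not Intel's actual memory map:

```python
# Illustrative routing of one flat virtual byte-address space onto two
# physical memories with different word sizes (4-byte SRAM words,
# 8-byte DRAM words, as in the excerpt).
SRAM_LIMIT = 0x1000_0000   # virtual addresses below this go to SRAM

def route(vaddr: int) -> tuple[str, int, int]:
    """Return (memory, word address, byte offset within the word)."""
    if vaddr < SRAM_LIMIT:
        return ("SRAM", vaddr >> 2, vaddr & 0x3)   # 4-byte words
    off = vaddr - SRAM_LIMIT
    return ("DRAM", off >> 3, off & 0x7)           # 8-byte words

sram_hit = route(0x10)              # lands in SRAM, word 4
dram_hit = route(SRAM_LIMIT + 9)    # lands in DRAM, word 1, byte 1
```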
  • Hands-On System Programming with Linux

    Explore Linux system programming interfaces, theory, and practice

    Virtual Memory

    Coming back to this chapter, we will look at the meaning and purpose of Virtual Memory (VM) and, importantly, why it is a key and required concept. We will cover the meaning and importance of VM, paging and address translation, the benefits of using VM, the memory layout of a process in execution, and the internal layout of a process as seen by the kernel. We shall also delve into what segments make up the process virtual address space. This knowledge is indispensable in difficult-to-debug situations.
    In this chapter, we will cover the following topics:
    • Virtual Memory
    • Process virtual address space

    Technical requirements

    A modern desktop PC or laptop is required; Ubuntu Desktop specifies the following as recommended system requirements for installation and usage of the distribution:
    • 2 GHz dual core processor or better
    • RAM
      • Running on a physical host : 2 GB or more system memory
      • Running as a guest : The host system should have at least 4 GB RAM (the more, the better and smoother the experience)
    • 25 GB of free hard drive space
    • Either a DVD drive or a USB port for the installer media
    • Internet access is definitely helpful
    We recommend the reader use one of the following Linux distributions (can be installed as a guest OS on a Windows or Linux host system, as mentioned):
    • Ubuntu 18.04 LTS Desktop (Ubuntu 16.04 LTS Desktop is a good choice too as it has long term support as well, and pretty much everything should work)
      • Ubuntu Desktop download link: https://www.ubuntu.com/download/desktop
    • Fedora 27 (Workstation)
      • Download link:
        https://getfedora.org/en_GB/workstation/download/
    Note that these distributions are, in their default form, OSS and non-proprietary, and free to use as an end user.
    There are instances where the entire code snippet isn't included in the book. The full code can be found in the book's GitHub repository: https://github.com/PacktPublishing/Hands-on-System-Programming-with-Linux
  • Linux Device Driver Development
    Chapter 10 : Understanding the Linux Kernel Memory Allocation
    Linux systems use an illusion referred to as "Virtual Memory." This mechanism makes every memory address virtual, meaning that addresses do not point directly to any location in RAM. This way, whenever we access a memory location, a translation mechanism is performed in order to find the corresponding physical memory location.
    In this chapter, we will deal with the whole Linux memory allocation and management system, covering the following topics:
    • An introduction to Linux kernel memory-related terms
    • Demystifying address translation and MMU
    • Dealing with memory allocation mechanisms
    • Working with I/O memory to talk with hardware
    • Memory remapping

    An introduction to Linux kernel memory-related terms

    Though system memory (also known as RAM) can be extended in some computers that allow it, physical memory is a limited resource in computer systems.
    Virtual Memory is a concept, an illusion given to each process so that it thinks it has large and almost infinite memory, sometimes more than the system really has. To set the stage, we will introduce the terms address space, virtual (or logical) address, physical address, and bus address:
    • A physical address identifies a physical (RAM) location. Because of the Virtual Memory mechanism, the user or the kernel never directly deals with the physical address but can access it by its corresponding logical address.
    • A virtual address does not necessarily exist physically. It is the address the CPU uses to reference a memory location, with the Memory Management Unit (MMU) performing the translation on the CPU's behalf. The MMU sits between the CPU core and memory and is most often part of the physical CPU itself; on ARM architectures, it is part of the licensed core. The MMU is responsible for converting virtual addresses into physical addresses every time a memory location is accessed. This mechanism is called address translation
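The address-translation step described above can be sketched with a single-level page table. The page size and the mappings below are illustrative values, not taken from any particular architecture:

```python
# Sketch of MMU-style address translation: split the virtual address,
# look the page number up, and rebuild the physical address from the
# frame number. An unmapped page raises a fault.
PAGE_SHIFT = 12                       # 4 KiB pages
page_table = {0x2: 0x7, 0x3: 0x1}     # virtual page -> physical frame

def translate(vaddr: int) -> int:
    vpn = vaddr >> PAGE_SHIFT
    offset = vaddr & ((1 << PAGE_SHIFT) - 1)
    if vpn not in page_table:
        raise MemoryError(f"fault: no mapping for page {vpn:#x}")
    return (page_table[vpn] << PAGE_SHIFT) | offset

paddr = translate(0x2ABC)             # page 0x2 maps to frame 0x7
```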
  • Principles of Computer System Design
    • Jerome H. Saltzer, M. Frans Kaashoek(Authors)
    • 2009(Publication Date)
    • Morgan Kaufmann
      (Publisher)
    Virtual address spaces . A single address space may not be large enough to hold all addresses of all applications at the same time. For example, a single large database program by itself may need all the address space available in the hardware. If we can create virtual address spaces, then we can give each program its own address space. This extension also allows a thread to have its program loaded at an address of its choosing (e.g., address 0).
    A memory manager that virtualizes memory is called a Virtual Memory manager . The design we work out in this section replaces the domain manager but incorporates the main features of domains: controlled sharing and permissions. We describe the Virtual Memory design in two steps. For the first step, Sections 5.4.1 and 5.4.2 introduce virtual addresses and describe an efficient way to translate them. For the second step, Section 5.4.3 introduces virtual address spaces. Section 5.4.4 discusses the trade-offs of software and hardware aspects of implementing a Virtual Memory manager. Finally, the section concludes with an advanced Virtual Memory design.

    5.4.1. Virtualizing Addresses

    The Virtual Memory manager will deal with two types of addresses, so it is convenient to give them names. The threads issue virtual addresses when reading and writing to memory (see Figure 5.16). The memory manager translates each virtual address issued by the processor into a physical address, a bus address of a location in memory or a register on a controller of a device.
    Figure 5.16 A Virtual Memory manager translating virtual addresses to physical addresses.
    Translating addresses as they are being used provides design flexibility. One can design computers whose physical addresses have a different width than their virtual addresses. The memory manager can translate several virtual addresses to the same physical address, but perhaps with different permissions. The memory manager can allocate virtual addresses to a thread but postpone allocating physical memory until the thread makes a reference to one of the virtual addresses.
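The point about several virtual addresses mapping to one physical address with different permissions can be illustrated directly. The table entries below are hypothetical:

```python
# Sketch: two virtual pages aliasing the same physical frame, one
# mapping writable and the other read-only.
page_table = {
    0x10: {"frame": 0x5, "writable": True},
    0x20: {"frame": 0x5, "writable": False},   # same frame, read-only
}
frames = {0x5: bytearray(4096)}

def store(vpn: int, offset: int, value: int) -> None:
    entry = page_table[vpn]
    if not entry["writable"]:
        raise PermissionError("write through a read-only mapping")
    frames[entry["frame"]][offset] = value

def load(vpn: int, offset: int) -> int:
    return frames[page_table[vpn]["frame"]][offset]

store(0x10, 0, 42)
shared = load(0x20, 0)     # the write is visible through the alias
```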
  • Computer Principles and Design in Verilog HDL
    • Yamin Li(Author)
    • 2015(Publication Date)
    • Wiley
      (Publisher)
    Chapter 11 Memory Hierarchy and Virtual Memory Management
    Memory is a temporary place for storing programs (instructions and data). It is commonly implemented with dynamic random access memory (DRAM). Because DRAM is slower than the CPU (central processing unit), an instruction cache and a data cache are fabricated inside the CPU. Not only the caches but also TLBs (translation lookaside buffers) are fabricated for fast translation from a virtual address to a physical memory address.
    This chapter describes the memory structures, cache organizations, Virtual Memory management, and TLB organizations. The mechanism of the TLB-based MIPS (microprocessor without interlocked pipeline stages) Virtual Memory management is also introduced.
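The TLB's role described above, a small cache of recent translations consulted before the slower page-table walk, can be sketched as follows; the mapping is a toy example, and the counter simply makes page-table walks visible:

```python
# Sketch of a TLB in front of a page table: hits skip the table walk.
page_table = {vpn: vpn + 100 for vpn in range(16)}   # vpn -> frame
tlb = {}                                             # cached entries
walks = 0                                            # page-table walks

def tlb_translate(vpn: int) -> int:
    global walks
    if vpn in tlb:                  # TLB hit: no walk needed
        return tlb[vpn]
    walks += 1                      # TLB miss: walk the page table
    tlb[vpn] = page_table[vpn]
    return tlb[vpn]

first = tlb_translate(3)            # miss: one walk
second = tlb_translate(3)           # hit: served from the TLB
```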

    11.1 Memory

    A computer consists of a CPU, the memory, and I/O interfaces. Memory is used to store programs that are being executed by the CPU. There are many types of memory, but we discuss only the following four types of memory in this book.
    1. SRAM (static random access memory), which is fast and expensive, is used to design caches and TLBs. Some high-performance computers also use it as the main memory.
    2. DRAM, which is large and inexpensive, is mainly used as the computer's main memory.
    3. ROM (read-only memory), which is nonvolatile and cheap, is typically used to store the computer's initial start-up program or firmware in embedded systems.
    4. CAM (content addressable memory), which is a very special memory, is mainly used to design a fully associative cache or TLB.
    Except for ROM, all memories are volatile. This means that when the power supply is off, the contents of the memory will be lost. The contents in such memories are not usable when the power supply is just turned on. Therefore, there must be a ROM in a computer or embedded system.
    “Random access” means that any location of the memory can be accessed directly by providing the address of that location. There are some other types of memory that cannot be accessed randomly, the FIFO (first-in first-out) memory, for instance.
Index pages curate the most relevant extracts from our library of academic textbooks. They’ve been created using an in-house natural language model (NLM), each adding context and meaning to key research topics.