Computer Science

Cache size

Cache size refers to the amount of data that can be stored in a computer's cache memory. The cache is a small, high-speed memory that stores frequently accessed data to improve the performance of the computer. A larger cache can hold more of a program's working set, so a greater share of accesses is served at cache speed, which tends to shorten processing times and improve overall performance.

Written by Perlego with AI-assistance

4 Key excerpts on "Cache size"

• DSP Software Development Techniques for Embedded and Real-Time Systems
  • Robert Oshana (Author)
  • 2006 (Publication Date)
  • Newnes (Publisher)
Cache Optimization in DSP and Embedded Systems
A cache is an area of high-speed memory linked directly to the embedded CPU. The embedded CPU can access information in the processor cache much more quickly than information stored in main memory. Frequently used data is stored in the cache.
There are different types of caches, but they all serve the same basic purpose: they store recently used information in a place where it can be accessed very quickly. One common type is a disk cache, which stores information you have recently read from your hard disk in the computer's RAM. Because accessing RAM is much faster than reading from the hard disk, this helps you reach commonly used files or folders much faster. Another type is a processor cache, which stores information right next to the processor, making the processing of common instructions more efficient and thereby speeding up computation.
Transferring data efficiently from external memory to the CPU has historically been difficult. This matters because a processor's functional units must be kept busy to achieve high performance, yet the gap between memory speed and CPU speed keeps widening. Both RISC and CISC architectures use a memory hierarchy to offset this gap, achieving high performance by exploiting data locality.

    Principle of locality

The principle of locality says that a program accesses a relatively small portion of its overall address space at any point in time. When a program reads data at address N, it is likely to read data at address N+1 in the near future (spatial locality) and to reuse recently read data several times (temporal locality). Locality is what makes a hierarchy work: the hierarchy's speed is approximately that of its uppermost level, while its overall cost and capacity are those of the lowermost level. From top to bottom, a memory hierarchy contains registers, one or more levels of cache, main memory, and disk (Figure C.1).
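To make the two kinds of locality concrete, here is a minimal C sketch (our illustration, not from the excerpt): both functions compute the same sum over a 2-D array, but the row-major version walks consecutive addresses and so benefits from spatial locality, while the column-major version strides across rows and tends to miss in the cache for large arrays.

```c
#include <stddef.h>

#define ROWS 1024
#define COLS 1024

static double grid[ROWS][COLS];

/* Cache-friendly: the inner loop touches addresses N, N+1, N+2, ... */
double sum_row_major(void) {
    double total = 0.0;
    for (size_t i = 0; i < ROWS; i++)
        for (size_t j = 0; j < COLS; j++)
            total += grid[i][j];
    return total;
}

/* Cache-hostile: the inner loop jumps COLS * sizeof(double) bytes per step. */
double sum_col_major(void) {
    double total = 0.0;
    for (size_t j = 0; j < COLS; j++)
        for (size_t i = 0; i < ROWS; i++)
            total += grid[i][j];
    return total;
}
```

The loop bodies are identical; only the traversal order changes, which is why the two versions can differ sharply in run time on a cached memory hierarchy.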
  • Programming for Problem-solving with C
    Formulating algorithms for complex problems (English Edition)

Cache memory is located between the ultra-fast registers and main memory. It holds frequently used data that the CPU needs again and again, and it is made up of SRAM chips. Keeping repeatedly required data in SRAM lets the CPU avoid accessing the slower DRAM main memory, which enhances the computer's performance, since SRAM chips are faster than DRAM chips. Cache memory is generally divided into levels:
• L1 cache: Present on the CPU chip (internal cache).
• L2 cache: Built outside the CPU, on the motherboard; larger than L1.
• L3 cache: An extra cache, not always present, built outside the CPU on the motherboard. L3 is larger than the L1 and L2 caches but still faster than main memory.
The sizes of cache memory are generally given in KB and MB. The connection of the memories to the CPU (address, data, and control buses) is shown in Figure 2.16.
    Figure 2.16: Connection of memories with CPU
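As a concrete aside (a minimal sketch, assuming Linux with glibc, which exposes these values as sysconf() extensions; other platforms may return 0 or -1), the per-level cache sizes described above can be queried at run time:

```c
#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* The _SC_LEVEL*_CACHE_SIZE names are glibc extensions, not POSIX. */
    long l1d = sysconf(_SC_LEVEL1_DCACHE_SIZE);
    long l2  = sysconf(_SC_LEVEL2_CACHE_SIZE);
    long l3  = sysconf(_SC_LEVEL3_CACHE_SIZE);

    printf("L1 data cache: %ld bytes\n", l1d);
    printf("L2 cache:      %ld bytes\n", l2);
    printf("L3 cache:      %ld bytes\n", l3);
    return 0;
}
```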
• Main memory: It can hold data on the order of GB (gigabytes); a modern computer has 4 GB, 8 GB, or 16 GB of RAM. Main memory (RAM) is also present inside the computer, on the motherboard, and cannot be detached from it.
• Secondary memory: It can hold data on the order of TB (terabytes); a modern computer has 1 TB, 2 TB, or 4 TB of storage. It can be internal (online) or external (offline), and it is treated as effectively infinite memory because more can be added to the computer when required.
    Measuring the memory
The smallest unit for measuring memory is the bit; one bit holds a 0 or a 1. A group of four bits is known as a nibble, and a group of eight bits is a byte. Table 2.1 details the memory units.
Name          Description     In base 2     In base 10     Symbol
1 Bit         Binary digit    0 or 1        0 or 1         Bit
1 Nibble      4 bits          2^2 bits                     Nibble
1 Byte        8 bits          2^3 bits                     B
1 Kilobyte    1,024 Bytes     2^10 Bytes    10^3 Bytes     KB
1 Megabyte    1,024 KB        2^20 Bytes    10^6 Bytes     MB
1 Gigabyte    1,024 MB        2^30 Bytes    10^9 Bytes     GB
1 Terabyte    1,024 GB        2^40 Bytes    10^12 Bytes    TB
1 Petabyte    1,024 TB        2^50 Bytes    10^15 Bytes    PB
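A small C snippet can illustrate the base-2 boundaries in Table 2.1 (our example, not the book's): each unit is 2^10 times the previous one, which maps naturally onto bit shifts.

```c
#include <stdio.h>

int main(void) {
    unsigned long long kb = 1ULL << 10;  /* 1 KB = 2^10 bytes = 1,024         */
    unsigned long long mb = 1ULL << 20;  /* 1 MB = 2^20 bytes = 1,048,576     */
    unsigned long long gb = 1ULL << 30;  /* 1 GB = 2^30 bytes = 1,073,741,824 */
    unsigned long long tb = 1ULL << 40;  /* 1 TB = 2^40 bytes                 */

    printf("1 KB = %llu bytes\n", kb);
    printf("1 MB = %llu bytes\n", mb);
    printf("1 GB = %llu bytes\n", gb);
    printf("1 TB = %llu bytes\n", tb);
    return 0;
}
```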
  • Computer Systems Architecture
Since the cache is usually a smaller memory (in terms of capacity), it can be more expensive per byte; yet the added cost has a very marginal effect on the overall system's price. This combination provides the better of the two solutions: on the one hand, a fast memory that enhances performance; on the other, a negligible increase in price. Every time the processor needs data or instructions, it looks first in the cache memory. If the item is found there, the access time is very fast; if it is not in the cache, it has to be brought from memory, and the access takes longer (Figure 6.6).
To better understand the concept of cache memory, consider a more familiar example. Assume that a student has to write a seminar paper and is working in the library. The student has a limited-sized desk on which only one book can be used at a time. Every time he or she has to refer to or cite another bibliographic source, the current book has to be returned to the shelf and the new one brought to the desk. Sometimes it is a very large library, and the books needed may be on a different floor; the desk may even be located in a special area far from the shelves, or in a different building. Because library rules permit only one book on the desk, before using a new book the current one has to be returned. Even if the book used several minutes ago is needed once more, this does not shorten the time: the student has to return the current book and bring the previous one out again.
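The one-book desk maps directly onto a one-entry cache. Here is a hypothetical C sketch of that policy (our illustration of the analogy, not code from the book): a lookup is fast when the requested item is already "on the desk", and otherwise the current item is evicted before the slow fetch, even if it will be needed again minutes later.

```c
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    int  key;    /* which "book" is on the desk */
    int  value;  /* its contents                */
    bool valid;  /* is anything on the desk?    */
} OneSlotCache;

/* Stand-in for the slow trip to the shelves (main memory). */
static int fetch_from_shelf(int key) {
    printf("  slow fetch of item %d\n", key);
    return key * 10;  /* dummy contents */
}

int lookup(OneSlotCache *c, int key) {
    if (c->valid && c->key == key)
        return c->value;              /* hit: already on the desk */
    c->key   = key;                   /* miss: evict, then fetch  */
    c->value = fetch_from_shelf(key);
    c->valid = true;
    return c->value;
}

int main(void) {
    OneSlotCache desk = { .valid = false };
    lookup(&desk, 1);  /* miss */
    lookup(&desk, 1);  /* hit  */
    lookup(&desk, 2);  /* miss: item 1 goes back to the shelf      */
    lookup(&desk, 1);  /* miss again, just like the student's book */
    return 0;
}
```

Real caches hold many entries, which is exactly what spares them the student's repeated round trips.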
  • Computer System Design
• Michael J. Flynn, Wayne Luk (Authors)
• 2011 (Publication Date)
• Wiley (Publisher)
    The cache designer must deal with the processor’s accessing requirements on the one hand, and the memory system’s requirements on the other. Effective cache designs balance these within cost constraints.
    4.4 BASIC NOTIONS
    Processor references contained in the cache are called cache hits. References not found in the cache are called cache misses. On a cache miss, the cache fetches the missing data from memory and places it in the cache. Usually, the cache fetches an associated region of memory called the line. The line consists of one or more physical words accessed from a higher-level cache or main memory. The physical word is the basic unit of access to the memory.
    The processor–cache interface has a number of parameters. Those that directly affect processor performance (Figure 4.4 ) include the following:
    1. Physical word—unit of transfer between processor and cache.
Typical physical word sizes:
• 2–4 bytes: the minimum, used in small core-type processors
• 8 bytes and larger: used in multiple-instruction-issue (superscalar) processors
2. Block size (sometimes called line)—usually the basic unit of transfer between cache and memory. It consists of n physical words transferred from the main memory via the bus.
3. Access time for a cache hit—this is a property of the cache size and organization.
    4. Access time for a cache miss—property of the memory and bus.
    5. Time to compute a real address given a virtual address (not-in-translation lookaside buffer [TLB] time)—property of the address translation facility.
    6. Number of processor requests per cycle.
    Figure 4.4 Parameters affecting processor performance.
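To connect parameters 1–3, here is a hypothetical sketch of how one common organization (direct-mapped, not specific to this excerpt) splits a byte address into tag, line index, and offset, given a cache size and line size; the specific sizes below are our assumptions:

```c
#include <stdint.h>
#include <stdio.h>

#define CACHE_SIZE (32 * 1024)              /* assumed 32 KB cache  */
#define LINE_SIZE  64                       /* assumed 64-byte line */
#define NUM_LINES  (CACHE_SIZE / LINE_SIZE)

/* Split a byte address into offset-within-line, line index, and tag. */
static void split_address(uint32_t addr) {
    uint32_t offset = addr % LINE_SIZE;
    uint32_t index  = (addr / LINE_SIZE) % NUM_LINES;
    uint32_t tag    = addr / ((uint32_t)LINE_SIZE * NUM_LINES);
    printf("addr 0x%08x -> tag 0x%x, index %u, offset %u\n",
           (unsigned)addr, (unsigned)tag, (unsigned)index, (unsigned)offset);
}

int main(void) {
    split_address(0x0001F44Cu);
    return 0;
}
```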
    Cache performance is measured by the miss rate or the probability that a reference made to the cache is not found. The miss rate times the miss time is the delay penalty due to the cache miss. In simple processors, the processor stalls on a cache miss.
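As a worked illustration with made-up numbers (the formula follows directly from the definition above: delay penalty = miss rate × miss time), a simple stalling processor sees an effective access time of the hit time plus that penalty:

```c
#include <stdio.h>

/* Effective access time = hit time + miss rate * miss time. */
static double effective_access_time(double hit_time_ns,
                                    double miss_rate,
                                    double miss_time_ns) {
    return hit_time_ns + miss_rate * miss_time_ns;
}

int main(void) {
    /* Example numbers: 1 ns hit, 3% miss rate, 60 ns miss time
     * -> 1 + 0.03 * 60 = 2.8 ns on average. */
    printf("%.2f ns\n", effective_access_time(1.0, 0.03, 60.0));
    return 0;
}
```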
Index pages curate the most relevant extracts from our library of academic textbooks. They have been created using an in-house natural language model (NLM), with each page adding context and meaning to a key research topic.