Computer Science

Harvard Architecture

Harvard Architecture is a computer architecture that uses physically separate memories and buses for program instructions and data. Because instructions and data travel over separate buses, the processor can fetch an instruction and access data simultaneously. This architecture is commonly used in embedded systems and digital signal processing applications.
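
The separation can be pictured as two address spaces reached over independent paths, so an instruction fetch and a data access can proceed at the same time. The following C sketch models that idea in software; the toy instruction set and the names imem, dmem, and acc are illustrative assumptions for this page, not something defined in the excerpts below.

```c
/* Minimal software model of a Harvard-style machine: instruction memory and
 * data memory are distinct arrays reached through separate paths (here,
 * simply separate arrays). The toy instruction set is illustrative only. */
#include <stdint.h>
#include <stdio.h>

enum { OP_LOAD, OP_ADD, OP_STORE, OP_HALT };

typedef struct {
    uint8_t opcode;
    uint8_t operand;                        /* address in data memory */
} Instruction;

int main(void) {
    /* Instruction memory: read-only from the running program's point of view. */
    static const Instruction imem[] = {
        { OP_LOAD,  0 },                    /* acc  = dmem[0] */
        { OP_ADD,   1 },                    /* acc += dmem[1] */
        { OP_STORE, 2 },                    /* dmem[2] = acc  */
        { OP_HALT,  0 },
    };
    /* Data memory: a separate address space, read/write. */
    uint8_t dmem[16] = { 40, 2 };

    uint8_t acc = 0;
    for (size_t pc = 0; ; pc++) {
        const Instruction ins = imem[pc];   /* fetch over the instruction path */
        if (ins.opcode == OP_HALT)
            break;
        switch (ins.opcode) {               /* operand accesses use the data path */
        case OP_LOAD:  acc = dmem[ins.operand];  break;
        case OP_ADD:   acc += dmem[ins.operand]; break;
        case OP_STORE: dmem[ins.operand] = acc;  break;
        }
    }
    printf("dmem[2] = %u\n", (unsigned)dmem[2]);  /* prints 42 */
    return 0;
}
```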

Written by Perlego with AI-assistance

2 Key excerpts on "Harvard Architecture"

  • Modern Computer Architecture and Organization
    • Jim Ledin, Dave Farley (Authors)
    • 2022 (Publication Date)
    • Packt Publishing
      (Publisher)
    The Harvard Architecture potentially provides a higher performance level by parallelizing accesses to instructions and data. This architecture also removes the entire class of security issues associated with maliciously executing data as program instructions, provided the instruction memory cannot be modified by program instructions. This assumes the program memory is loaded with instructions in a trustworthy manner.
    In hindsight, with knowledge of the proliferation of von Neumann architecture-enabled security threats, there is reason to wonder if the entire information technology industry would not have been vastly better off had there been early agreement to embrace the Harvard Architecture and its complete separation of code and data memory regions, despite the costs involved.
    In practice, a strict Harvard Architecture is rarely used in modern computers. Several variants of the Harvard Architecture are commonly employed, collectively called modified Harvard Architectures. These architectures are the topic of the next section.

    The modified Harvard Architecture

    Computers designed with a modified Harvard Architecture have, in general, some degree of separation between program instructions and data. This reduces the effects of the von Neumann bottleneck and mitigates the security issues we’ve discussed. The separation between instructions and data is rarely absolute, however. While systems with modified Harvard Architectures contain separate program instruction and data memory regions, these processors typically support some means of storing data in program memory and, in some cases, storing instructions in data memory.
    The following diagram shows a modified Harvard Architecture representing many real-world computer systems:
    Figure 7.3: Modified Harvard Architecture
    As we saw in the previous chapter, digital signal processors (DSPs) achieve substantial benefits from the use of a Harvard-like architecture. By storing one numeric vector in instruction memory and a second vector in data memory, a DSP can execute one multiply-accumulate (MAC) operation per processor clock cycle. In these systems, instruction memory and the data elements it contains are typically read-only memory regions. This is indicated by the unidirectional arrow connecting the instruction memory to the processor in Figure 7.3. Consequently, only constant data values are suitable for storage in the instruction memory region.
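
The excerpt above describes holding one constant vector in instruction memory and a second, variable vector in data memory so that a DSP can sustain one MAC per clock. As a rough illustration of the same split on a small modified Harvard microcontroller, the following C sketch assumes an AVR target with avr-libc: the constant filter coefficients are placed in program (flash) memory with PROGMEM and read back with pgm_read_word(), while the sample buffer lives in ordinary data memory (SRAM). The names coeffs, samples, and fir_step are made up for this example.

```c
/* Sketch of a multiply-accumulate (MAC) loop on a modified Harvard
 * microcontroller, assuming an AVR target built with avr-gcc/avr-libc.
 * Constant data goes in the program memory region; variable data in SRAM. */
#include <stdint.h>
#include <avr/pgmspace.h>

#define NTAPS 4

/* Constant coefficient vector stored in the program (flash) memory region. */
static const int16_t coeffs[NTAPS] PROGMEM = { 3, -1, 4, -1 };

/* Variable sample vector stored in data memory (SRAM). */
static int16_t samples[NTAPS];

/* One FIR output: accumulate coefficient * sample across the taps.
 * On a DSP with single-cycle MAC hardware, each loop iteration maps onto
 * one multiply-accumulate instruction. */
int32_t fir_step(int16_t new_sample)
{
    /* Shift the delay line held in data memory. */
    for (uint8_t i = NTAPS - 1; i > 0; i--)
        samples[i] = samples[i - 1];
    samples[0] = new_sample;

    int32_t acc = 0;
    for (uint8_t i = 0; i < NTAPS; i++) {
        int16_t c = (int16_t)pgm_read_word(&coeffs[i]); /* read from flash */
        acc += (int32_t)c * samples[i];                 /* MAC step */
    }
    return acc;
}

int main(void)
{
    volatile int32_t y = fir_step(10);  /* first call: 3 * 10 = 30 */
    (void)y;
    for (;;) { }                        /* typical embedded idle loop */
}
```
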
  • VLSI RISC Architecture and Organization
    • S.B. Furber (Author)
    • 2017 (Publication Date)
    • Routledge
      (Publisher)
    The Harvard Architecture (figure 3) has one memory for instructions and a second for data. The name comes from the Harvard Mark 1, an electromechanical computer which pre-dates the stored-program concept of von Neumann, as does the architecture in this form. It is still used for applications which run fixed programs, in areas such as digital signal processing, but not for general-purpose computing. The advantage is the increased bandwidth available due to having separate communication channels for instructions and data; the disadvantage is that the storage is allocated to code and data in a fixed ratio.
    Figure 3 The Harvard Architecture
    Architectures with separate ports for instructions and data are often referred to by the term ‘Harvard Architecture’, even though the ports connect to a common memory. For instance, each port may be supplied from its own local cache memory (figure 4). The cache memories reduce the external bandwidth requirements sufficiently to allow them both to be connected to the same main memory, giving the bandwidth advantage of a Harvard Architecture along with most of the flexibility of the simple von Neumann architecture. (The flexibility may be somewhat reduced because of cache consistency problems with self-modifying code.) Note that this type of Harvard Architecture is still a von Neumann machine.
    Although the Harvard Architecture (modified or not) offers double the bandwidth of the simple von Neumann architecture, this will only allow double performance when the instruction and data traffic are equal. The VAX-11/780 has been used for a lot of measurements of memory traffic, and these measurements tend to suggest a reasonable match. RISC processors have two characteristics which make the match less good than the VAX. Firstly, they usually have less dense code, which increases the instruction traffic. Secondly, they are register to register rather than memory to memory architectures. This causes compiler writers to be much more careful about register usage, which can in turn result in much less data traffic. The use of register windows also eliminates considerable data traffic associated with procedure calls. A RISC CPU can typically require an instruction bandwidth six to ten times the data bandwidth (Patterson and Sequin, 1981), so that a Harvard Architecture may only allow a ten percent speed-up (rather than the one hundred percent that would be suggested by the VAX statistics). Even so, the dual ported Harvard Architecture has become a popular choice with RISC designers.
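
The bandwidth argument in the last paragraph can be made concrete with a back-of-the-envelope model: a single shared path must carry instruction traffic plus data traffic, while separate paths overlap them and the busier stream sets the pace, giving a speed-up of roughly (I + D) / max(I, D). The short C sketch below applies that formula to the 1:1 (VAX-like) and 6:1 to 10:1 (RISC-like) ratios mentioned in the excerpt; the formula itself is a simplifying assumption used for illustration.

```c
/* Back-of-the-envelope model of the dual-bus speed-up: a unified bus carries
 * instruction and data traffic in sequence, while separate buses overlap them,
 * so the busier stream becomes the bottleneck. */
#include <stdio.h>

static double harvard_speedup(double instr_traffic, double data_traffic)
{
    double unified = instr_traffic + data_traffic;                /* one shared bus */
    double dual    = instr_traffic > data_traffic ? instr_traffic
                                                  : data_traffic; /* bottleneck bus */
    return unified / dual;
}

int main(void)
{
    printf("I:D = 1:1  -> %.0f%% faster\n", (harvard_speedup(1, 1) - 1) * 100);  /* 100 */
    printf("I:D = 6:1  -> %.0f%% faster\n", (harvard_speedup(6, 1) - 1) * 100);  /* ~17 */
    printf("I:D = 10:1 -> %.0f%% faster\n", (harvard_speedup(10, 1) - 1) * 100); /* 10  */
    return 0;
}
```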