Neuromorphic Photonics

About This Book

This book sets out to build bridges between the domains of photonic device physics and neural networks, providing a comprehensive overview of the emerging field of "neuromorphic photonics." It includes a thorough discussion of the evolution of neuromorphic photonics, from the advent of fiber-optic neurons to today's state-of-the-art integrated laser neurons, which are a current focus of international research. Neuromorphic Photonics explores candidate interconnection architectures and devices for integrated neuromorphic networks, along with key functionality such as learning. It is written at a level accessible to graduate students, while also serving as a comprehensive reference for experts in the field.

Information

Publisher: CRC Press
Year: 2017
ISBN: 9781315353494
Edition: 1
Pages: 412

Chapter 1 Neuromorphic Engineering
“The human brain performs computations inaccessible to the most powerful of today’s computers—all while consuming no more power than a light bulb. Understanding how the brain computes reliably with unreliable elements, and how different elements of the brain communicate, can provide the key to a completely new category of hardware (Neuromorphic Computing Systems) and to a paradigm shift for computing as a whole. The economic and industrial impact is potentially enormous.”
Human Brain Project (2014)
Complexity manifests in our world in countless ways [1, 2]. Social interactions between large populations of individuals [3], physical processes such as protein folding [4], and biological systems such as human brain area function [5] are examples of complex systems, each of which has enormous numbers of interacting parts that lead to emergent global behavior. Emergence is a type of behavior that arises only when many elements in a system interact strongly and with variation [6], and it is very difficult to capture with reductionist models. Understanding, extracting knowledge about, and creating predictive models of emergent phenomena in networked groups of dynamical units represent some of the most challenging questions facing society and scientific investigation. Emergent phenomena play an important role in gene expression, brain disease, homeland security, and condensed matter physics. Analyzing complex and emergent phenomena requires data-driven approaches in which reams of data are synthesized by computational tools into probable models and predictive knowledge. Most current approaches to complex system and big-data analysis are software solutions that run on traditional von Neumann machines; however, the interconnected structure that leads to emergent behavior is precisely the reason why complex systems are difficult to reproduce in conventional computing frameworks. Memory and data interaction bandwidths greatly constrain the types of informatic systems that are feasible to simulate.
The human brain is believed to be the most complex system in the universe. It has approximately 10¹¹ neurons, with each neuron connected to up to 10,000 others, communicating via as many as 10¹⁵ synaptic connections. The brain is also indubitably a natural standard for information processing, one to which artificial processing systems have been compared since their earliest inception. It is estimated to perform between 10¹³ and 10¹⁶ operations per second while consuming only 25 W of power [7]. Such exceptional performance is due in part to the neuron's biochemistry, its underlying architecture, and the biophysics of neuronal computation algorithms. As a processor, the brain differs radically from today's computers, both at the physical level and at the architectural level.
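As a quick sanity check on these figures, the implied energy per operation can be computed directly. The sketch below uses only the numbers quoted above (the 25 W power budget and the 10¹³ to 10¹⁶ operations-per-second range); it is back-of-envelope arithmetic, not a measurement from the book.

```python
# Energy per operation implied by the brain figures quoted above.
POWER_W = 25.0                      # quoted brain power budget
for ops_per_s in (1e13, 1e16):      # quoted range of operations per second
    j_per_op = POWER_W / ops_per_s
    print(f"{ops_per_s:.0e} ops/s -> {j_per_op:.1e} J/op")
# 1e13 ops/s -> 2.5e-12 J/op (2.5 pJ); 1e16 ops/s -> 2.5e-15 J/op (2.5 fJ)
```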
Brain-inspired computing systems could potentially offer paradigm-defining degrees of data interconnection throughput (a key correlate of emergent behavior), which could enable the study of new regimes in signal processing, at least some of which (e.g., real-time complex system assurance and big data awareness) exhibit a pronounced societal need for new signal processing approaches. Unconventional computing platforms inspired by the human brain could simultaneously break performance limitations inherent in traditional von Neumann architectures for particular classes of problems.
Figure 1.1 Information processing with a standard von Neumann architecture. Instructions and data are both stored in memory and pass through a common bus to reach the processor; this interconnect bottleneck limits processor performance.
Conventional digital computers are based on the von Neumann architecture [8] (also called the Princeton architecture). As shown in Fig. 1.1, it consists of a memory that stores both data and instructions, a central processing unit (CPU), and inputs and outputs. Instructions and data stored in the memory unit lie behind a shared, multiplexed bus, which means the two cannot be accessed simultaneously. This leads to the well-known von Neumann bottleneck [9], which fundamentally limits the performance of the system, a problem that is aggravated as CPUs become faster and memory units larger. Nonetheless, this computing paradigm has dominated for over 60 years, driven in part by the continual progress dictated by Moore's law [10] for CPU scaling and Koomey's law [11] for energy efficiency (multiply-accumulate (MAC) operations per joule), which together compensated for the bottleneck. Over the last several years, though, this scaling has stalled, approaching an asymptote (see Fig. 1.2): computation efficiency levels off below 10 MMAC/mW (equivalently, 10 GMAC/W or 100 pJ per MAC) [12]. The reasons behind this trend can be traced to both the representation of information at the physical level and the interaction of processing with memory at the architectural level [13].
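To make the shared-bus constraint concrete, the toy model below contrasts a von Neumann machine, in which instruction fetches and data accesses serialize on one bus, with a Harvard-style machine that has split buses. This is a deliberately crude sketch; the function name and single-cycle timings are illustrative assumptions, not drawn from the book.

```python
# Toy cycle count for n instructions, each needing one instruction fetch,
# one data access, and one execute cycle. Illustrative assumptions only.
def cycles(n_instructions: int, shared_bus: bool = True,
           fetch: int = 1, data: int = 1, execute: int = 1) -> int:
    if shared_bus:
        # Von Neumann: instruction and data transfers share one bus,
        # so they must happen one after the other.
        per_instruction = fetch + data + execute
    else:
        # Harvard-style split buses: the two transfers can overlap.
        per_instruction = max(fetch, data) + execute
    return n_instructions * per_instruction

print(cycles(1_000_000, shared_bus=True))   # 3000000
print(cycles(1_000_000, shared_bus=False))  # 2000000
```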
Figure 1.2 Energy efficiency (giga multiply-accumulate operations per joule) versus year for commercial digital processors, along with general trends in the feature size used in leading-edge systems. MAC operations are normalized to 32-bit computation. Koomey's law no longer holds as of about 2005. The energy efficiency asymptote creates a gap between processing capability and next-generation application needs. Note: statistical analysis shows that the difference in quality of the fit is statistically significant, with less than a 0.05% probability that the difference is due to chance. Copyright 2013 IEEE. Reprinted, with permission, from Marr et al., IEEE Trans. Very Large Scale Integr. (VLSI) Syst. 21, 147–151 (2013), Ref. [13].
At the device level, digital CMOS is reaching physical limits [14, 15]. As CMOS feature sizes scale below the 90 nm and 65 nm nodes, voltage, capacitance, and delay no longer scale at the well-defined rate described by Dennard's law [16]. This leads to a tradeoff between performance (when the transistor is on) and subthreshold leakage (when it is off). For example, as the gate oxide (which serves as an insulator between the gate and channel) is made as thin as possible (around 1.2 nm, roughly five silicon atoms thick) to increase the channel conductivity, electrons tunnel quantum mechanically [17, 18] between the gate and channel, increasing power consumption. The computational power efficiency of biological systems, on the other hand, is around 1 aJ per MAC operation, eight orders of magnitude better than the power efficiency wall for digital computation (100 pJ/MAC) [12, 13]. With the incipient rise of big data and complex systems, there is a widening gap (see Fig. 1.2) between the efficiency wall (supply) and next-generation application needs (demand).
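The quoted efficiency figures are mutually consistent, as a short check shows; this is plain arithmetic on the numbers cited above, not a new measurement.

```python
import math

DIGITAL_J_PER_MAC = 100e-12   # ~100 pJ/MAC digital efficiency wall [12, 13]
BIO_J_PER_MAC = 1e-18         # ~1 aJ/MAC quoted for biological systems

# Orders of magnitude separating the two regimes.
gap = DIGITAL_J_PER_MAC / BIO_J_PER_MAC
print(f"gap: {gap:.0e} ({math.log10(gap):.0f} orders of magnitude)")

# The wall expressed in the other units used in the text.
macs_per_joule = 1.0 / DIGITAL_J_PER_MAC
print(f"{macs_per_joule / 1e9:.0f} GMAC/J")   # -> 10, i.e., 10 GMAC/W
```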
At the architectural level, brain-inspired platforms approach information processing by focusing on substrates that physically exhibit the interconnected causal structures and dynamics of the complex systems present in our world, structures that are merely virtualized in conventional computing frameworks. Computational tools have been revolutionary in hypothesis testing and simulation, have led to the discovery of innumerable theories in science, and will be an indispensable aspect of a holistic approach to problems in big data and many-body physics; however, the huge gaps between information structures in observed systems and standard computing architectures motivate a need for alternative paradigms if computational abilities are to be brought to the growing class of problems associated with complex systems.
Over the last several years, there has been a deeply committed exploration of unconventional computing techniques, collectively termed neuromorphic engineering, to alleviate the device-level and system/architectural-level challenges faced by conventional computing platforms [12, 19–32]. Neuromorphic engineering aims to build machines that employ basic nervous system operations by bridging the physics of biology with engineering platforms (see Fig. 1.3), enhancing performance for applications that interact with natural environments, such as vision and speech [12]. As such, neuromorphic engineering is going through a very exciting period, as it promises processors that use low energies while integrating massive amounts of information. These neural-inspired systems are typified by a set of computational principles, including hybrid analog-digital signal representations, co-location of memory and processing, unsupervised statistical learning, and distributed representations of information.
Information representation can have a profound effect on information processing. In what is considered the third generation of neuromorphic electronics, approaches are typified by their use of spiking signals. Spiking is a sparse coding scheme recognized by the neuroscience community as a neural encoding strategy for information processing [33–38], and it has firm code-theoretic justifications [39–41]. Digital in amplitude but temporally analog, spike codes exhibit the expressiveness and efficiency of analog processing with the robustness of digital communication. This distri...
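As a concrete illustration of "digital in amplitude but temporally analog," the sketch below simulates a minimal leaky integrate-and-fire neuron in Python with NumPy. The parameters (time constant, threshold, input step) are illustrative assumptions, not a model taken from this book; the point is only that the output is a train of identical spikes whose timing, not amplitude, carries the information.

```python
import numpy as np

dt, tau, v_th = 1e-3, 20e-3, 1.0      # step (s), leak time constant, threshold
t = np.arange(0.0, 0.5, dt)
current = 1.2 * (t > 0.1)             # step input switched on at t = 0.1 s

v, spike_times = 0.0, []
for ti, i_in in zip(t, current):
    v += dt / tau * (i_in - v)        # leaky integration of the input
    if v >= v_th:                     # threshold crossing:
        spike_times.append(ti)        #   emit a spike (timing = information)
        v = 0.0                       #   and reset the membrane state

print(f"{len(spike_times)} spikes; first at t = {spike_times[0]*1e3:.0f} ms")
```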

Table of contents

  1. Cover
  2. Half Title
  3. Title Page
  4. Copyright Page
  5. Table of Contents
  6. Foreword
  7. Preface
  8. Contributors
  9. Biographies
  10. List of Figures
  11. List of Tables
  12. Chapter 1 Neuromorphic Engineering
  13. Chapter 2 Primer on Spike Processing and Excitability
  14. Chapter 3 Primer on Photonics
  15. Chapter 4 Spike Processing with SOA Dynamics
  16. Chapter 5 Excitable Laser for Unified Spike Processing
  17. Chapter 6 Semiconductor Photonic Devices as Excitable Processors
  18. Chapter 7 Silicon Photonics
  19. Chapter 8 Reconfigurable Analog Photonic Networks
  20. Chapter 9 Photonic Weight Banks
  21. Chapter 10 Processing-Network Node
  22. Chapter 11 System Architecture
  23. Chapter 12 Principles of Neural Network Learning
  24. Chapter 13 Photonic Reservoir Computing
  25. Chapter 14 Neuromorphic Platforms Comparison
  26. Index