Event-Based Neuromorphic Systems

About this ebook

Neuromorphic electronic engineering takes its inspiration from the functioning of nervous systems to build more power-efficient electronic sensors and processors. Event-based neuromorphic systems are inspired by the brain's efficient data-driven communication design, which is key to its quick responses and remarkable capabilities. This cross-disciplinary text establishes how circuit building blocks are combined in architectures to construct complete systems. These include vision and auditory sensors as well as neuronal processing and learning circuits that implement models of nervous systems.

Techniques for building multi-chip scalable systems are considered throughout the book, including methods for dealing with transistor mismatch, extensive discussions of communication and interfacing, and making systems that operate in the real world. The book also provides historical context that helps relate the architectures and circuits to each other and that guides readers to the extensive literature. Chapters are written by founding experts and have been extensively edited for overall coherence.

This pioneering text is an indispensable resource for practicing neuromorphic electronic engineers, advanced electrical engineering and computer science students, and researchers interested in neuromorphic systems.

Key features:

  • Summarises the latest design approaches, applications, and future challenges in the field of neuromorphic engineering.
  • Presents examples of practical applications of neuromorphic design principles.
  • Covers address-event communication, retinas, cochleas, locomotion, learning theory, neurons, synapses, floating gate circuits, hardware and software infrastructure, algorithms, and future challenges.
Language: English
Publisher: Wiley
Release date: December 24, 2014
ISBN: 9781118927625

    Book preview

    Event-Based Neuromorphic Systems - Shih-Chii Liu

    1

    Introduction

    The effortless ability of animal brains to engage with their world provides a constant challenge for technology. Despite vast progress in digital computer hardware, software, and system concepts, it remains true that brains far outperform technological computers across a wide spectrum of tasks, particularly when these are considered in the light of power consumption. For example, the honeybee demonstrates remarkable task, navigational, and social intelligence while foraging for nectar, and achieves this performance using less than a million neurons, burning less than a milliwatt, using ionic device physics with a bulk mobility that is about 10 million times lower than that of electronics. This performance is many orders of magnitude beyond that of current neuronal simulations or autonomous robots in both task competence and power efficiency. For example, a 2009 ‘cat-scale’ neural simulation on a supercomputer simulated 10¹³ synaptic connections, running 700 times slower than real time while burning about 2 MW (Ananthanarayanan et al. 2009); and the DARPA Grand Challenge robotic cars drove along a densely GPS-defined path, carrying over a kilowatt of sensing and computing power (Thrun et al. 2007).

    Although we do not yet grasp completely nature’s principles for generating intelligent behavior at such low cost, neuroscience has made substantial progress toward describing the components, connection architectures, and computational processes of brains. All of these are remarkably different from current technology. Processing is distributed across billions of elementary units, the neurons. Each neuron is wired to thousands of others, receiving input through specialized modifiable connections, the synapses. The neuron collects and transforms this input via its tree-like dendrites, and distributes its output via tree-like axons. Memory instantiated through the synaptic connections between neurons is co-localized with processing through their spatial arrangements and analog interactions on the neurons’ input dendritic trees. Synaptic plasticity is wonderfully complex, yet allows animals to retain important memories over a lifetime while learning on the time scale of milliseconds. The output axons convey asynchronous spike events to their many targets via complex arborizations. In the neocortex the majority of the targets are close to the source neuron, indicating that network processing is strongly localized, with relatively smaller bandwidth devoted to long-range integration.

    The various perceptual, cognitive, and behavioral functions of the brain are systematically organized across the space of the brain. Nevertheless at least some aspects of these various processes can be discerned within each specialized area, and their organization suggests a coalition of richly intercommunicating specialists. Overall then, the brain is characterized by vast numbers of processors, with asynchronous message passing on a vast point-to-point wired communication infrastructure. Constraints on the construction and maintenance of this wiring enforce a strategy of local collective specialization, with longer range coordination.

    For the past two decades neuromorphic engineers have grappled with the implementation of these principles in integrated circuits and systems. The opportunity of this challenge is the realization of a technology for computing that combines the organizing principles of the nervous system with the superior charge carrier mobility of electronics. This book provides some insights and many practical details into the ongoing work toward this goal. These results become ever more important for more mainstream computing, as limits on component density force ever more distributed processing models.

    The origin of this neuromorphic approach dates from the 1980s, when Carver Mead’s group at Caltech came to understand that they would have to emulate the brain’s style of communication if they were to emulate its style of computation. These early developments continued in a handful of laboratories around the world, but more recently development has expanded in both academic and industrial labs across North America, Europe, and Asia. The relevance of the neuromorphic approach to the broader challenges of computation is now clearly recognized (Hof 2014). Progress in neuromorphic methods has been facilitated by the strongly cooperative community of neuroscientists and engineers interested in this field. That cooperation has been promoted by practical workshops such as the Telluride Neuromorphic Cognition Engineering Workshop in the United States and the CapoCaccia Cognitive Neuromorphic Engineering Workshop in Europe.

    Event-Based Neuromorphic Systems arose from this community’s wish to disseminate state-of-the-art techniques for building neuromorphic electronic systems that sense, communicate, compute, and learn using asynchronous event-based communication. This book complements the introductory textbook (Liu et al. 2002) that explained the basic circuit building blocks for neuromorphic engineering systems. Event-Based Neuromorphic Systems now shows how those building blocks can be used to construct complete systems, with a primary focus on the hot field of event-based neuromorphic systems. The systems described in this book include sensors and neuronal processing circuits that implement models of nervous systems. Communication between the modules is based on the crucial asynchronous event-driven protocol called the address-event representation (AER), which transposes the communication of spike events on slow point-to-point axons into digital communication of small data packets on fast buses (see, for example, Chapter 2). The book as a whole describes the state of the art in the field of neuromorphic engineering, including the building blocks necessary for constructing complete neuromorphic chips and for solving the technological challenges necessary to make multi-chip scalable systems. A glance at the index shows the wide breadth of topics, for example, next to ‘Moore’s law’ is ‘motion artifact’ and next to ‘bistable synapse’ is ‘bootstrapped mirror.’

    The book is organized into two parts: Part I (Chapters 2–6) is accessible to readers from a wider range of backgrounds. Without delving exhaustively into the underlying technological details, it describes the range of AER communication architectures, AER sensors, and electronic neural models that are being constructed. Several of these chapters also include a historical tree that helps relate the architectures and circuits to each other, and that guides readers to the extensive literature. Part I also includes the largely theoretical Chapter 6 on learning in event-based systems.

    Part II (Chapters 7–16) is addressed to readers who intend to construct neuromorphic electronic systems. These readers are assumed to be familiar with transistor physics (particularly subthreshold operation) and, in general, to be comfortable reasoning about analog CMOS circuits. A mixed-signal CMOS designer should be comfortable reading these more specialized topics, while an application engineer would easily be able to follow the chapters on hardware and software infrastructure. This part of the book provides information about the various approaches used to construct the building blocks for the sensors and computational units modeling the nervous system, including details of silicon neurons, silicon synapses, silicon cochlea circuits, floating-gate circuits, and programmable digital bias generators. It also includes chapters on hardware and software communication infrastructure and algorithmic processing of event-based sensor output.

    The book concludes with Chapter 17, which considers differences between current computers and nervous systems in the ways that computational processing is implemented, and discusses the long-term route toward more cognitive neuromorphic systems.

    1.1 Origins and Historical Context

    Many of the authors of Event-Based Neuromorphic Systems were strongly influenced by Analog VLSI and Neural Systems (Mead 1989). Carver Mead’s book was the story of an extended effort to apply the subthreshold transistor operating region of CMOS electronics to realize a neural style and scale of computation. The book was written at a time when automatically compiled synchronous logic circuits were just beginning to dominate silicon production, a field that Mead was central in creating. Much like the famous Mead and Conway (1980) book on logic design, which was focused toward instilling a set of methodologies for practical realization of logic chips in digital designers, Analog VLSI and Neural Systems was focused on providing a set of organizing principles for neuromorphic designers. These ideas were centered around the name of Mead’s group at Caltech, the Physics of Computation group, and emphasized notions such as signal aggregation by current summing on wires, multiplication by summed exponentials, and relations between the fundamental Boltzmann physics of energy barriers and the physics of activation of voltage-sensitive nerve channels.

    However, at that time the field was so new that there were many practical aspects that did not work out in the long run, mainly because they suffered from transistor mismatch effects. So the early systems were good for demonstration but not for real-world application and mass production. The fact that current copying in CMOS is the least precise operation possible to implement in practice was barely mentioned in the book. This omission led to designs that worked ideally in simulation but functioned poorly in practice. In relation to Event-Based Neuromorphic Systems, the central importance of communication of information was not realized until after the book was completed, and so none of the systems described in the book had an AER output; rather, the analog information was scanned out serially from the systems described there. Even a later collection of chapters about Mead-lab systems (Mead and Ismail 1989) and Mead’s review paper in the Proceedings of the IEEE (Mead 1990) barely touched on communication aspects.

    Since 1989 there has been a continued drive to improve the technology of neuromorphic engineering. But to place the progress of neuromorphic engineering in context, we can consider logic, that is, digital chip design. Around 1990, a high-end personal computer had about 8 MB of RAM and about 25 MHz clock speed (one of the authors remembers being a proud owner of a personal CAD station that could be used to work on chip design at home). As of 2013, a state-of-the-art personal computer has about 16 GB of memory and 3 GHz clock speed. So in about 20 years we have seen approximately a 1000-fold increase in memory capacity and a 100-fold increase in clock speed. These of course are reflections of Moore’s law and investments of hundreds of billions of dollars. But the basic organizing principles used in computation have hardly changed at all. Most advances have come about because of the availability of more raw memory and computing power, not by fundamental advances in architectures.

    During this period the neuromorphic engineering community has expanded considerably from its origins at Caltech, Johns Hopkins, and EPFL (Figure 1.1). At first only a few modest, rather unconvincing lab prototypes could be shown in a couple of labs, and these barely made it off the lab bench. But, after 20 years, neuromorphic engineering has scaled the number of spiking neurons in a system from a few hundred up to about a million (Chapter 16), neuromorphic sensors are available as high-performance computer peripherals (Liu and Delbruck 2010), and these components can be used by people at neuromorphic engineering workshops who know little about transistor-level circuit design (Cap n.d.; Tel n.d.). The literature shows a steady exponential growth in papers with the keywords ‘neuromorphic’ or ‘address-event representation’ (Figure 1.2), which is a higher growth rate than for the term ‘synchronous logic.’ Although the slopes of these exponentials tend to flatten over time, the number of papers mentioning ‘address-event representation’ has increased for the last 5 years at the rate of about 16% per year. If this growth is considered as resulting from perhaps 15 labs working for an average of 15 years at an investment of $200,000 per year, then this progress has been achieved at a total financial investment of perhaps 50 million dollars, a tiny fraction of the hundreds of billions spent on developing conventional electronics during this period.

    Figure 1.1 Maps of the neuromorphic electrical engineering community in 1990 (left) and 2013 (right, © 2013 Google)

    Figure 1.2 Growth of literature over time. From Google Scholar

    1.2 Building Useful Neuromorphic Systems

    To be adopted as mainstream technology, and to enjoy the financial support and competitive environment of an active industrial market, neuromorphic systems must meet some obvious requirements. They must function robustly and repeatably across chips, across temperature, and with noisy power supplies; they must have interfaces that allow easy development of applications; and they need to be portable for use in the field without specialized equipment. Event-Based Neuromorphic Systems distills the knowledge of the required technologies built up over the past 20 years of effort.

    These features of a neuromorphic electronic system are necessary but not sufficient. A neuromorphic system must outperform conventional technology, or at least justify the investment of effort through the belief that it could outperform conventional approaches when scaled up, or when silicon technology can no longer scale down to smaller feature sizes or power supply voltages. This last point has been a weakness: proposals have not convincingly shown that the neuromorphic approach is better than simply scaling logic and making it more parallel. Many grants have been funded nonetheless, even though the proposals are too vague to be very credible. One could say that funders are simply hopeful; the only alternative on offer is a new device technology (e.g., graphene) to enable yet more clock and feature size scaling.

    The scaling problem brings up the importance of communication: scaling up systems requires growing not only smaller in feature size and cost but also larger in system capabilities. To be neuromorphic, these systems must emulate something like the hybrid data-driven computation and communication architecture used in brains, with their massive numbers of connections. One can see evidence for this requirement in the direction of conventional electronics as well, with logic systems becoming more parallel and distributed. This requirement for communication is why the neuromorphic community has focused its efforts on event-based architectures, and it is why this book is aimed at teaching the state-of-the-art techniques for building such systems. Chapter 2 begins by outlining the principles of event-based communication architectures for neuromorphic systems.

    References

    Ananthanarayanan R, Esser SK, Simon HD, and Modha DS. 2009. The cat is out of the bag: cortical simulations with 10⁹ neurons, 10¹³ synapses. Proceedings of the Conference on High Performance Computing Networking, Storage and Analysis, Portland, OR, November 14–20, 2009. IEEE. pp. 1–12.

    Cap. n.d. Capo Caccia Cognitive Neuromorphic Engineering Workshop, http://capocaccia.ethz.ch/ (accessed July 16, 2014).

    Hof RD. 2014. Qualcomm’s neuromorphic chips could make robots and phones more astute about the world. MIT Technology Review. http://www.technologyreview.com/featuredstory/526506/neuromorphic-chips/.

    Liu SC and Delbruck T. 2010. Neuromorphic sensory systems. Curr. Opin. Neurobiol. 20(3), 288–295.

    Liu SC, Kramer J, Indiveri G, Delbrück T, and Douglas R. 2002. Analog VLSI: Circuits and Principles. MIT Press.

    Mead CA. 1989. Analog VLSI and Neural Systems. Addison-Wesley, Reading, MA.

    Mead CA. 1990. Neuromorphic electronic systems. Proc. IEEE 78(10), 1629–1636.

    Mead CA and Conway L. 1980. Introduction to VLSI Systems. Addison-Wesley, Reading, MA.

    Mead CA and Ismail M (eds). 1989. Analog VLSI Implementation of Neural Systems. Kluwer Academic Publishers, Norwell, MA.

    Tel. n.d. Telluride Neuromorphic Cognition Engineering Workshop, www.ine-web.org/ (accessed July 16, 2014).

    Thrun S, Montemerlo M, Dahlkamp H, Stavens D, Aron A, Diebel J, Fong P, Gale J, Halpenny M, Hoffmann G, Lau K, Oakley C, Palatucci M, Pratt V, Stang P, Strohband S, Dupont C, Jendrossek LE, Koelen C, Markey C, Rummel C, Niekerk J, Jensen E, Alessandrini P, Bradski G, Davies B, Ettinger S, Kaehler A, Nefian A, and Mahoney P. 2007. Stanley: the robot that won the DARPA grand challenge. In: The 2005 DARPA Grand Challenge (eds Buehler M, Iagnemma K, and Singh S). Vol. 36: Springer Tracts in Advanced Robotics. Springer, Berlin Heidelberg. pp. 1–43.

    Part I

    Understanding Neuromorphic Systems

    2

    Communication

    This chapter focuses on the fundamentals of communication in event-based neuromorphic electronic systems. Overall considerations on requirements for communication and circuit- versus packet-switched systems are followed by an introduction to Address-Event Representation (AER), asynchronous handshake protocols, address encoders, and address decoders. There follows a section on considerations regarding trade-offs in the design of AER links, and a section describing the details of the implementation of such links and how these have evolved.

    2.1 Introduction

    In evolution, the great expansion of computational power of brains seems to be implemented by expansion of cortex, a sheet of tissue surrounding older structures. Cortex is a layered structure divided into what is known as the gray matter and white matter. Figure 2.1 shows a cross section of a small chunk of visual cortex of a cat brain. The many types of neurons, of which there are about 10⁵/mm³, make long-range connections through axons (their output branches) in the white matter (with a wiring density of 9 m of axon per mm³) while the gray matter is composed mostly of dendrites (the neurons’ input branches) with a wiring density of an amazing 4 km/mm³ (Braitenberg and Schüz 1991). The long-range white matter connections occupy much more volume because they are myelinated, that is, thickly sheathed in a material called myelin. The myelin acts as an insulator, reducing the capacitance and increasing the resistance to the outside of the cell to reduce decay of the action potential impulses by which the neurons communicate, as these impulses travel along the axons.
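    The effect of myelin can be summarized with the standard passive-cable relations. These are textbook physics rather than formulas from this chapter, and the symbols below are conventional cable-theory notation, not quantities defined in the text:

```latex
% Standard passive-cable relations (textbook physics, not from the chapter):
% a steady-state voltage along the cable decays exponentially,
V(x) = V_0 \, e^{-x/\lambda}, \qquad \lambda = \sqrt{\frac{r_m}{r_a}}
% where r_m is the membrane resistance and r_a the axial resistance per
% unit length. Myelin raises r_m and lowers the membrane capacitance, so
% \lambda grows and the impulse decays less before being regenerated.
```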

    Figure 2.1 Cross-sectional view of layered cortical cells with their dendrites within the gray matter and axons projecting out to white matter. Only a few cells are shown, but gray matter is completely filled with neurons and some other supporting cells. Adapted from Binzegger et al. (2004). Reproduced with permission of the Society for Neuroscience

    Action potentials, called ‘spikes’ after the shapes of their waveforms, are stereotypical whether they travel along axons in white matter or unmyelinated axons in gray matter (Gerstner and Kistler 2002). Although their amplitudes, durations (around 1 ms), and precise shapes can vary somewhat, they can be treated as all-or-none, essentially digital events.

    Neuromorphic electronic systems must embed complex networks of neurons, axons, and synapses, which nature builds in three dimensions (3D), into an essentially 2D silicon substrate. Unlike standard digital logic where the output of a gate, on average, is connected to the input of three to four other gates, a neuron typically delivers a spike to thousands of destinations. Hence there is a fundamental physical mismatch between the logic-optimized, 2D silicon technology and the interconnectivity requirements for implementing biological networks of neurons. This mismatch is overcome using time-multiplexed communication.
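    To make the time-multiplexing idea concrete, here is a minimal software sketch of address-based fan-out. Everything in it (the RoutingTable class, the deliver function, the dictionary-based design) is our illustrative assumption, not a circuit from the book: one spike, sent once as a source address over a shared channel, is expanded to thousands of destinations by a lookup.

```python
# Minimal sketch of spike fan-out over a time-multiplexed channel.
# All names (RoutingTable, deliver, ...) are illustrative assumptions.

class RoutingTable:
    """Maps a source neuron address to its many destination addresses."""
    def __init__(self):
        self.fanout = {}  # source address -> list of destination addresses

    def connect(self, src, dsts):
        self.fanout.setdefault(src, []).extend(dsts)

    def route(self, src):
        # One event on the shared channel expands to thousands of deliveries.
        return self.fanout.get(src, [])

table = RoutingTable()
table.connect(src=42, dsts=range(1000, 4000))  # one neuron, 3000 targets

def deliver(event_address):
    for dst in table.route(event_address):
        pass  # drive the synapse circuit at address `dst`

deliver(42)  # a single spike, sent once, reaches all 3000 destinations
```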

    Modern digital gates have switching delays that are on the order of tens of picoseconds, which is many orders of magnitude faster than the time constant of the output spiking activity in a neuron. Since the communication fabric is only carrying spikes from one neuron to another, the high connectivity problem as well as the 3D/2D mismatch can be resolved by using a time-multiplexed fabric for interneuron communication. Such a network is very similar to a packet-switched communication network used in on-chip, chip-to-chip, and system-to-system communication (e.g., the Internet) today.

    Communication networks can be broadly divided into two major categories: circuit switched and packet switched. In a circuit-switched network, two end points communicate by setting up a virtual circuit, a path through the network that is dedicated to the communication between the two end points. Once the path is set up, data can be communicated over it. Once the communication ends, the virtual circuit is destroyed and the hardware resources used by the circuit are released for use by another virtual circuit. This approach was used in the original telephone network. Communication in a circuit-switched network has a setup/teardown cost for the virtual circuit, after which the dedicated communication path can be used with very low overhead. Such networks are efficient when the communication between two end points persists for a very long duration.

    Packet-switched networks, on the other hand, operate by time-multiplexing individual segments of the network. Instead of creating an end-to-end path up front, each item being communicated (a packet) requests access to shared resources on the fly. Hence, each packet must contain sufficient information to allow each step in the communication network to determine the appropriate next step in the path taken by the packet. Large messages in packet-switched networks are typically packetized, that is, converted into a sequence of packets, where each packet contains replicated path information. However, unlike the circuit-switched case, there is no setup overhead before the first packet can be sent through the network. Such networks are efficient when the communication between two end points occurs in bursts of small amounts of information.

    A neuron spiking event is only used to convey a small amount of information in typical neuromorphic electronic systems. In the extreme case, the only information conveyed by a spike is the fact that the spike occurred at all – that is, the time at which the spike occurred relative to other spikes in the system. A very sophisticated model might attempt to model the spike waveform directly, and convey a certain number of bits required to reconstruct the waveform to a certain degree of precision. Most large-scale neuromorphic electronic systems model spikes with a small number of bits of precision. Hence interneuron communication is typically implemented using packet-switched networks as they use hardware resources more effectively when small amounts of information are exchanged.
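    Seen this way, an address event is the degenerate, minimal packet. The sketch below makes the contrast explicit; the class and field names are our illustrative assumptions, not a protocol defined in the book.

```python
from dataclasses import dataclass

# A generic packet replicates routing information in every segment sent;
# an address event is the minimal case: the neuron's address is the whole
# packet, and arrival time carries the rest. Names here are assumptions.

@dataclass(frozen=True)
class Packet:
    destination: int   # routing information examined at every hop
    payload: bytes     # the data being carried

@dataclass(frozen=True)
class AddressEvent:
    address: int       # identity of the spiking neuron, and nothing else;
                       # 'when it arrived' is the implicit second field

spike = AddressEvent(address=1234)   # a complete 'message' in AER
```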

    Representation of time is a nontrivial task, as temporal resolution is an important factor in the design of communication networks for spiking neurons. There are two major approaches to representing time. (i) Discrete time: In this approach, time is discretized into global ticks, and a communication network is designed to deliver time-stamped spikes at the appropriate global time step. (ii) Continuous time: In this approach, the entire system operates in continuous time, and spike delays are also modeled with continuous time electronics. This is a challenging design problem, and practitioners of this approach typically make use of the following set of observations:

      • Neuron spiking rates are very low (tens of Hz) compared to the speed of digital electronics (GHz). This means that a fast communication fabric operating in the tens or hundreds of MHz regime would be idle almost all the time.
      • Axonal delays are also on the order of milliseconds, compared to the switching delay of gates (tens of picoseconds).
      • A very small (< 0.1%) variation in spike arrival time should not have a significant impact on overall system behavior, because biological systems are known to be very robust and should be able to adapt to a small variation in spike arrival time.

    Combining these three observations leads to the conclusion that we can ignore the uncertainty in spike delivery time if it can be kept on the order of microseconds, since the dominant delay term is the one introduced by the time constant of the neuron itself (tens of milliseconds) or the axonal delay model (milliseconds). For this approach to be successful, it is important for the communication fabric to be over-engineered so that the network is never heavily loaded. This philosophy is sometimes described by stating that ‘time represents itself’: the arrival time of a spike itself represents the time at which the spike should be delivered. This relies on the real-time behavior of spiking networks and their silicon implementation.
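    The two timing disciplines can be caricatured in a few lines of code. This is a sketch under our own naming assumptions (DT_TICK, the heap-based scheduler); it is not an implementation from the chapter.

```python
import heapq
import time

# Caricature of the two timing disciplines. All names are our assumptions.

def deliver(address):
    print(f"spike from neuron {address} delivered at {time.time():.6f}")

# (i) Discrete time: events carry explicit timestamps; a scheduler
# releases them at the matching global tick.
DT_TICK = 0.001                 # 1 ms global tick
event_queue = []                # min-heap of (timestamp, neuron_address)

def send_discrete(timestamp, address):
    heapq.heappush(event_queue, (timestamp, address))

def run_tick(now):
    while event_queue and event_queue[0][0] <= now:
        _, address = heapq.heappop(event_queue)
        deliver(address)

send_discrete(0.002, address=7)
run_tick(now=2 * DT_TICK)       # the 2 ms tick releases the event

# (ii) Continuous time: 'time represents itself'. No timestamp travels
# with the event; microsecond-scale link jitter is simply accepted, since
# neuronal time constants are thousands of times longer.
def send_continuous(address):
    deliver(address)
```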

    2.2 Address-Event Representation

    Typical neuron firing rates are in the regime of 1–10 Hz. Hence, thousands or even millions of neurons combined have spiking rates in the kHz to low MHz regime. This data rate can be easily supported by modern digital systems. Consequently, instead of creating a network of individual neurons, neuromorphic electronic systems have end points that correspond to clusters of neurons, where a cluster can correspond to a specific processing layer. The circuits used to multiplex communication for a cluster of neurons onto an individual communication channel are referred to as address-event representation (AER) circuits. AER was first proposed in 1991 by Mead’s lab at Caltech (Lazzaro et al. 1993; Lazzaro and Wawrzynek 1995; Mahowald 1992, 1994; Sivilotti 1991), and has been used since then by a wide community of hardware engineers.
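    The arithmetic behind these regimes is easy to check. The following back-of-envelope sketch is our own; the per-neuron rates are the figures quoted above.

```python
# Back-of-envelope check of the aggregate rates quoted above. The function
# is our own illustration; per-neuron rates (1-10 Hz) come from the text.

def aggregate_rate_hz(n_neurons, mean_rate_hz):
    """Total event rate a shared AER channel must sustain."""
    return n_neurons * mean_rate_hz

print(aggregate_rate_hz(1_000, 10))      # 10_000 events/s: kHz regime
print(aggregate_rate_hz(1_000_000, 1))   # 1_000_000 events/s: low MHz
# Both sit comfortably below the bandwidth of a modern digital bus.
```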

    The function of AER circuits is to provide multiplexing/demultiplexing functionality for spikes that are generated by/delivered to an array of individual neurons. Figure 2.2 shows an example of how the spiking behavior of four neurons is encoded onto a single output channel.

    Figure 2.2 Multiplexing different neurons onto a single communication channel
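    The multiplexing of Figure 2.2 can be caricatured in software as follows. The class, its FIFO arbitration, and the request/acknowledge method names are our assumptions; real AER encoders are asynchronous arbiter circuits, described later in this chapter.

```python
# Behavioral caricature of an AER encoder: asynchronous spikes from a
# small neuron array are arbitrated onto one shared output channel.
# Names and the FIFO arbitration are assumptions, not the book's circuits.

class AEREncoder:
    def __init__(self, n_neurons):
        self.n_neurons = n_neurons
        self.pending = []          # spike requests awaiting the channel

    def request(self, neuron_address):
        """A neuron asserts its request line when it spikes."""
        assert 0 <= neuron_address < self.n_neurons
        self.pending.append(neuron_address)

    def acknowledge(self):
        """The arbiter grants one request; its address goes on the bus."""
        return self.pending.pop(0) if self.pending else None

enc = AEREncoder(n_neurons=4)
for addr in (2, 0, 3):             # three of the four neurons spike
    enc.request(addr)
print([enc.acknowledge() for _ in range(3)])   # [2, 0, 3]
```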

    Spikes are generated asynchronously, and the AER circuits accept spikes as they are generated and multiplex them onto a single output channel. The sequence of values produced on the output channel indicates which neuron fired. The time at which the neuron identifier is generated corresponds to the time at which the neuron produced a spike, plus a small delay due to the encoding process. As long as spikes are sufficiently separated in time, the encoding process ensures that the neuron identifiers are correctly
