Retinal Computation

Ebook, 645 pages
About this ebook

Retinal Computation summarizes current progress in defining the computations performed by the retina, including the synaptic and circuit mechanisms by which they are implemented. Each chapter focuses on a single retinal computation, defining the computation and its neuroethological purpose and reviewing the available information on its known and unknown neuronal mechanisms. All chapters contain end-of-chapter questions associated with a landmark paper, as well as programming exercises. This book is written for advanced graduate students, researchers, and ophthalmologists interested in vision science or the computational neuroscience of sensory systems.

While the typical textbook describes the retina as a biological video camera, the real retina is the world’s most complex image-processing machine. As part of the central nervous system, the retina converts patterns of light at the input into a rich palette of representations at the output. The parallel streams of information in the optic nerve encode features like color, contrast, orientation of edges, and direction of motion. Image processing in the retina is undeniably complex, but because the retina is one of the most accessible parts of the central nervous system, tools exist to study its circuits with unprecedented precision. This book provides a practical guide and resource on the current state of the field of retinal computation.

  • Provides a practical guide on the field of retinal computation
  • Summarizes and clearly explains important topics such as luminance, contrast, spatial features, motion and other computations
  • Contains discussion questions, a landmark paper, and programming exercises within each chapter
Language: English
Release date: Aug 7, 2021
ISBN: 9780128231777
Author

Greg Schwartz

Gregory William Schwartz is the Derrick T. Vail Associate Professor in the Departments of Ophthalmology and Physiology of the Feinberg School of Medicine at Northwestern University, Chicago, IL, USA. His lab works on computation in the mouse retina and early visual system at various levels, including neuronal biophysics, synapses and circuits, retina-to-brain connectivity, and innate visual behavior. In addition to mentoring and teaching topics related to the retina, Dr. Schwartz directs graduate courses on statistics and data science in neuroscience and scientific communication.


    Book preview

    Retinal Computation - Greg Schwartz

    Introduction

    When I told a friend of mine that I was writing a book on retinal computation, his sincere reply was, “I didn’t know that the retina does math!” My friend is a successful and intelligent engineer, but not an academic scientist. His reply got me thinking about how I could explain succinctly that the retina does, in fact, do math, and why I find how and why it does that math important enough to write a book about it.

    Some of you, like my friend, are approaching the subject of retinal computation with little prior knowledge. Others were making foundational discoveries before I was born, let alone when I entered the field less than 20 years ago. But I hope that everyone who reads this book will learn something new, as I do every day studying the retina, and share some of my fascination and wonder at its amazing ability to make us feel as if we have a high-resolution video camera with a pipeline right to conscious perception when what we really have is a bunch of noisy neurons doing math.

    At my public PhD defense, in front of an audience mostly of non-neuroscientists, I tried to sum it up by saying, “The retina is not a video camera; it is the world’s most advanced image-processing machine.” But that is not quite right. The camera on my iPhone has more pixels, the photomultiplier on my microscope has far superior temporal resolution, and algorithms used by security agencies around the world can recognize millions of faces, far exceeding the abilities of any human. What makes the retina powerful is not easily quantified as the smallest, the fastest, or the most.

    The word flexibility comes closer to capturing the right concept. Flexibility in the brain is well documented. It is the property that allows us to learn, adapt, and grow over the course of weeks, months, and years. Synaptic connectivity changes every time we learn a new skill, and entire regions of the brain can be rewired and repurposed after injury. But in the retina, flexibility means something different. Synaptic plasticity and rewiring are minimal. The retina’s circuits must represent all the relevant information in the visual world—brightness, contrast, color, motion, texture, threat, and more—all simultaneously in parallel. The same circuits must work in the moonlight and under the noon sun when there are a billion times as many photons. They must enable us to read street signs with small letters while at the same time judging the speed of nearby vehicles in the periphery. The biological substrate for all that computation is highly constrained to be thin, transparent, and must transmit all its information through the optic nerve’s bottleneck. Human-made imaging devices do not come close to that kind of computational flexibility despite being made of faster and more reliable electronic components than the retina’s biological components. And Apple has not made an iPhone camera with an integrated image processor that is transparent and less than 200 μm thick, at least not yet!

    Purpose and scope

    This is not a textbook designed to summarize the enormous body of literature on the anatomy and physiology of the retina. Several other excellent books serve that purpose (Dowling, 1987; Kolb et al., 2011). The perspective of this book is that of computation. Each chapter covers a different computation and broadly asks four questions.

    1. Why did the retina evolve to perform this computation?

    2. How do the signals transmitted to the brain carry information about this particular feature of the visual world?

    3. What mechanisms in retinal circuits implement the computation?

    4. How are the signals from the retina for this computation used to drive behavior?

    The degree to which the answers to each of these questions are known varies widely, so in some chapters, instead of providing answers, I will pose more specific versions of these questions as topics for future work.

    For graduate courses

    I assume that the reader has a basic understanding of cellular and systems neuroscience and retinal anatomy and physiology at the early graduate level. While I hope this book will appeal to readers at many different career stages, from advanced undergraduates to established investigators, and in disparate fields, from psychophysics to sensory biology outside vision, I wrote it with neuroscience and vision science graduate students in mind. The chapters are independent modules, and each chapter contains discussion questions from one to three landmark papers that could be assigned in a seminar-style course. Each chapter also contains a programming exercise in MATLAB® for students who want to develop an intuition for retinal computation by implementing and exploring simulations. The content could fill a full-semester graduate course, or individual chapters could be incorporated into a broader course on sensory computation or visual processing in the retina and the brain.

    Model species

    Retinal neurobiology is somewhat unique in sensory neuroscience in its use of many different model species. Prominent vertebrate species in the field over the years have included salamander, goldfish, mudpuppy, frog, zebrafish, chicken, mouse, rabbit, guinea pig, ground squirrel, ferret, tree shrew, macaque, and marmoset. While a proper comparative study including the visual ecology of each of these animals is of great importance (Baden et al., 2020) and fascinating to me personally, it is unfortunately beyond the scope of this book. There is also an entire field of study on invertebrate vision, including a detailed understanding of several retinal computations that I will not cover here—perhaps in a future edition! My own work has been on salamander, mouse, and nonhuman primate retina, and many of the examples will be from these species. In particular, mouse has become the model of choice for retinal neurobiology over the last decade because of the vast knowledge of cell types and genetic tools available. Fortunately, where they have been studied, most of the principles of retinal computation are well conserved among vertebrates. I will highlight some exceptions to that rule, like in color vision, where the visual niche of different animals has placed very different constraints on the computation.

    Guided by the scientific community that welcomed me

    I have thoroughly enjoyed my career so far as a retinal neuroscientist in large part because of the extraordinarily thoughtful and caring community of scientists that makes up our discipline. I have tried to thank many of them personally, but I want to thank them collectively here. At every stage in my journey, including the current one, I have benefited from the insights, wisdom, advice, collegiality, openness, and trust of many of the people who make up the foundation of modern retinal neuroscience. Table 1 lists my colleagues who formally reviewed chapters of this book, but the list of additional people with whom I discussed aspects of what I wrote would comprise the majority of the field. Part of the joy of these professional relationships, of course, is that there are many voices to tell me when I am wrong or what I missed. And our understanding of retinal computation is evolving so rapidly that I predict, and hope, that parts of this book will be out of date in the next decade. For both of these reasons, I encourage feedback from readers. I am already taking notes for suggestions for future editions.

    Table 1

    Acknowledgments

    I thank everyone on the Elsevier team who gave me this opportunity that turned into an extremely rewarding way to spend a largely locked-down 2020 and Isabel Romero Calvo for the beautiful chapter title and cover illustrations. I also thank my formal mentors in the field—Michael Berry and Fred Rieke—as well as the many informal mentors and colleagues who have welcomed me into retinal neuroscience and have created an intellectually exciting, open, and supportive community. My colleagues who were kind enough to review chapters are listed in Table 1, but many others contributed through conversations. Some of the chapters were cowritten with major contributions from my trainees who are credited on those chapters, and Chapter 11 (Direction selectivity) was primarily written by my colleagues Benjamin L. Murphy-Baum and Gautam Awatramani. These contributions were vital and much appreciated. I am grateful to the members of my lab who were endlessly patient with me in accommodating this time commitment and to my department chair, Nicholas Volpe, for his tremendous support and belief in me. Finally, and most importantly, I thank my family—Sarah, Nathan, and Erica—for their love, support, and encouragement throughout this journey.

    References

    Baden T., Euler T., Berens P. Understanding the retinal basis of vision across species. Nature Reviews Neuroscience. 2020;21(1):5–20.

    Dowling J.E. The Retina: An Approachable Part of the Brain. Cambridge, MA: Harvard University Press; 1987.

    Kolb H., Nelson R., Fernandez E., Jones B.W. Webvision—The Organization of the Retina and Visual System. https://webvision.med.utah.edu/. 2011.

    Part 1

    Luminance

    Chapter 1: Photon detection

    Gregory William Schwartz a,b,c

    a Departments of Ophthalmology and Physiology, Feinberg School of Medicine, Northwestern University, Chicago, IL, United States

    b Department of Neurobiology, Weinberg College of Arts and Sciences, Northwestern University, Chicago, IL, United States

    c Northwestern Interdepartmental Neuroscience Program (NUIN), Northwestern University, Chicago, IL, United States

    Abstract

    Photon detection can be considered the most elementary retinal computation, but achieving exquisite sensitivity to dim lights using the hardware of biology is no simple task; it requires specializations from molecules to circuits to behavior. Amazingly, our ability to detect light approaches the physical limits set by its quantization into discrete photons. This is one of the most extraordinary examples in biology of evolution’s power to optimize computation. As one of the best-studied retinal computations, much is known about the components of photon detection from the level of the molecules involved in phototransduction up to the neurons and synapses of the downstream circuit. Decades of behavioral studies, including some brilliant classical works, have made quantitative links between photons captured by the retina and perception. This chapter will review the signal and noise sources for photon detection at several processing stages in the retina and discuss the neural mechanisms that enable the signal to be amplified and detected over the noise. I will bookend this discussion of neural mechanisms with accounts of the behavioral literature in both humans and animal models.

    Keywords

    Photon detection; Neural mechanism; Signal and noise; Phototransduction; Behavioral studies; Physical limits


    In the dimmest conditions, the visual world is no longer continuous but is instead made up of discrete, sparse photons. Nocturnal hunters like owls rely on their keen vision in these extreme conditions. Remarkably, retinal circuits are optimized to detect sparse arrivals of single photons amidst the dark and mysterious background.

    Photon detection can be considered the most elementary retinal computation, but as we will see, achieving exquisite sensitivity to dim lights using the hardware of biology is no simple task; it requires specializations from molecules to circuits to behavior. Amazingly, our ability to detect light approaches the physical limits set by its quantization into discrete photons. This is one of the most extraordinary examples in biology of evolution’s power to optimize computation. The adaptive advantage of maximizing photon detection is clear; an animal that can detect the fewest photons gains information about the world that remains invisible to its predators, prey, and competitors.

    As one of the best-studied retinal computations, much is known about the components of photon detection from the level of the molecules involved in phototransduction up to the neurons and synapses of the downstream circuit. Decades of behavioral studies, including some brilliant classical works, have made quantitative links between photons captured by the retina and perception. The long and accomplished history of the study of photon detection makes summarizing it in a single chapter difficult, and I will first point the reader to several reviews by some of the researchers who made seminal contributions to the field (Donner, 1992; Field et al., 2005; Kiani et al., 2020; Nelson, 2017; Rieke and Baylor, 1998b).

    My goal is not to cover all the details of the steps between photon arrival and retinal ganglion cell (RGC) spikes, but rather to examine the signal and noise sources at several processing stages and discuss the neural mechanisms that enable the signal to be amplified and detected over the noise. I will bookend this discussion of neural mechanisms with accounts of the behavioral literature. Insights from behavioral studies in humans provided the analytical framework for studying photon detection decades before electrophysiological measurements in the retina caught up. Our contemporary understanding of photon detection in the retina has inspired a new set of behavioral questions that are increasingly accessible with the rapid advance of technologies in animal models.

    How many photons does it take to create a percept?

    This question has captivated people for over a century. Only two years after Einstein proposed that light is made up of quanta, the first psychophysics experiments claimed that humans could detect 50 quanta arriving at the cornea (von Kries and Eyster, 1907). But it was not until 1942 that Hecht, Shlaer, and Pirenne made a breakthrough, demonstrating that as few as 5 photons absorbed by the retina can drive perception and that rods must be single-photon detectors (Hecht et al., 1942).

    The experimental paradigm used by Hecht et al. was essentially the same as in all previous work on the absolute sensitivity of vision, and it is about as simple an experiment as you will find in neuroscience. Observers (in this case, the three authors) sat in a dark room and watched brief, small, very dim flashes, reporting on each trial whether or not they had seen a flash. What made this work revolutionary was not the data but rather the authors’ insight into its analysis.

    As in previous studies, Hecht et al. could use optical measurements to estimate the number of photons from each flash reaching the cornea, but they had no way to measure how many of those photons were ultimately absorbed by the rods. So, they took an approach to the problem based on signal detection theory. They supposed that the observer acted like a perfect threshold detector, responding that he had seen a flash when the number of counted photons exceeded a fixed threshold. The emission of photons from a light source is a Poisson process, so even with identical settings on their lightbulb, the number of photons reaching the observer, or his rods, should follow a Poisson distribution; the mean photon count should be equal to its variance. As the number of photons increases, the Poisson distribution becomes more and more Gaussian, so its shape varies very little. In the limit of small counts, however, the shape of the Poisson distribution changes substantially with each increment of its mean, and this fact proved essential.

    Returning to the observer as a threshold detector, Hecht et al. proposed that a seen flash is one that emits n or more photons, where n is the threshold. For the photon detection task, the probability of seeing a flash is then equivalent to the Poisson cumulative distribution function (CDF). Before they could compare their frequency-of-seeing data to Poisson CDFs, however, Hecht et al. needed one more insight. Still lacking a measurement of the fraction of photons incident on the cornea that was absorbed by the photoreceptors, they could not plot their data on an absolute light intensity axis. Instead, they used a logarithmic x-axis in which changes in this unknown factor became translations along x without distorting the shape of the curve (Fig. 1.1A). In the end, the act of estimating photon detection thresholds was reduced to finding which Poisson CDF with a logarithmic x scale best matched their data. This single-parameter fit yielded n = 6, 7, and 5 photons for the three observers (Fig. 1.1B). From anatomical studies of the human retina, the authors estimated that the flashed spots in their experiment covered an area of ~ 500 rods. Therefore, they concluded that humans can perceive as few as 5 photons among a pool of 500 rods and that rods themselves must be single-photon detectors.
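    The fitting logic above can be sketched in a few lines. The book’s exercises use MATLAB®, but here is an illustrative Python version; `poisson_sf`, `frequency_of_seeing`, and the absorption fraction `alpha` are names introduced for exposition, the last being precisely the quantity Hecht et al. could not measure.

```python
import math

def poisson_sf(n, mu):
    """Probability that a Poisson(mu) photon count is >= n: the chance an
    ideal threshold observer with threshold n reports seeing the flash."""
    # 1 - CDF(n - 1), computed by summing the first n pmf terms
    return 1.0 - sum(math.exp(-mu) * mu**k / math.factorial(k) for k in range(n))

def frequency_of_seeing(n, corneal_photons, alpha):
    """Frequency of seeing when an unknown fraction alpha of corneal
    photons is absorbed by the rods."""
    return poisson_sf(n, alpha * corneal_photons)

# Key insight: on a logarithmic intensity axis, the unknown alpha only
# translates the curve; its shape depends solely on the threshold n.
# Halving the intensity while doubling alpha leaves the curve unchanged:
p1 = frequency_of_seeing(6, 100.0, 0.06)
p2 = frequency_of_seeing(6, 50.0, 0.12)
```

    Because only n changes the curve’s shape, fitting frequency-of-seeing data on a log axis reduces to the single-parameter fit described above.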

    Fig. 1.1

    Fig. 1.1 The frequency-of-seeing experiment by Hecht, Shlaer, and Pirenne.

    (A) Cumulative Poisson distributions plot the probability of absorbing "n" or more photons for n = 1–9 as a function of the mean number of photons per flash plotted with a logarithmic x-axis. (B) Psychometric curves for the frequency-of-seeing experiment for the three subjects (authors) in the study plotted as in (A). The best fit curve from (A) is noted for each subject. (From Hecht et al. (1942).)

    Increment threshold and dark light

    The experiment by Hecht, Shlaer, and Pirenne was performed in absolute darkness, but how does the problem of photon detection change in the presence of background light? In this situation, the subject must detect a flash containing ΔI photons on a background of I photons. Horace Barlow (1957b) performed such an experiment and, like Hecht, Shlaer, and Pirenne, used the frameworks of signal detection theory and Poisson statistics to analyze the data. Barlow plotted the intensity of the dimmest detectable flash (ΔI) against the intensity of the background (I), both on logarithmic axes (Fig. 1.2). The resulting curve, called the increment threshold curve, has three distinct regions when the test spot is small and brief. I will discuss these regions from right to left, beginning with the highest-luminance portion of the curve.

    Fig. 1.2

    Fig. 1.2 The threshold increment detection curve in human perception.

    Minimum detectable threshold in luminance (y-axis) versus the mean background luminance (x-axis) both on logarithmic scale. Labeled lines in the bottom right quadrant show predictions from fluctuation theory (slope = 0.5) and from the Weber-Fechner law (slope = 1.0). (Modified from Barlow (1957b).)

    At background intensities greater than 7 log units in Barlow’s study, the detection threshold rose proportionally with the background. This gives a slope of 1 on a log-log plot, and it is referred to as the Weber-Fechner region. In this luminance regime, the retina is not optimized for photon detection but instead implements luminance adaptation to support contrast constancy. We will examine this computation in the next chapter.

    Next, in the middle region of the increment threshold curve lies a regime defined by fluctuation theory, where the slope is 0.5. This regime has been interpreted as evidence of a fixed signal-to-noise ratio (SNR) threshold in photon detection by the following logic. In the increment detection task, an observer must detect I + ΔI photons in the spot location as greater than I photons in the background. Recall that SNR is the mean signal divided by the standard deviation, σ, of the noise, in this case, the standard deviation of the background photon count, σI. Because photon counts are Poisson, the mean light intensity, I, is always equal to its variance, σI². Therefore, as I increases, its standard deviation increases as √I. Thus, to maintain a fixed SNR, the increment threshold, ΔI, must increase in proportion to the square root of the background intensity, I, leading to a slope of 0.5 on the log-log increment threshold detection curve.
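    The square-root argument can be checked numerically. This is a minimal Python sketch, not a model of Barlow’s data; `SNR_CRITERION` is an arbitrary illustrative constant, since the criterion value affects only the curve’s height, never its slope.

```python
import math

SNR_CRITERION = 2.0  # hypothetical fixed SNR required to report "seen"

def increment_threshold(background):
    # Poisson background: variance = mean, so the noise sd is sqrt(I),
    # and a fixed SNR criterion requires dI = criterion * sqrt(I).
    return SNR_CRITERION * math.sqrt(background)

# Slope on log-log axes between two background intensities:
i1, i2 = 1e3, 1e5
slope = (math.log10(increment_threshold(i2)) - math.log10(increment_threshold(i1))) \
        / (math.log10(i2) - math.log10(i1))
# slope evaluates to 0.5, the fluctuation-theory prediction
```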

    Finally, the threshold detection curve has a flat region at the dimmest background intensities. In this regime, changing the background light intensity does not affect the detection threshold, and ΔI is the same as the absolute detection limit in darkness, as measured by Hecht, Shlaer, and Pirenne. Barlow argued that the light intensity at which the curve reaches this flat region is a measure of the system’s intrinsic biological noise, since extrinsic noise from background light must be comparable to intrinsic noise to affect the threshold. He called this value the intrinsic retinal noise or, more poetically, the dark light. Dark light is an equivalent light intensity, so its units are events per second (or events per rod per second, converting spot area to number of rods), but it represents the total noise of the system, from phototransduction all the way to the motor output to initiate a response. Barlow (1957b) calculated a dark light of 1260 events per degree of visual angle per second, equal to ~ 0.01 events per rod per second. As we will see, this value has a striking correspondence to biological phenomena, even though the experiments were purely behavioral.
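    As a back-of-envelope check, the two numbers quoted above jointly imply the rod count per unit of visual angle assumed in the conversion. This sketch uses only figures from the text:

```python
dark_light_per_deg = 1260.0  # Barlow (1957b): noise events per degree of
                             # visual angle per second
dark_light_per_rod = 0.01    # the equivalent per-rod rate quoted above

# The ratio recovers the number of rods per unit area of visual angle that
# the conversion between the two figures assumes:
implied_rods = dark_light_per_deg / dark_light_per_rod  # ~126,000 rods
```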

    Signal and noise for sparse photon detection through the retina

    The behavioral work of Hecht, Shlaer, and Pirenne, Barlow, and others framed the question of photon detection in a new way: as a retinal computation that must preserve sparse signals within a background of biological noise (Koenig and Hofer, 2011; Sakitt, 1972). We now know that the most sensitive RGCs collect from ~ 10,000 rods and that single-photon responses in only two or three of those rods within an ~ 50 ms integration period are sufficient to drive the percept of a flash (Ala-Laurila and Rieke, 2014). This remarkable sensitivity creates two constraints that have driven research on the mechanisms of the computation. First, the signal originating from a single photon absorbed by a single rhodopsin molecule must be amplified enormously to create a sufficiently large voltage change in a rod to alter neurotransmitter release. Eventually, through additional amplification, it must create one or more action potentials in an RGC. Second, the entire pathway from photon detection in rods to the motor command to initiate a response must be essentially noiseless, such that signals in the tiny proportion of responding neurons are not swamped by noise in the vast majority of cells that carry no signal.
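    One way to appreciate the second constraint is to combine the convergence figure with Barlow’s behavioral dark-light estimate from earlier in the chapter. This illustrative Python sketch (numbers taken from the text, pooling assumed to be naively linear) shows that the expected noise in the pool rivals the signal itself:

```python
rods_pooled = 10_000       # rods converging onto the most sensitive RGCs
integration_s = 0.050      # ~50 ms integration window
noise_rate_per_rod = 0.01  # Barlow's behavioral dark-light estimate (events/rod/s)

# Expected noise events summed linearly across the pool in one window:
expected_noise = rods_pooled * integration_s * noise_rate_per_rod  # ~5 events
signal_events = 2          # single-photon responses sufficient for a percept
```

    With only two or three true photon events against roughly five expected noise events, simple linear summation cannot explain behavioral sensitivity, which hints at why the circuit mechanisms discussed below matter.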

    The pathway in the retina for dim signals is well known (Fig. 1.3). Rods are the only photoreceptors sensitive in this regime, and they synapse onto rod bipolar cells (RBCs). Each RBC in the primate or mouse retina collects from ~ 20 rods. RBCs synapse onto AII amacrine cells, which form an electrically coupled network both with each other and with ON cone bipolar cells (CBCs) (Kolb and Nelson, 1983; Sterling et al., 1988; Tsukamoto et al., 2001). Thus, convergence at the level of the AII-ON CBC network is not a straightforward anatomical measurement, but estimates in mice, cats, and primates range from 300 to 800 rods pooled at this level in the circuit (Grimes et al., 2018; Sterling et al., 1988; Tsukamoto et al., 2001). Finally, ON CBCs synapse onto ON RGCs. In the primate retina, ON parasol RGCs are among the most sensitive to dim flashes, and depending on eccentricity, they can collect from up to 10,000 rods (Ala-Laurila and Rieke, 2014; Grimes et al., 2018). There are also OFF circuits that operate in dim conditions converging onto OFF RGCs. Since behavioral measurements described later in this chapter have shown that the ON pathway is responsible for photon detection (Smeds et al., 2019), this circuit will be our focus here.

    Fig. 1.3

    Fig. 1.3 The rod bipolar circuit of the mammalian retina and its convergence.

    (A) Schematic of the rod bipolar circuit in mammalian retina. Chemical synapses marked with arrows and neurotransmitters. Electrical synapses marked with resistors. Abbreviations: CBC, cone bipolar cell; RBC, rod bipolar cell; AC, amacrine cell; RGC, retinal ganglion cell; Gly, glycine; Glut, glutamate; iGluR, ionotropic glutamate receptor; mGluR, metabotropic glutamate receptor. (B) Rod convergence at different levels in the circuit. Note that there is also divergence, so the number of converging rods is not simply the product of convergence of each cell type.

    Amplification in rod phototransduction

    Single-photon responses in rods were first measured using the suction pipette technique developed by Baylor, Lamb, and Yau (Baylor et al., 1979a,b) (Fig. 1.4). Not only did these recordings serve as experimental confirmation that rods, indeed, transduce single photons into an electrical response, but they also allowed researchers, for the first time, to quantify both signal and noise at the first stage in vision. Measurements of noise and signal reproducibility that came from these recordings are detailed in the following sections, but first, we will consider the amplification that converts the absorption of a single photon into a macroscopic current in the rod outer segment.
