Prostheses for the Brain: Introduction to Neuroprosthetics
Ebook · 804 pages · 8 hours


About this ebook

Prostheses for the Brain: Introduction to Neuroprosthetics bridges the disciplines involved in neuroprosthetics and provides the interdisciplinary base required for understanding neuroprosthetic devices. It introduces basic aspects from the physical, bioengineering, and medical perspectives, and forms a common knowledge base. It provides an entry point to the field and sets realistic expectations regarding both the potential and the limitations of these devices, in design as well as in outcomes.

The book additionally reviews the technology behind the most frequently used and most clinically successful neuroprosthetic devices. It provides the physiological background for their function as well as the technology behind them. Finally, the authors suggest possible future developments that may play a crucial role in new prostheses for the brain. This gives the reader a comprehensive view of the principles and applications of neuroprostheses. The book grew out of the course the authors teach on neuroprostheses and is ideal for students, engineers, and medical professionals in this field.

  • Introduces the general principles of conductivity of electrolytes and the processes at the tissue–electrode interface
  • Describes safety issues and regulatory rules, clarifies conceptual differences between stimulating and sensing electrodes
  • Reviews stimulation strategies, tissue reactions, potential medical complications, brain adaptations and the clinically most successful applications of neuroprostheses
Language: English
Release date: Apr 3, 2021
ISBN: 9780128188934
Author

Andrej Kral

Andrej Kral received an MD in general medicine in 1993 and a PhD in pathological physiology in 1998 from the Medical School of Comenius University, Slovak Republic. Since 2009, he has been professor of auditory neurophysiology at Hannover Medical School, director of the Department of Experimental Otology, and co-director of its Institute of AudioNeuroTechnology. In 2017, he became a member of the National Academy of Science, and in 2018 he was appointed professor of systems neuroscience at Macquarie University, Sydney, Australia. Dr. Kral has been teaching different aspects of neuroscience and neuroprosthetics since 1998 (to medical students, biologists, and engineers). He has published more than 90 peer-reviewed articles, including in high-impact journals such as New England J Med, Science, Lancet Neurol, Nat Neurosci, Trends Neurosci, and Brain, as well as several reviews on deafness in both clinical and theoretical journals. He coauthored a book on computational neuroscience, edited volume 47 of the Springer Handbook of Auditory Research, and authored a chapter in volume 20 of that series. Dr. Kral has also contributed chapters to several edited volumes, including the recent “The Auditory Cortex” (Springer). His areas of expertise include electrical stimulation of neurons, cochlear implants, central neuroprosthetics, and the plasticity and development of the brain. For more details, publications, and a complete CV, see www.neuroprostheses.com.



    Part 1

    Introduction

    Chapter 1: The historic preconditions of neural prostheses

    Abstract

    Neural prostheses are devices that assist or restore function lost as a result of damage to the nervous system. Chapter 1 introduces the historic developments necessary for the emergence of neural prostheses in the field of biomedical technology. This area of medicine developed from two converging areas of science: information technology and neuroscience. Information technology allowed for the separation of information from its physical substrate and provided tools that quantify and process this information to develop software and control equipment. Neuroscience discovered that the nervous system uses electric phenomena to control the body and that neurons process information using these phenomena. Together, the progression of these fields enabled the development of artificial systems that could control or replace nervous system function.

    Keywords

    Neuron doctrine; Information theory; Computer science; Turing machine; Electrophysiology; Bioelectricity

    We are often not aware of changes that take place within our society. While changes that overturn everyday life may in retrospect appear as revolutions, on a day-to-day basis they may be small and incremental. Such incremental changes eventually have a fundamental impact on society. Sociologists call small changes that lead to large leaps in societal structure baseline shifts.

    Today we are experiencing such a change. The digitization of our lives progresses constantly and influences daily life in the form of the Internet, smartphones, and tablets. We use them to buy tickets for the bus, plane, or train, to watch movies, stream TV programs, and listen to our favorite music, and to navigate through unknown places. We are digitizing our culture. This revolution, with its positive and negative consequences, affects medicine in a fundamental way. It has provided us with one of the mightiest tools that science has developed: the computer and information technology.

    This revolution is affecting the treatment of nervous system diseases in particular. Many new treatments rest on the success of the field of bionics (biologically inspired engineering). By mimicking nature, bionics leads to the design of supportive devices that can subsequently enhance or replace the function of the original biology (prosthetics). Since the brain is an information-processing device, progress in computer technology and prosthetic devices will lead to the treatment of many neural diseases, and potentially to the extension of brain capabilities. However, development of neural prostheses is complex. It requires (1) an understanding of nervous system function, (2) an understanding of how to interface with the nervous system using technology, and (3) a level of technical development that allows for the fabrication of devices that imitate, compensate, or replace nervous system function.

    In clinical neuroscience there has been a baseline shift in the application of bionics to the treatment of disease, since we are now able to intervene in conditions that were at the border of science fiction only a few years ago. We can treat deaf subjects with cochlear implants, alleviate symptoms of Parkinson's disease using deep brain stimulation, help to reduce chronic pain, and restore lost bladder control. Many new applications are in clinical use or rapidly approaching it. How did this field come so far in such a short time, and where were the fundamental roots of these approaches?

    Historic roots of biomedical technology

    Knowledge about the brain has accumulated over decades of neuroscientific research, primarily driven by experimentation on animal (model) systems. This allowed us to understand the fundamental principles of nervous function. In the past, the brain was regarded as a syncytium: a tissue where individual cells fuse into a huge mesh without clear delineation of their original borders and with extensive communication of the intracellular space (this is indeed partly true in the heart or for the placenta of an unborn child). The histologists Camillo Golgi (1843–1926) and Santiago Ramon y Cajal (1852–1934) studied the brains of animals and deceased humans. Golgi developed a method that stains individual nerve cells (neurons, Fig. 1.1). Using this method, Ramon y Cajal overturned the theory that the brain is a syncytium and demonstrated that the cells in the brain are separate and distinct (Chapter 4). From this we learned that the brain is an intricate structure of an immense number of such neurons (estimates for the human brain reach 10¹¹), each of them connected with thousands of other neurons (in some cases up to 100,000 or more). It was Ramon y Cajal who then explicitly formulated the neuron doctrine: that individual neurons are the elementary processing elements of the brain. Neurons are cells specialized to process information, coded in an electrical signal on the neuronal membrane and processed in the chemical contacts between neurons (the synapses).

    We have subsequently understood that a network of such cells can process information in a complex way: not only can it perform logical operations, as Warren McCulloch and Walter Pitts showed in the 1940s, but it can also store information in a distributed way, via the pattern of network connections rather than in individual cells. A neuronal network can process information and generate sequences of events (as required for motor action), and a multi-layered neuronal network can learn in a desired (supervised) way; deep-learning artificial intelligence is inspired by this and has become a part of our daily cell phone usage, for example when transcribing spoken language into text or talking to Siri. Over the centuries we have also learned that the brain is the seat of our mind, our feelings, and our perceptions. Now the principles of action of the brain have allowed computers and electronics to extend back into the body and communicate with organic tissue.

    Fig. 1.1

    Fig. 1.1 Drawings of neurons from the cerebellum, stained using the Golgi method, by S. Ramon y Cajal. The Golgi stain showed conclusively that the brain consists of distinct nerve cells, or neurons, that form an interconnected network. From Museo Cajal in Madrid, Spain, public domain.
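
    To make the McCulloch-Pitts idea concrete, the following minimal sketch (our illustration, not taken from the book; all names are our own) shows a threshold neuron with binary inputs computing elementary logical operations:

```python
# A minimal McCulloch-Pitts threshold neuron: the unit fires (outputs 1)
# when the weighted sum of its binary inputs reaches a threshold.

def mcculloch_pitts(inputs, weights, threshold):
    """Return 1 if the weighted input sum reaches the threshold, else 0."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Logical AND: both excitatory inputs must be active.
assert mcculloch_pitts([1, 1], [1, 1], threshold=2) == 1
assert mcculloch_pitts([1, 0], [1, 1], threshold=2) == 0

# Logical OR: a single active input suffices.
assert mcculloch_pitts([0, 1], [1, 1], threshold=1) == 1

# NOT via an inhibitory (negative) weight plus a bias input fixed at 1.
assert mcculloch_pitts([1, 1], [1, -1], threshold=1) == 0  # NOT(1) = 0
assert mcculloch_pitts([1, 0], [1, -1], threshold=1) == 1  # NOT(0) = 1
```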

    To reach this point, two streams of science had to come together: the concept of virtual representations and virtual instructions (information theory and computers) and the knowledge of electric fields and bioelectricity. Both of these were equally important and, interestingly, also started in parallel. They involved two different branches of knowledge: technology and natural science.

    History of computer technology

    The separation of information from its signal, and the storage of this information, started millennia ago with the invention of written words. Clay tablets from Mesopotamia (around 2000 BCE) allowed for the first time the storage of information and the use of this information to artificially combine signs and develop new meanings. Information could be conserved by transforming its form, that is, by separating the information from the signals transmitting it. Over centuries this principle developed into the alphabet that we use today, including centuries of development of writing instruments and the evolution from clay tablets to papyrus or metal rolls, and from leather to paper. At the same time, mathematicians realized that devices used to store mathematical information, such as the abacus, could also be used to simplify the calculation (i.e., processing) of that information. These concepts formed the groundwork for what would become information technology: the study of systems and devices that store, convert, and process information.

    The roots of modern computing technology arose in France and in England in the eighteenth century. The device that initiated the development of a computing machine was the loom controlled by punch cards (Fig. 1.2), the most famous built in 1804 by Joseph Marie Jacquard (1752–1834) in Lyon, France, to allow the efficient manufacturing of textiles with complex patterns, like brocade or damask, that otherwise required extensive manual work and were prone to error. The introduction of the Jacquard loom substantially sped up the weaving of complex fabrics. Using punch cards, a production pattern was predefined (and could be modified nearly at will) for the machine to read out and execute. It was the first time that a machine followed an algorithmic instruction that could be varied to produce different outcomes. In this way, the Jacquard loom and its less famous predecessors were the first devices that physically separated instructions (programs) from the machine hardware. The loom revealed that information could be extracted, abstracted, stored, and then fed back into a system to reproduce the original output. This invention proved that computing technology was not only academically interesting but also a useful technique for practical economic applications.

    Fig. 1.2

    Fig. 1.2 Loom of Jean Baptiste Falcon, a predecessor of the famous Jacquard loom, with the wooden punch cards in front. Thread could be fed down through a system of punch cards to convert stored information into patterned weaving. This eliminated the need for error-prone humans to follow complex weaving patterns and is the first modern example of a computing machine. (Photograph by Rama, Wikimedia Commons, Cc-by-sa-2.0-fr.)

    The Jacquard loom is the predecessor of machines produced, for example, by IBM that were initially used to simplify population counting and store large amounts of information. These devices were used to replace human computers (a term signifying employees who manually performed mathematical calculations; usually women). The oldest human computers were probably Hipparchus of Nicaea (180–125 BCE) and Claudius Ptolemy (90–165 CE), who generated tables for the tracing of celestial objects. Human computers were popular from the sixteenth through the nineteenth centuries; they were used to calculate tables to track celestial objects over time, to determine the movement of comets, for the navigation of ships, for military purposes, in banks for performing financial calculations, and so on. Human computers developed mathematical tables for computing goniometric functions and logarithms, an essential tool for all students until ~ 30 years ago. Further innovation towards a computing machine capable of following sequences of instructions faster and with fewer errors than a human was necessary to enable greater insight into neuroscience.

    One essential step in the development of computers was the development of machine code. The initial contribution came from the German polymath Gottfried Wilhelm Leibniz (1646–1716, Fig. 1.3), who developed binary calculus. Inspired by old Chinese writings and some contemporary works, he formalized computation in binary numbers, anticipating information theory as well as the development of computing machines. With this work, Leibniz laid the conceptual foundations of computer science and cybernetics (in addition to other monumental contributions in the fields of mathematics, physics, and other sciences).

    Fig. 1.3

    Fig. 1.3 Gottfried Wilhelm Leibniz (1646–1716). German polymath and inventor of binary calculus. By his formalization of calculation with binary numbers he formed the mathematical basis of machine code used in present-day computers. (Painting by Christoph Bernard Francke.)

    The genuine ideas for real computing machines, however, date to Charles Babbage (1791–1871, Fig. 1.4), who imagined and designed plans for mechanical machines that were to follow a program and perform mathematical computations (an Analytical Engine and a Difference Engine). While he designed the principles of such machines, including a punch-card system inspired by the Jacquard loom, he never built a functional prototype. His successors much later built a machine to Babbage's design and eventually proved its applicability.

    Fig. 1.4

    Fig. 1.4 Charles Babbage (1791–1871). Inventor of the first mechanical computer.

    Babbage was very skilled in selling his ideas and receiving funding for his work. Even though his machines were never built, they inspired the theory of computation and led to many subsequent inventions. For example, Ada Lovelace (1815–1852, Fig. 1.5), a mathematician interested in Babbage's work, conceptually designed the first computer programs for Babbage's Engines, which already included subroutines and conditional branching. For this reason, she can be considered the first software developer. Since Babbage's machine was never built, these programs were never practically tested and used during Lovelace's life, although they were subsequently proven functional. To honor Lovelace's contribution to computer technology, her name was selected for the programming language Ada, developed by CII Honeywell Bull; it shared some similarity with the more popular language Pascal, widely used for nearly two decades on many computer systems.

    Fig. 1.5

    Fig. 1.5 Ada Lovelace (1815–1852). Pioneer of software development. Ada Lovelace formulated the basic ideas of computer programs; subroutines and conditional branching were also her inventions.

    It took nearly another 100 years until Alan Turing (1912–1954, Fig. 1.6) wondered what happens in the mind of human computers: are human nature and consciousness really necessary to perform complex mathematical computations? In his influential 1936 paper On computable numbers, with an application to the Entscheidungsproblem, he provided parts of the answer. He realized that human computers simply follow a set of rules (instructions) and apply them to the provided data. Thus a virtual machine (the universal Turing machine) could perform the same tasks, provided it had enough memory and a processor following a sequence of instructions. In this sense Alan Turing followed the ideas of Lovelace, Jacquard, and others and constructed a formal mathematical foundation for computer science.

    Fig. 1.6

    Fig. 1.6 Alan Turing (1912–1954). Pioneer of computer science.
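
    A machine of this kind can be sketched in a few lines. The toy simulator below (our illustration; the rule-table format and all names are our own) reduces computation to exactly the ingredients Turing identified: a tape (memory), a read/write head, a state, and a fixed table of instructions. Here it increments a binary number:

```python
# A toy Turing machine: tape + head + state + rule table.
# Rules map (state, symbol) -> (new_symbol, move, new_state).

def run_turing_machine(tape, rules, state="start", head=0, halt="halt"):
    """Apply the rule table until the halt state is reached."""
    tape = dict(enumerate(tape))               # sparse tape, blank = ' '
    while state != halt:
        symbol = tape.get(head, " ")
        new_symbol, move, state = rules[(state, symbol)]
        tape[head] = new_symbol
        head += {"R": 1, "L": -1, "N": 0}[move]
    return "".join(tape[i] for i in sorted(tape)).strip()

# Increment a binary number: walk right to its end, then add 1 with carry.
rules = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", " "): (" ", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),   # 1 + carry -> 0, carry on
    ("carry", "0"): ("1", "N", "halt"),    # 0 + carry -> 1, done
    ("carry", " "): ("1", "N", "halt"),    # overflow into a new digit
}

print(run_turing_machine("1011", rules))   # prints 1100 (11 + 1 = 12)
```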

    In parallel with the development of information theory, technological innovation fostered the quantification and miniaturization of information flow. Punch cards were bulky physical objects that had to be mechanically transported from place to place; this is too slow a technique for most practical applications of information interchange. Samuel Thomas Soemmerring (1755–1830) was an influential anatomist (he discovered the macula lutea in the retina) and a famous inventor. Building on the inventions in the field of electricity, he suggested the use of electrical signals to transmit written information. The received information was made visible by electrochemical reactions leading to bubble formation in a fluid, which also caused acoustic signals. The codes used were complex and not practical, but Samuel F. B. Morse (1791–1872) improved the transmission speed substantially by transforming the alphabet into binary signals based on duration (short and long signals), using the frequency of occurrence to further boost transmission speed (e.g., the most frequent letter, E, was coded by the shortest signal). The invention of Morse code allowed fast transmission of information and finally led to the practical use of the telegraph. The invention of the telephone by Alexander Graham Bell (1847–1922) and many follow-up inventions in his Bell Laboratories, together with the wireless telegraphy developed by Guglielmo Marconi (1874–1937) and Karl F. Braun (1850–1918; Braun also built the first cathode-ray tubes, the basis for television sets), revolutionized and sped up information interchange over large distances.
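
    Morse's frequency trick is easy to demonstrate. The snippet below (our illustration) uses a few standard International Morse codes to show how common letters encode to short signals and rare letters to long ones:

```python
# Frequency-based coding in Morse: frequent letters get short codes.
MORSE = {"E": ".", "T": "-", "A": ".-", "N": "-.",
         "Q": "--.-", "J": ".---", "X": "-..-", "Z": "--.."}

def encode(text):
    """Encode text as Morse symbols separated by spaces."""
    return " ".join(MORSE[ch] for ch in text.upper())

print(encode("EAT"))   # . .- -            (frequent letters, short signal)
print(encode("JQZ"))   # .--- --.- --..    (rare letters, long signal)
```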

    The final essential step in information theory took place in the famous Bell Laboratories. These laboratories provided scientists with extensive academic freedoms, similar to the traditional academic freedoms of university professors in some countries, backed by the very solid financial funding of the Bell Telephone Company (later renamed the American Telephone & Telegraph Company, AT&T). The scientists in Bell Laboratories were allowed to follow any idea they considered important, and were only obliged to respond to the requests of other scientists and members of Bell's company. One of these scientists was Claude E. Shannon (1916–2001); he made excellent use of these freedoms and decided to focus on measuring information. Bell's invention, by that time already in worldwide use, was transmitting information, but nobody could measure (quantify) it. Claude Shannon realized that a measure of information is not related to its interpretation (meaning), but to how surprising (unexpected) the information is. To measure information it is sufficient to measure the change or variance in a signal, that is, the part of the signal that is surprising or unexpected. Only the unexpected is true (new) information. In his 1948 paper A mathematical theory of communication, based on the binary calculus developed by Leibniz, he measured information using entropy (the measure of disorder used in thermodynamics). Here the elementary amount of information, the informational atom, was one binary digit, abbreviated bit. Using his formalisms he eventually transformed information into an exact science. From this point, the technological development of computing machines was purely a matter of the physical development and miniaturization of components that allow computational processing of electrical information, such as the transistor and later the integrated circuit.
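
    Shannon's measure can be stated in a few lines of code. The sketch below (our illustration; function names are our own) computes the entropy of a source in bits per symbol: a fair coin yields one full bit per toss, while a heavily biased, highly predictable coin carries almost no information:

```python
# Shannon entropy: average surprise of a source, in bits per symbol.
from math import log2

def entropy(probabilities):
    """H = -sum(p * log2(p)), ignoring zero-probability symbols."""
    return -sum(p * log2(p) for p in probabilities if p > 0)

print(entropy([0.5, 0.5]))    # 1.0 bit   (maximally unexpected)
print(entropy([0.99, 0.01]))  # ~0.08 bits (almost no surprise)
```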

    These developments established the concept that information processing can be separated from the substrate on which it runs, and that complex electrical or mechanical machines can then interpret and transform this information based on a set of pre-programmed rules. While our brains do not work identically to computers, they are also responsible for the transmission and transformation of information, and neuroscience required an understanding of information theory to progress as a field. This background was essential for the development of active prostheses, which process some kind of input information and convert it into output signals that finally stimulate neurons. Such prostheses are controlled by dedicated computers that stimulate neurons of the brain; for this, computers had to become small, cheap, and easy to implement. The development of computers thus initiated the development of prosthetics that control neuronal activity based on some kind of input: they need to process the input and transform it into a signal that can be meaningfully processed by the brain.

    Development of electrophysiology

    The second root of neuroprosthetics, bioelectricity, originated in Italy. The electrical revolution was initiated by Alessandro Volta (1745–1827, Fig. 1.7), who described static electricity; subsequently he and others observed similar phenomena in nervous tissue: the discovery of bioelectricity. Luigi Galvani (1737–1798, Fig. 1.8) used static electricity to stimulate the muscle of a frog, and his bimetal experiments paved the way for the invention of the battery. More importantly, his frog experiments proved that animals use electricity in their bodies. This was the period of great electrical inventions, and electricity was a controversial and popular topic of the time, similar to today's smartphones. A product of this time was also the novel by Mary Godwin (later Shelley) (1797–1851) called Frankenstein, or The Modern Prometheus. Here electricity played a key role: the body constructed from parts of dead men was brought to life by an igniting electric shock. Thus, electricity was considered the source of the spark of life. These early ideas proved false, but such fascination with electricity allowed for further innovation.

    Fig. 1.7

    Fig. 1.7 Alessandro Volta (1745–1827). Described static electricity and bioelectricity.

    Fig. 1.8

    Fig. 1.8 Luigi Galvani (1737–1798). Performed the first muscle stimulation experiments with electricity.

    While a link between electricity and the body was proven from the onset of electrical science, at that time technology was insufficient to explore the electrical processes in the brain in more detail. Nearly 100 years of further technological development finally enabled the recording of electrical phenomena in individual neurons. The first recordings of bioelectric signals from nerve and muscle were made by Emil Heinrich du Bois-Reymond (1818–1896), followed in 1868 by the recording of nerve potentials by Julius Bernstein (1839–1917) and much later by Edgar Douglas Adrian (1898–1977) together with Keith Lucas (1879–1916). The latter developed the capillary electrometer, a technical device that recorded action potentials extracellularly from the sciatic nerve of a frog. This technological development initiated the series of studies by Adrian. Subsequently, Joseph Erlanger (1874–1965) and Herbert Spencer Gasser (1888–1963) followed this approach in the first decades of the 1900s. The recording of electrical phenomena allowed for the calculation of nerve fiber conduction velocity, which revealed that anatomically similar neurons were further differentiated according to their functional properties. Many measures used in clinical neurology today still refer to these observations.

    The nervous system, as these researchers could demonstrate, and today's computers share aspects of information processing, and both use electrical signals to convey information. In the brain, pulse-like depolarizations of the cell's outer membrane, lasting ~ 1–2 ms and having an amplitude of ~ 90 mV, called action potentials, serve this function (see Chapter 4). In modern technical equipment, transistor-transistor logic (TTL) pulses, for example, perform a similar function. Alan L. Hodgkin (1914–1998) and Andrew F. Huxley (1917–2012) finally described the physiological process behind action potentials mathematically, and their work allowed us to understand the molecular mechanisms of action potentials.
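
    The Hodgkin-Huxley description is itself computational: a membrane capacitance charged and discharged by voltage-dependent sodium, potassium, and leak conductances. The compact simulation below (our illustration using standard squid-axon parameters and simple forward-Euler integration, not code from the book) produces the brief, all-or-none action potentials described above when a current step is applied:

```python
# Minimal Hodgkin-Huxley membrane model (standard squid-axon parameters).
from math import exp

g_na, g_k, g_l = 120.0, 36.0, 0.3        # max conductances (mS/cm^2)
e_na, e_k, e_l = 50.0, -77.0, -54.4      # reversal potentials (mV)
c_m = 1.0                                # membrane capacitance (uF/cm^2)

def alpha_beta(v):
    """Voltage-dependent opening/closing rates of the m, h, n gates."""
    a_m = 0.1 * (v + 40.0) / (1.0 - exp(-(v + 40.0) / 10.0))
    b_m = 4.0 * exp(-(v + 65.0) / 18.0)
    a_h = 0.07 * exp(-(v + 65.0) / 20.0)
    b_h = 1.0 / (1.0 + exp(-(v + 35.0) / 10.0))
    a_n = 0.01 * (v + 55.0) / (1.0 - exp(-(v + 55.0) / 10.0))
    b_n = 0.125 * exp(-(v + 65.0) / 80.0)
    return (a_m, b_m), (a_h, b_h), (a_n, b_n)

v, m, h, n = -65.0, 0.05, 0.6, 0.32      # approximate resting state
dt, spikes = 0.01, []                    # time step (ms), spike times
for step in range(int(50.0 / dt)):       # simulate 50 ms
    i_ext = 10.0 if step * dt > 5.0 else 0.0   # current step (uA/cm^2)
    (a_m, b_m), (a_h, b_h), (a_n, b_n) = alpha_beta(v)
    m += dt * (a_m * (1 - m) - b_m * m)
    h += dt * (a_h * (1 - h) - b_h * h)
    n += dt * (a_n * (1 - n) - b_n * n)
    i_ion = (g_na * m**3 * h * (v - e_na) + g_k * n**4 * (v - e_k)
             + g_l * (v - e_l))
    v_new = v + dt * (i_ext - i_ion) / c_m
    if v < 0.0 <= v_new:                 # upward zero crossing = spike
        spikes.append(round(step * dt, 2))
    v = v_new

print(f"{len(spikes)} action potentials at t = {spikes} ms")
```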

    In parallel, neuroscience began to understand that the function of the brain is separated into different processing streams, which are sometimes physically discrete. Pierre Paul Broca (1824–1880) documented a specific loss of language function following damage to a distinct brain region (the area is now called Broca's area). This was followed by the work of Gustav Theodor Fritsch (1838–1927) and Julius Eduard Hitzig (1838–1907), who in 1870 documented that electrical stimulation of a certain brain region (now called the motor cortex) caused limb movements on the opposite side of the body in a dog. A similar technique has been repeatedly used to delineate the function of human brain regions, most widely exploited by the neurosurgeon Wilder Penfield (1891–1976) to identify language areas. This was of key importance for avoiding damage to these areas during brain operations, and thus the loss of a key cognitive function: language. Penfield realized another cardinal aspect of this technique: it allowed him to better localize different neuronal functions to distinct brain regions. This had a huge impact on both neuroscience and neurology.

    Using these developments, it was established that the brain uses electrical phenomena for processing of information, both for generating perceptions and for controlling muscle activity. This knowledge, combined with the general principles of coding within sensory organs, allows us to transfer information between technology and neural tissue.

    The concept of a neural prosthesis

    Since we now know how to manipulate and transform information across different signals, and we understand that the brain also processes and stores information, we can in theory use technology to convey information from a constructed device to the brain, and vice versa. That is the basic concept of a neural prosthesis.

    Diseases of the nervous system remained unrecognized for a long time in human history. This was partly because humans were long unable to localize the seat of the human mind, and the director of our body, to the brain. Traditionally, the heart had been considered the organ where feelings are generated, and many famous philosophers, including Aristotle, failed to recognize the brain as the center of our thinking. For this reason, treatments involving the brain are relatively new in medical practice. After scientists discovered the principles of electricity, it became possible to reveal electric phenomena in the brain and to understand that neurons control muscles that exert physical force. Particularly during World War I, brain damage became a subject of medical research and diagnosis. While the principles of brain stimulation, as used by Fritsch and Hitzig, were already known, the technology was very underdeveloped and precluded any practical use of such techniques beyond diagnostic purposes. Here electrophysiology increasingly involved direct electrical stimulation, as currently performed to test peripheral nerves (the H-reflex).

    The brain, like any other organ of the body, is subject to degeneration and disease. Inborn metabolic diseases can affect the function of neurons, as can inflammation and infection, injury, intoxication, vascular disease, and tumors. The brain can be affected locally or globally, and damage to its structures means that functions cannot be executed effectively. This may involve our sensory systems, our motor system, or the integrative functions of the brain that combine several individual functions into one; perception and cognition are examples of such functions. The brain is a differentiated tissue with discrete neuronal and regional functions that, in contrast to the brains of some species (like lizards), does not regenerate after injury in humans. While the growth of new neurons has been discovered in some parts of the brain, their numbers are far too low to provide the brain with a substrate for regeneration. Individual nerve cells can change their function and connective structure to some extent (this is the substrate for learning), but they cannot replace neurons lost to injury. Controlling the process of regeneration has been an elusive goal of molecular biology, and we cannot yet encourage parts of the brain to regrow. However, an understanding of brain function provides us with the possibility to interfere using artificial systems, that is, to replace the lost function with a technical device (a prosthesis). Prosthetic devices can alleviate the consequences of some brain diseases or injuries. Prostheses do not directly treat the disease, but they can partially (or ideally fully) replace some of the lost function.

    It has required the substantial progress in science and technology outlined in this chapter to allow the manipulation of the human nervous system in a way that is pragmatically useful for replacing some lost function in patients. Today we know that this is possible, and the potential of this technology is still growing. The field has remained focused on replacing degenerated or underdeveloped sensory functions, treating motor disease, and modulating pathologic behaviors. As shown in this introductory book, this typically requires active implants that are placed in the body and can sense neuronal activity or stimulate neurons. These devices are highly complex and must be biologically inert (they must not undergo chemical reactions when placed in the body, and must not disintegrate); some such implants have to be hermetically sealed to prevent any interaction between the technology and the bodily fluids. Active implants interact with neurons using electrical fields, and this can influence the health of the surrounding tissue. Electric current may dissolve the implant material and cause further chemical reactions in the tissue; eventually, such reactions may lead to tissue damage. Therefore, neural prostheses are complex and potentially dangerous devices that must be carefully developed, evaluated, and implemented in order to provide useful benefit to patients. The diverse set of scientific concepts necessary to produce these implants means that scientists, engineers, and clinicians working on these devices must have a broad base of knowledge. This book thus provides an overview of the biology, engineering, and clinical applications necessary to properly understand neural prosthesis development.

    Brief summary

    • Neural prostheses are devices that replace or enhance the function of the nervous system.

    • The modern understanding of neural prostheses relies on the complementary development of medical and information technologies.

    • Information and signal can be separated, and information can be stored independently of the original signal.

    • The brain transforms, processes, and stores information using a combination of electrical and chemical signals.

    • In conjunction with computer technology, it is possible to accurately record and decode bioelectrical signals.

    • Artificial electrical signals can be used to generate bioelectrical signals.

    Key literature and further reading

    Ashby W.R. An Introduction to Cybernetics. Chapman & Hall, Ltd; 1957.

    Cooper S., van Leeuwen J. Alan Turing: His Work and Impact. 1st ed. Elsevier Science; 2013.

    Davis M., Sigal R., Weyuker E.J. Computability, Complexity, and Languages: Fundamentals of Theoretical Computer Science. 2nd ed. Academic Press; 1994.

    Essinger J. Jacquard's Web: How a Hand-Loom Led to the Birth of the Information Age. Oxford: Oxford University Press; 2004.

    Finger S. Origins of Neuroscience: A History of Explorations into Brain Function. US: Oxford University Press; 2001.

    Finger S. Paul Broca (1824–1880). J. Neurol. 2004;251:769–770.

    Hyman A., ed. Science and Reform: Selected Works of Charles Babbage. Cambridge, England: Cambridge University Press; 1989.

    Hammerman R., Russell A.L. Ada's Legacy: Cultures of Computing from the Victorian to the Digital Age. Morgan & Claypool; 2015.

    Johnston J.B. The Nervous System of Vertebrates. Philadelphia: P. Blakiston's Son & Co.; 1906.

    Pancaldi G. Volta, Science and Culture in the Age of Enlightenment. Princeton University Press; 2003.

    Posselt E.A. The Jacquard Machine, Analyzed and Explained: With an Appendix on the Preparation of Jacquard Cards, and Practical Hints to Learners of Jacquard Designing. Philadelphia: Dando Printing and Publishing Co.; 1887.

    Shannon C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948;27:379–423, 623–656.

    Turing A.M. On computable numbers, with an application to the Entscheidungsproblem. Proc. Lond. Math. Soc. 1936;s2-42:230–265.

    Verkhratsky A., Krishtal O.A., Petersen O.H. From Galvani to patch clamp: the development of electrophysiology. Pflugers Arch.
