Artificial Intelligence in the Age of Neural Networks and Brain Computing
By Robert Kozma, Cesare Alippi and Yoonsuck Choe
About this ebook
Artificial Intelligence in the Age of Neural Networks and Brain Computing demonstrates that today's disruptive implications and applications of AI are a development of the unique attributes of neural networks: machine learning, distributed architectures, massively parallel processing, black-box inference, intrinsic nonlinearity, and smart autonomous search engines. The book covers the major basic ideas of brain-like computing behind AI, provides a framework for deep learning, and introduces novel and intriguing paradigms as future alternatives. The success of AI-based commercial products from top industry leaders, such as Google, IBM, Microsoft, Intel, and Amazon, can be interpreted through the lens of this book.
- Developed to commemorate the 30th anniversary of the International Neural Network Society (INNS), in conjunction with the 2017 International Joint Conference on Neural Networks (IJCNN)
- Authored by top experts, global field pioneers and researchers working on cutting-edge applications in signal processing, speech recognition, games, adaptive control and decision-making
- Edited by high-level academics and researchers in intelligent systems and neural networks
Artificial Intelligence in the Age of Neural Networks and Brain Computing
Editors
Robert Kozma
University of Memphis, Department of Mathematics, Memphis, TN, United States
University of Massachusetts Amherst, Department of Computer Science, Amherst, MA, United States
Cesare Alippi
Politecnico di Milano, Milano, Italy
Università della Svizzera italiana, Lugano, Switzerland
Yoonsuck Choe
Samsung Research & Texas A&M University, College Station, TX, United States
Francesco Carlo Morabito
University Mediterranea of Reggio Calabria, Reggio Calabria, Italy
Table of Contents
Cover image
Title page
Copyright
List of Contributors
Editors' Brief Biographies
Introduction
Chapter 1. Nature's Learning Rule: The Hebbian-LMS Algorithm
1. Introduction
2. ADALINE and the LMS Algorithm, From the 1950s
3. Unsupervised Learning With Adaline, From the 1960s
4. Robert Lucky's Adaptive Equalization, From the 1960s
5. Bootstrap Learning With a Sigmoidal Neuron
6. Bootstrap Learning With a More Biologically Correct Sigmoidal Neuron
7. Other Clustering Algorithms
8. A General Hebbian-LMS Algorithm
9. The Synapse
10. Postulates of Synaptic Plasticity
11. The Postulates and the Hebbian-LMS Algorithm
12. Nature's Hebbian-LMS Algorithm
13. Conclusion
Appendix: Trainable Neural Network Incorporating Hebbian-LMS Learning
Chapter 2. A Half Century of Progress Toward a Unified Neural Theory of Mind and Brain With Applications to Autonomous Adaptive Agents and Mental Disorders
1. Towards a Unified Theory of Mind and Brain
2. A Theoretical Method for Linking Brain to Mind: The Method of Minimal Anatomies
3. Revolutionary Brain Paradigms: Complementary Computing and Laminar Computing
4. The What and Where Cortical Streams Are Complementary
5. Adaptive Resonance Theory
6. Vector Associative Maps for Spatial Representation and Action
7. Homologous Laminar Cortical Circuits for All Biological Intelligence: Beyond Bayes
8. Why a Unified Theory Is Possible: Equations, Modules, and Architectures
9. All Conscious States Are Resonant States
10. The Varieties of Brain Resonances and the Conscious Experiences That They Support
11. Why Does Resonance Trigger Consciousness?
12. Towards Autonomous Adaptive Intelligent Agents and Clinical Therapies in Society
Chapter 3. Third Gen AI as Human Experience Based Expert Systems
1. Introduction
2. Third Gen AI
3. MFE Gradient Descent
4. Conclusion
Chapter 4. The Brain-Mind-Computer Trichotomy: Hermeneutic Approach
1. Dichotomies
2. Hermeneutics
3. Schizophrenia: A Broken Hermeneutic Cycle
4. Toward the Algorithms of Neural/Mental Hermeneutics
Chapter 5. From Synapses to Ephapsis: Embodied Cognition and Wearable Personal Assistants
1. Neural Networks and Neural Fields
2. Ephapsis
3. Embodied Cognition
4. Wearable Personal Assistants
Chapter 6. Evolving and Spiking Connectionist Systems for Brain-Inspired Artificial Intelligence
1. From Aristotle's Logic to Artificial Neural Networks and Hybrid Systems
2. Evolving Connectionist Systems (ECOS)
3. Spiking Neural Networks (SNN) as Brain-Inspired ANN
4. Brain-Like AI Systems Based on SNN. NeuCube. Deep Learning Algorithms
5. Conclusion
Chapter 7. Pitfalls and Opportunities in the Development and Evaluation of Artificial Intelligence Systems
1. Introduction
2. AI Development
3. AI Evaluation
4. Variability and Bias in Our Performance Estimates
5. Conclusion
Chapter 8. The New AI: Basic Concepts, and Urgent Risks and Opportunities in the Internet of Things
1. Introduction and Overview
2. Brief History and Foundations of the Deep Learning Revolution
3. From RNNs to Mouse-Level Computational Intelligence: Next Big Things and Beyond
4. Need for New Directions in Understanding Brain and Mind
5. Information Technology (IT) for Human Survival: An Urgent Unmet Challenge
Chapter 9. Theory of the Brain and Mind: Visions and History
1. Early History
2. Emergence of Some Neural Network Principles
3. Neural Networks Enter Mainstream Science
4. Is Computational Neuroscience Separate From Neural Network Theory?
5. Discussion
Chapter 10. Computers Versus Brains: Game Is Over or More to Come?
1. Introduction
2. AI Approaches
3. Metastability in Cognition and in Brain Dynamics
4. Multistability in Physics and Biology
5. Pragmatic Implementation of Complementarity for New AI
Chapter 11. Deep Learning Approaches to Electrophysiological Multivariate Time-Series Analysis
1. Introduction
2. The Neural Network Approach
3. Deep Architectures and Learning
4. Electrophysiological Time-Series
5. Deep Learning Models for EEG Signal Processing
6. Future Directions of Research
7. Conclusions
Chapter 12. Computational Intelligence in the Time of Cyber-Physical Systems and the Internet of Things
1. Introduction
2. System Architecture
3. Energy Harvesting and Management
4. Learning in Nonstationary Environments
5. Model-Free Fault Diagnosis Systems
6. Cybersecurity
7. Conclusions
Chapter 13. Multiview Learning in Biomedical Applications
1. Introduction
2. Multiview Learning
3. Multiview Learning in Bioinformatics
4. Multiview Learning in Neuroinformatics
5. Deep Multimodal Feature Learning
6. Conclusions
Chapter 14. Meaning Versus Information, Prediction Versus Memory, and Question Versus Answer
1. Introduction
2. Meaning Versus Information
3. Prediction Versus Memory
4. Question Versus Answer
5. Discussion
6. Conclusion
Chapter 15. Evolving Deep Neural Networks
1. Introduction
2. Background and Related Work
3. Evolution of Deep Learning Architectures
4. Evolution of LSTM Architectures
5. Application Case Study: Image Captioning for the Blind
6. Discussion and Future Work
7. Conclusion
Index
Copyright
Academic Press is an imprint of Elsevier
125 London Wall, London EC2Y 5AS, United Kingdom
525 B Street, Suite 1650, San Diego, CA 92101, United States
50 Hampshire Street, 5th Floor, Cambridge, MA 02139, United States
The Boulevard, Langford Lane, Kidlington, Oxford OX5 1GB, United Kingdom
Copyright © 2019 Elsevier Inc. All rights reserved.
No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage and retrieval system, without permission in writing from the publisher. Details on how to seek permission, further information about the Publisher’s permissions policies and our arrangements with organizations such as the Copyright Clearance Center and the Copyright Licensing Agency, can be found at our website: www.elsevier.com/permissions.
This book and the individual contributions contained in it are protected under copyright by the Publisher (other than as may be noted herein).
Notices
Knowledge and best practice in this field are constantly changing. As new research and experience broaden our understanding, changes in research methods, professional practices, or medical treatment may become necessary.
Practitioners and researchers must always rely on their own experience and knowledge in evaluating and using any information, methods, compounds, or experiments described herein. In using such information or methods they should be mindful of their own safety and the safety of others, including parties for whom they have a professional responsibility.
To the fullest extent of the law, neither the Publisher nor the authors, contributors, or editors, assume any liability for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions, or ideas contained in the material herein.
Library of Congress Cataloging-in-Publication Data
A catalog record for this book is available from the Library of Congress
British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library
ISBN: 978-0-12-815480-9
For information on all Academic Press publications visit our website at https://www.elsevier.com/books-and-journals
Publisher: Mara Conner
Acquisition Editor: Chris Katsaropoulos
Editorial Project Manager: John Leonard
Production Project Manager: Kamesh Ramajogi
Cover Designer: Christian J. Bilbow
Typeset by TNQ Technologies
List of Contributors
Cesare Alippi
Politecnico di Milano, Milano, Italy
Università della Svizzera italiana, Lugano, Switzerland
David G. Brown, US Food and Drug Administration, Silver Spring, MD, United States
Maurizio Campolo, NeuroLab, DICEAM, University Mediterranea of Reggio Calabria, Reggio Calabria, Italy
Yoonsuck Choe
Samsung Research, Seoul, Korea
Department of Computer Science and Engineering, Texas A&M University
Nigel Duffy, Sentient Technologies, Inc., San Francisco, CA, United States
Péter Érdi
Center for Complex Systems Studies, Kalamazoo College, Kalamazoo, MI, United States
Institute for Particle and Nuclear Physics, Wigner Research Centre for Physics, Hungarian Academy of Sciences, Budapest, Hungary
Daniel Fink, Sentient Technologies, Inc., San Francisco, CA, United States
Olivier Francon, Sentient Technologies, Inc., San Francisco, CA, United States
Paola Galdi, NeuRoNe Lab, DISA-MIS, University of Salerno, Fisciano, Italy
Stephen Grossberg, Center for Adaptive Systems Graduate Program in Cognitive and Neural Systems, Departments of Mathematics & Statistics, Psychological & Brain Sciences, and Biomedical Engineering, Boston University, Boston, MA, United States
Babak Hodjat, Sentient Technologies, Inc., San Francisco, CA, United States
Cosimo Ieracitano, NeuroLab, DICEAM, University Mediterranea of Reggio Calabria, Reggio Calabria, Italy
Nikola Kasabov, Knowledge Engineering and Discovery Research Institute – KEDRI, Auckland University of Technology, Auckland, New Zealand
Youngsik Kim, Department of Electrical Engineering, Stanford University, Stanford, CA, United States
Robert Kozma
University of Memphis, Department of Mathematics, Memphis, TN, United States
University of Massachusetts Amherst, Department of Computer Science, Amherst, MA, United States
Daniel S. Levine, University of Texas at Arlington, Arlington, TX, United States
Jason Liang
Sentient Technologies, Inc., San Francisco, CA, United States
The University of Texas at Austin, Austin, TX, United States
Nadia Mammone, NeuroLab, DICEAM, University Mediterranea of Reggio Calabria, Reggio Calabria, Italy
Elliot Meyerson
Sentient Technologies, Inc., San Francisco, CA, United States
The University of Texas at Austin, Austin, TX, United States
Risto Miikkulainen
Sentient Technologies, Inc., San Francisco, CA, United States
The University of Texas at Austin, Austin, TX, United States
Francesco Carlo Morabito, NeuroLab, DICEAM, University Mediterranea of Reggio Calabria, Reggio Calabria, Italy
Arshak Navruzyan, Sentient Technologies, Inc., San Francisco, CA, United States
Roman Ormandy, Embody Corporation, Los Gatos, CA, United States
Seiichi Ozawa, Kobe University, Kobe, Japan
Dookun Park, Department of Electrical Engineering, Stanford University, Stanford, CA, United States
Jose Krause Perin, Department of Electrical Engineering, Stanford University, Stanford, CA, United States
Bala Raju, Sentient Technologies, Inc., San Francisco, CA, United States
Aditya Rawal
Sentient Technologies, Inc., San Francisco, CA, United States
The University of Texas at Austin, Austin, TX, United States
Frank W. Samuelson, US Food and Drug Administration, Silver Spring, MD, United States
Angela Serra, NeuRoNe Lab, DISA-MIS, University of Salerno, Fisciano, Italy
Hormoz Shahrzad, Sentient Technologies, Inc., San Francisco, CA, United States
Harold Szu, Catholic University of America, Washington, DC, United States
Roberto Tagliaferri, NeuRoNe Lab, DISA-MIS, University of Salerno, Fisciano, Italy
The AI Working Group, Catholic University of America, Washington, DC, United States
Paul J. Werbos, US National Science Foundation, retired, and IntControl LLC, Arlington, VA, United States
Bernard Widrow, Department of Electrical Engineering, Stanford University, Stanford, CA, United States
Editors' Brief Biographies
Robert Kozma
ROBERT KOZMA is a Professor of Mathematics and Director of the Center for Large-Scale Integrated Optimization and Networks, University of Memphis, TN, USA, and Visiting Professor of Computer Science, University of Massachusetts Amherst. He is a Fellow of IEEE and a Fellow of the International Neural Network Society (INNS). He is President of INNS (2017–18) and serves on the Governing Board of the IEEE Systems, Man, and Cybernetics Society (2016–18). He has served on the AdCom of the IEEE Computational Intelligence Society and on the Board of Governors of INNS. Dr. Kozma is the recipient of the INNS Gabor Award (2011) and the Alumni Association Distinguished Research Achievement Award (2010). He has also served as a Senior Fellow (2006–08) of the US Air Force Research Laboratory. His research includes robust decision support systems, autonomous robotics and navigation, distributed sensor networks, brain networks, and brain–computer interfaces.
Cesare Alippi
CESARE ALIPPI is a Professor with the Politecnico di Milano, Milano, Italy, and the Università della Svizzera italiana, Lugano, Switzerland. He is an IEEE Fellow, a member of the Administrative Committee of the IEEE Computational Intelligence Society, a Board of Governors member of the International Neural Network Society (INNS), and a Board of Directors member of the European Neural Network Society. He received the IEEE CIS Outstanding Computational Intelligence Magazine Award (2018), the Gabor Award from INNS (2016), and the IEEE Computational Intelligence Society Outstanding Transactions on Neural Networks and Learning Systems Paper Award; in 2004, he received the IEEE Instrumentation and Measurement Society Young Engineer Award. His current research addresses adaptation and learning in nonstationary and time-variant environments, graph learning, and intelligence for embedded and cyber-physical systems.
Yoonsuck Choe
YOONSUCK CHOE is a Corporate Vice President at the Samsung Research Artificial Intelligence Center (2017–present) and Professor and Director of the Brain Networks Laboratory at Texas A&M University (2001–present). He received his PhD degree in computer science from the University of Texas at Austin in 2001. His research interests are in neural networks and computational neuroscience, and he has published over 100 papers on these topics, including a research monograph on computations in the visual cortex. He serves on the Executive Committee of the International Neural Network Society (INNS). He served as Program Chair and General Chair for IJCNN2015 and IJCNN2017, respectively, and has served on the editorial boards of IEEE Transactions on Neural Networks and the INNS journal Neural Networks.
Francesco Carlo Morabito
FRANCESCO CARLO MORABITO is a Professor of Electrical Engineering with the University Mediterranea of Reggio Calabria, Italy, Former Dean of the Faculty of Engineering (2001–08), and Deputy Rector of the University. He is now serving as the Vice-Rector for International Relations (2012–18). He is a Foreign Member of the Royal Academy of Doctors, Spain (2004), and a Member of the Institute of Spain, Barcelona Economic Network (2017). He served as a Governor of the International Neural Network Society for 12 years and as President of the Italian Society of Neural Networks (2008–14). He has served in the organization of IJCNN conferences (Tutorial, International Liaison, European Link, Plenary). He has coauthored over 400 papers in various fields of engineering, is coauthor of 15 books, and holds three international patents. He is an Associate Editor for the International Journal of Neural Systems, Neural Networks, Sensors, and Renewable Energy.
Introduction
We live in the era of Artificial Intelligence (AI), and AI is everywhere: on the front page of your favorite newspaper, in the smartphone in your pocket, on your kitchen table, in your car, on the street, at your office, on trains and airplanes. The success of AI-based commercial products from many important companies, such as Google, IBM, Microsoft, Intel, and Amazon, to name a few, can be interpreted as the result of a successful synergy among what we call Computational Intelligence, Natural Intelligence, Brain Computing, and Neural Engineering.
The emergence of AI in many IT technologies happened almost overnight, in the past couple of years. The blessing and the curse of AI are here, and this is just the beginning, for better or for worse. How did all this happen so suddenly? Partly, it required the powerful computing offered by advanced chips at low cost. It also required the massive amounts of data available through the Internet and prolific communication resources, known as Big Data. That is not all: computational algorithms called deep learning (DL) provide the framework for the programming approaches. The term deep learning was coined about a decade ago, yet many experts employing these technologies do not realize that DL is rooted in the technology developed by the biologically motivated neural networks field since the 1960s.
Neural networks have thus powerfully reemerged, under different names and with different meanings, in new and sometimes unexpected contexts within the current wave of AI and DL. They represent a well-grounded paradigm rooted in many disciplines, including computer science, physics, psychology, information science, and engineering.
This volume collects selected invited contributions from pioneers and experts in the field of neural networks. The collection aims to show that the present implications and applications of AI are a development of the unique attributes of neural networks, namely, machine learning, distributed architectures, massively parallel processing, black-box inference, intrinsic nonlinearity, and smart autonomous search engines. We strive to cover the major basic ideas of brain-like computing behind AI, to provide a framework for DL, and to launch intriguing novel paradigms as possible future alternatives.
This book has been designed to commemorate the 30th anniversary of the International Neural Network Society (INNS), following the 2017 International Joint Conference on Neural Networks in Anchorage, AK, USA, May 14–18, 2017. The conference was organized jointly by INNS and the IEEE Computational Intelligence Society (CIS) and is the premier international meeting for researchers and other professionals in neural networks and related areas, including neural network theory, DL, computational neuroscience, robotics, and distributed intelligence.
The chapters included here are written by a blend of top experts, worldwide-recognized pioneers of the field, and researchers working on cutting-edge applications in signal processing, speech recognition, games, adaptive control, and decision-making. Our intent is to present the concepts involved not to a narrow group of specialists but to a broad segment of the public intrigued by recent advances in AI.
The volume presents an introduction and 15 peer-reviewed contributions, briefly described below.
In Chapter 1, Widrow et al. reconsider Hebbian learning, originally proposed in neurobiology, as one of the bases of (unsupervised) adaptive algorithms derived directly from nature. Although the LMS algorithm was originally proposed by Widrow and Hoff as a supervised learning procedure, it can also be implemented in an unsupervised fashion. The two algorithms can thus be combined to form the Hebbian-LMS unsupervised learning algorithm, which may be a key to interpreting nature's way of learning at the level of the neuron and the synapse.
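The flavor of such an update can be sketched in a few lines of Python. This is an illustrative approximation rather than the authors' exact formulation: the tanh nonlinearity and the constants mu and gamma are assumptions made here for concreteness.

```python
import math

def hebbian_lms_step(w, x, mu=0.01, gamma=0.5):
    """One unsupervised Hebbian-LMS-style weight update (sketch).

    s is the weighted sum of the inputs. The training "error" is the
    gap between the neuron's sigmoidal output and a scaled copy of the
    sum, so no external teacher signal is required -- the update is
    driven entirely by the neuron's own activity.
    """
    s = sum(wi * xi for wi, xi in zip(w, x))
    error = math.tanh(s) - gamma * s  # tanh stands in for the sigmoid
    return [wi + 2 * mu * error * xi for wi, xi in zip(w, x)]
```

Because the error vanishes where the sigmoid crosses the scaled sum, repeated updates drive each neuron's weighted sum toward stable equilibria, which is what allows inputs to be clustered without labels.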
In Chapter 2, Grossberg surveys the main principles, architectures, and circuits proposed over half a century of research in the field, whose aim has been to develop a unified theory of brain and mind in which the psychological perspective can be read through the emergence of brain mechanisms. The chapter describes revolutionary paradigms, such as complementary computing and laminar computing, with reference to the autonomous adaptive intelligence characteristic of the brain. It also revisits the fundamental approach of adaptive resonance theory (ART) as a core model for engineering and technology, as well as a source of insights into mental disorders such as autism and Alzheimer's disease.
Chapter 3 is the work presented by the AI Working Group spearheaded by H. Szu and coordinated by M. Wardlaw under the aegis of the Office of Naval Research (ONR). This work provides a vista of AI from the pioneering age in the 1960s, starting with narrowly defined rule-based systems and moving through adaptive AI approaches using supervised and unsupervised neural networks. The authors elaborate on third-generation AI based on the Zadeh-Freeman dynamic fuzzy theory, in which the Zadeh fuzzy open sets and fuzzy membership functions are not predefined; rather, they evolve as the result of self-organizing recurrent chaotic neural networks, following Freeman neurodynamics. Their approach is truly human-centered and holds the promise of breakthroughs in AI beyond today's cutting-edge DL.
In Chapter 4, Érdi presents an insightful review of hermeneutics applied to brain science. The brain–computer–mind trichotomy is discussed, with downward causality presented as a unique feature of brain-mind as opposed to computation. Hermeneutics is then introduced and applied to the brain, and it is argued that the brain is in principle a hermeneutic device. One application of this idea is an explanation of schizophrenia as the result of a broken hermeneutic cycle. Finally, the chapter concludes with thoughts on how to achieve algorithms for neural/mental hermeneutics. This is a deep theoretical essay that touches on fundamental issues in brain and neural sciences.
In Chapter 5, Ormandy addresses crucial issues related to the limitations of mainstream AI and neural network technologies, especially regarding the usefulness of AI in developing new technologies to improve our quality of life. He describes work started in collaboration with the late Walter Freeman to capture the dynamics of embodied human cognition and incorporate it into novel wearable personal assistants. The author reviews the literature from field theories to embodied cognition across species. The main thesis of this work is the critical importance of ephaptic interactions between neural populations, which produce neural fields measurable by noninvasive means, thus opening an opportunity for wearable personal assistants in everyday life, including augmented memory, stress relief, fitness training, relaxation, and other applications.
In Chapter 6, Kasabov presents an approach based on evolving connectionist systems (ECOS), which can evolve their architecture and functionality in an adaptive, data-driven manner. Evolving spiking neural networks (eSNN) are illustrated and proposed as a third generation of artificial neural networks (ANN). eSNN can serve as a basis for future brain-like AI, and the NeuCube architecture is presented as a machine that can implement DL procedures. The chapter also proposes combining AI and ANN approaches into a unique method derived from neuroscience.
In Chapter 7, Brown et al. focus on the pitfalls and opportunities in developing techniques for evaluating AI systems. This is a timely topic, given the significant progress that DL methodologies have brought to the computational intelligence community, and the authors raise the problem of measuring and comparing performance. The receiver operating characteristic (ROC) paradigm and the bootstrap method are presented as well-grounded approaches to performance measurement, helping to avoid the overestimation of performance that frequently limits practical implementations of AI.
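To make the ROC/bootstrap idea concrete, here is a minimal, self-contained sketch (not taken from the chapter) of estimating a percentile confidence interval for the area under the ROC curve (AUC) by resampling cases with replacement; the function names and constants are illustrative assumptions:

```python
import random

def auc(pos_scores, neg_scores):
    """Empirical AUC: the probability that a randomly chosen positive
    case outscores a randomly chosen negative case (ties count half)."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

def bootstrap_auc_ci(pos, neg, n_boot=1000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for AUC: resample the cases with
    replacement, recompute AUC each time, and read off the quantiles."""
    rng = random.Random(seed)
    stats = sorted(
        auc([rng.choice(pos) for _ in pos],
            [rng.choice(neg) for _ in neg])
        for _ in range(n_boot))
    lo = stats[int(alpha / 2 * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```

Reporting the interval rather than a single AUC value is one way to expose the variability in performance estimates that the chapter warns about.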
In Chapter 8, Werbos provides an impressive summary and a critical review of the reasons underlying today's wave of AI successes, including DL and the Internet-of-Things (IoT). As a major exponent both in research and research funding in the past decades, he provides an exciting insider's view of these developments, as well as points toward possible avenues in future research, neuroscience in particular. He also points out the key role researchers play in applying the novel technological development for the benefit of humanity.
In Chapter 9, Levine reviews the history of neural networks as an artificial model of brain and mind. Neural networks are a paradigm that in principle links biology and technology: it thus comes as no surprise that the flagship journal of the International Neural Network Society is indeed Neural Networks. His chapter reconsiders the original ideas that motivated the birth of this interdisciplinary society in light of present developments.
In Chapter 10, Kozma touches on a fundamental problem humans have pondered for centuries: that of creating machines that act like them. The chapter investigates various aspects of biological and artificial intelligence and introduces a balanced approach based on the concepts of complementarity and multistability as manifested in human brain operation and cognitive processing. As intelligence in human brains results from a delicate balance between fragmentation into local components and global dominance of coherent overall states, the chapter elaborates on how intelligence is manifested through the emergence of a sequence of coherent metastable amplitude patterns. This behavior leads to the cinematic theory of human cognition, which both provides insights into key principles of intelligence in biological brains and helps in building more powerful artificially intelligent devices.
In Chapter 11, Morabito et al. introduce a comprehensive investigation of DL applications in brain engineering and biomedical signal processing, with a particular focus on the processing of multivariate time-series coming from electrophysiology. Electroencephalography (EEG), high-density EEG, and magnetoencephalography technologies are reviewed, as they constitute the measurement systems yielding multivariate electrophysiological time-series. The use of DL methods for multivariate EEG time-series processing is then detailed to permit the reader to easily enter this fascinating application framework. Future directions of research in DL, encompassing interpretability, architectures and learning procedures, and robustness aspects, are then discussed, so as to provide the reader with some relevant open research topics.
In Chapter 12, Alippi et al. present a timely and thorough review of the use of computational intelligence and machine learning methods in cyber-physical systems (CPS) and the IoT. The review covers four major research topics in this domain: (1) system architecture; (2) energy harvesting, conservation, and management; (3) fault detection and mitigation; and (4) cyberattack detection and countermeasures. Importantly, the authors challenge assumptions that are taken for granted but no longer apply to increasingly complex CPS and IoT systems: high energy availability, stationarity, correct data availability, and security guarantees. This chapter provides an excellent review of the status of CPS and the IoT and of how to overcome the issues in these emerging fields.
In Chapter 13, Tagliaferri et al. present an innovative approach to multiview learning, a branch of machine learning, for the analysis of multimodal data in biomedical applications, focusing in particular on bioinformatics (i.e., gene expression, microRNA expression, protein–protein interactions, genome-wide association). The proposed approach captures information regarding different aspects of the underlying principles governing complex biological systems. The authors also propose an example of how clustering and classification can be combined in a multiview setting for the automated diagnosis of neurodegenerative disorders, and they show through application examples how recent DL techniques can be applied to multimodal data to learn complex representations.
In Chapter 14, Choe provides informed discussion and analysis of some key concepts central to brain science and AI, namely those associated with the dichotomies meaning vs. information, prediction vs. memory, and question vs. answer. The author shows how a slightly different view of these concepts can help us move forward, beyond the current limits of our understanding in these fields. In detail, the chapter elaborates on the intriguing definition of information as seen from different perspectives and its relationship with the concept of meaning. It then investigates the roles of plasticity and memory, and how they relate to prediction. Finally, the focus moves to the question vs. answer issue in AI algorithms and how it impacts their ability to solve problems.
In Chapter 15, Miikkulainen et al. present a novel automated method for designing deep neural network architectures. The main idea is to use neuroevolution to evolve the neural network topology and parameters; in the proposed work, neuroevolution is extended to evolve topology, components (modules), and hyperparameters. The method is applied both to feedforward architectures such as CNNs and to recurrent neural networks (with LSTM units). It is tested on standard image tasks (object recognition) and natural language tasks (image captioning), demonstrating results comparable to state-of-the-art methods. This chapter provides a great overview of the evolutionary methods developed at Sentient Technologies for the design and optimization of deep neural networks.
Chapter 1
Nature's Learning Rule
The Hebbian-LMS Algorithm
Bernard Widrow, Youngsik Kim, Dookun Park, and Jose Krause Perin Department of Electrical Engineering, Stanford University, Stanford, CA, United States
Abstract
Hebbian learning is widely accepted in the fields of psychology, neurology, and neurobiology. It is one of the fundamental premises of neuroscience. The LMS (least mean square) algorithm of Widrow and Hoff is the world's most widely used adaptive algorithm, fundamental in the fields of signal processing, control systems, communication systems, pattern recognition, and artificial neural networks. These learning paradigms are very different: Hebbian learning is unsupervised, while LMS learning is supervised. However, a form of LMS can be constructed to perform unsupervised learning and, as such, LMS can be used in a natural way to implement Hebbian learning. Combining the two paradigms creates a new unsupervised learning algorithm, Hebbian-LMS. This algorithm has practical engineering applications and provides insight into learning in living neural networks. A fundamental question is: how does learning take place in living neural networks? Nature's little secret, the learning algorithm practiced by nature at the neuron and synapse level, may well be the Hebbian-LMS algorithm.
Keywords
Adaptive filtering; Bootstrap learning; Clustering; Decision-directed learning; Hebbian learning; Hebbian-LMS algorithm; LMS algorithm; Neural networks; Synaptic plasticity
Chapter Outline
1. Introduction
2. ADALINE and the LMS Algorithm, From the 1950s
3. Unsupervised Learning With Adaline, From the 1960s
4. Robert Lucky's Adaptive Equalization, From the 1960s
5. Bootstrap Learning With a Sigmoidal Neuron
6. Bootstrap Learning With a More Biologically Correct Sigmoidal Neuron
6.1 Training a Network of Hebbian-LMS Neurons
7. Other Clustering Algorithms
7.1 K-Means Clustering
7.2 Expectation-Maximization Algorithm
7.3 Density-Based Spatial Clustering of Applications With Noise (DBSCAN) Algorithm
7.4 Comparison Between Clustering Algorithms
8. A General Hebbian-LMS Algorithm
9. The Synapse
10. Postulates of Synaptic Plasticity
11. The Postulates and the Hebbian-LMS Algorithm
12. Nature's Hebbian-LMS Algorithm
13. Conclusion
Appendix: Trainable Neural Network Incorporating Hebbian-LMS Learning
Acknowledgments
References
1. Introduction
Donald O. Hebb has had considerable influence in the fields of psychology and neurobiology since the publication of his book The Organization of Behavior in 1949 [1]. Hebbian learning is often described as: "neurons that fire together wire together."
Now imagine a large network of interconnected neurons whose synaptic weights are increased because the presynaptic neuron and the postsynaptic neuron fired together. This might seem strange. What purpose would nature fulfill with such a learning algorithm?
In his book, Hebb actually said: "When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased."
"Fire together wire together" is a simplification of this. "Wire together" means increase the synaptic weight. "Fire together" is not exactly what Hebb said, but some researchers have taken it literally and believe that information is carried in the timing of each activation pulse. Some believe that the precise timing of presynaptic and postsynaptic firings has an effect on synaptic weight changes. There is some evidence for these ideas [2–4], but they remain controversial.
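As a rough illustrative sketch (our own code, not from the chapter), the plain "fire together wire together" rule can be written as a weight change proportional to the product of presynaptic and postsynaptic activity. Note that with correlated positive activity the weights can only grow, which is the saturation problem raised later in this introduction:

```python
import numpy as np

def hebbian_update(w, x, eta=0.01):
    """Plain Hebbian rule: dw = eta * (presynaptic activity) * (postsynaptic activity).

    Here the postsynaptic activity is taken as the neuron's weighted sum.
    With positive, correlated inputs, every update increases the weights,
    so under this rule alone they can only grow toward saturation.
    """
    y = np.dot(w, x)           # postsynaptic activity
    return w + eta * y * x     # "fire together, wire together"

w = np.array([0.5, 0.5])
x = np.array([1.0, 1.0])       # both presynaptic inputs active together
for _ in range(10):
    w = hebbian_update(w, x)
# the weights have only increased under correlated activity
```

This is precisely why, as the text notes, the bare rule must be supplemented before it yields a useful learning algorithm.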
Neuron-to-neuron signaling in the brain is done with pulse trains. This is AC coupling, one of nature's good ideas, avoiding the effects of DC level drift that could be caused by the presence of fluids and electrolytes in the brain. We believe that the output signal of a neuron is the neuron's firing rate as a function of time.
Neuron-to-neuron signaling in computer-simulated artificial neural networks is done in most cases with DC levels. If a static input pattern vector is presented, the neuron's output is an analog DC level that remains constant as long as the input pattern vector is applied. That analog output can be weighted by a synapse and applied as an input to another, postsynaptic neuron in a layered or otherwise interconnected network.
The purpose of this chapter is to review a new learning algorithm that we call Hebbian-LMS [5]. It is an implementation of Hebb's teaching by means of the LMS algorithm of Widrow and Hoff. With the Hebbian-LMS algorithm, unsupervised or autonomous learning takes place locally, in the individual neuron and its synapses, and when many such neurons are connected in a network, the entire network learns autonomously. One might ask: what does it learn? This question will be considered below, where applications will be presented.
There is another question that can be asked: should we believe in Hebbian learning? Did Hebb arrive at this idea by doing definitive biological experiments, by ‘getting his hands wet’? The answer is no. The idea came to him by intuitive reasoning. Like Newton's theory of gravity, like Einstein's theory of relativity, like Darwin's theory of evolution, it was a thought experiment propounded long before modern knowledge and instrumentation could challenge it, refute it, or verify it. Hebb described synapses and synaptic plasticity, but how synapses and neurotransmitters worked was unknown in Hebb's time. So far, no one has contradicted Hebb, except for some details. For example, learning with "fire together wire together" alone would cause the synaptic weights only to increase until all of them reached saturation. That would make an uninteresting neural network, and nature would not do this. Gaps in the Hebbian learning rule will need to be filled, keeping Hebb's basic idea in mind, and well-performing adaptive algorithms will be the result. The Hebbian-LMS algorithm will have engineering applications, and it may provide insight into learning in living neural networks.
The thinking that led us to the Hebbian-LMS algorithm has its roots in a series of discoveries made in the decades after Hebb, from the late 1950s through the 1960s. These discoveries are reviewed in the next three sections. The sections beyond describe Hebbian-LMS and how this algorithm could be nature's algorithm for learning at the neuron and synapse level.
2. ADALINE and the LMS Algorithm, From the 1950s
Adaline is an acronym for Adaptive Linear Neuron. A block diagram of the original Adaline is shown in Fig. 1.1. Adaline was adaptive but not really linear, and it was more than a neuron, since it also included the weights or synapses. Nevertheless, Adaline was the name given in 1959 by Widrow and Hoff.
Adaline was a trainable classifier. The input patterns, the vectors Xk, k = 1,2,…,N, were weighted by the weight vector Wk = [w1k,w2k,…,wnk]T. Each input pattern Xk was to be classified as a +1 or a −1 in accord with its assigned class, the desired response.
Adaline was trained to accomplish this by adjusting the weights to minimize mean square error. The error was the difference between the desired response dk and the sum yk: ek = dk − yk. Adaline's final output qk was taken as the sign of the sum yk, that is, qk = SGN(yk), where SGN(·) is the signum function, which takes the sign of its argument. The sum yk will henceforth be referred to as (SUM)k.
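For concreteness, Adaline's forward computation can be sketched in a few lines (an illustrative rendering with our own variable names, not code from the chapter):

```python
import numpy as np

def adaline_output(w, x):
    """Adaline forward pass: the sum (SUM)k = w . x, and output qk = SGN((SUM)k)."""
    s = np.dot(w, x)             # the analog sum (SUM)k
    q = 1 if s >= 0 else -1      # quantized output qk, the signum of the sum
    return s, q

w = np.array([0.2, -0.5, 0.1])   # example weight vector
x = np.array([1.0, 1.0, -1.0])   # a +1/-1 input pattern
s, q = adaline_output(w, x)      # s is about -0.4, so q = -1
```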
Figure 1.1 Adaline (adaptive linear neuron).
The weights of Adaline were trained with the LMS algorithm, as follows:
Wk+1 = Wk + 2μekXk (1.1)

ek = dk − XkTWk (1.2)

where the parameter μ controls stability and the speed of convergence.
Averaged over the set of training patterns, the mean square error is a quadratic function of the weights, a quadratic "bowl". The LMS algorithm uses the methodology of steepest descent, a gradient method, for pulling the weights to the bottom of the bowl, thus minimizing the mean square error.
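The quadratic bowl can be made explicit with a standard derivation from adaptive filtering theory (following the treatment in Widrow and Stearns [7]; the symbols R and P below are conventional notation, not defined in this chapter):

```latex
% With R = E[X_k X_k^T] (input correlation matrix)
% and  P = E[d_k X_k]   (cross-correlation vector),
% the mean square error is quadratic in the weights:
\mathrm{MSE}(W) = E[e_k^2] = E[d_k^2] - 2P^{T}W + W^{T}RW
% Its gradient with respect to the weights is
\nabla_W \,\mathrm{MSE} = -2P + 2RW
% Steepest descent moves the weights against the gradient:
W_{k+1} = W_k - \mu \,\nabla_W \,\mathrm{MSE}
% LMS replaces the true gradient by the instantaneous estimate
% \hat{\nabla} = -2 e_k X_k, which yields the simple update
% W_{k+1} = W_k + 2\mu\, e_k X_k.
```

The key simplification of LMS is the last step: no expectations need to be computed; each pattern presentation supplies a noisy but unbiased estimate of the gradient.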
The LMS algorithm was invented by Widrow and Hoff in 1959 [6], 10 years after the publication of Hebb's seminal book. The derivation of the LMS algorithm is given in many references. One such reference is the book Adaptive Signal Processing by Widrow and Stearns [7]. The LMS algorithm is the most widely used learning algorithm in the world today. It is used in adaptive filters that are key elements in all modems, for channel equalization and echo canceling. It is one of the basic technologies of the Internet and of wireless communications. It is basic to the field of digital signal processing.
The LMS learning rule is quite simple and intuitive. Eqs. (1.1) and (1.2) can be represented in words:
With the presentation of each input pattern vector and its associated desired response, the weight vector is changed slightly by adding the pattern vector to the weight vector, making the sum more positive, or subtracting the pattern vector from the weight vector, making the sum more negative, changing the sum in proportion to the error in a direction to make the error smaller.
A photograph of a physical Adaline made by Widrow and Hoff in 1960 is shown in Fig. 1.2. The input patterns of this Adaline were binary, 4 × 4 arrays of pixels, each pixel having a value of +1 or −1, set by the 4 × 4 array of toggle switches. Each toggle switch was connected to a weight, implemented by a potentiometer.