Competitive Learning: Fundamentals and Applications for Reinforcement Learning through Competition
By Fouad Sabry
About this ebook
What Is Competitive Learning
In artificial neural networks, competitive learning is a form of unsupervised learning in which nodes compete for the right to respond to a subset of the input data. A variant of Hebbian learning, it operates by raising the level of specialization at each node in the network, and it works quite well for discovering clusters hidden within data.
How You Will Benefit
(I) Insights, and validations about the following topics:
Chapter 1: Competitive Learning
Chapter 2: Self-organizing map
Chapter 3: Perceptron
Chapter 4: Unsupervised Learning
Chapter 5: Hebbian Theory
Chapter 6: Backpropagation
Chapter 7: Multilayer Perceptron
Chapter 8: Learning Rule
Chapter 9: Feature Learning
Chapter 10: Types of artificial neural networks
(II) Answering the public top questions about competitive learning.
(III) Real world examples for the usage of competitive learning in many fields.
Who This Book Is For
Professionals, undergraduate and graduate students, enthusiasts, hobbyists, and anyone who wants to go beyond basic knowledge of competitive learning.
What Is Artificial Intelligence Series
The artificial intelligence book series provides comprehensive coverage in over 200 topics. Each ebook covers a specific Artificial Intelligence topic in depth, written by experts in the field. The series aims to give readers a thorough understanding of the concepts, techniques, history and applications of artificial intelligence. Topics covered include machine learning, deep learning, neural networks, computer vision, natural language processing, robotics, ethics and more. The ebooks are written for professionals, students, and anyone interested in learning about the latest developments in this rapidly advancing field.
The artificial intelligence book series provides an in-depth yet accessible exploration, from the fundamental concepts to the state-of-the-art research. With over 200 volumes, readers gain a thorough grounding in all aspects of Artificial Intelligence. The ebooks are designed to build knowledge systematically, with later volumes building on the foundations laid by earlier ones. This comprehensive series is an indispensable resource for anyone seeking to develop expertise in artificial intelligence.
Book preview
Competitive Learning - Fouad Sabry
Chapter 1: Competitive learning
In artificial neural networks, competitive learning is a form of unsupervised learning in which nodes compete for the right to respond to a subset of the input data. A subtype of Hebbian learning, competitive learning works by increasing the specialization of each node in the network, and it is particularly useful for locating clusters within data sets.
Models and algorithms such as vector quantization and self-organizing maps (Kohonen maps) are founded on the principle of competitive learning.
A competitive learning rule has three primary components:

1. A set of neurons that are identical to one another except for their randomly assigned synaptic weights, so that the neurons respond differently to a given set of input patterns.

2. A limit placed on the strength of each neuron.

3. A mechanism that lets the neurons compete for the right to respond to a given subset of inputs, such that only one output neuron (or only one neuron per group) is active (i.e., on) at any one moment. The neuron that wins the competition is called a winner-take-all neuron.
The individual neurons of the network thus learn to specialize on ensembles of similar input patterns and, in doing so, become 'feature detectors' for different classes of input patterns.
Because competitive networks recode sets of correlated inputs to one of a few output neurons, the redundancy in representation, which is a key component of processing in biological sensory systems, may effectively be eliminated.
Competitive learning is usually implemented with neural networks that contain a hidden layer, commonly known as the competitive layer.
Every competitive neuron is described by a vector of weights

\mathbf{w}_i = \left( w_{i1}, \dots, w_{id} \right)^{T}, \quad i = 1, \dots, M,

and calculates a similarity measure between the input data

\mathbf{x}^{n} = \left( x_{n1}, \dots, x_{nd} \right)^{T} \in \mathbb{R}^{d}

and the weight vector \mathbf{w}_i.

For each input vector, the competitive neurons compete with one another to see which of them is the most similar to that particular input vector. The winner neuron m sets its output to o_m = 1, and all the other competitive neurons set their outputs to o_i = 0, i = 1, \dots, M, i \neq m.

Usually, similarity is measured by the inverse of the Euclidean distance \left\| \mathbf{x}^{n} - \mathbf{w}_i \right\| between the input vector \mathbf{x}^{n} and the weight vector \mathbf{w}_i.
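The winner-take-all rule just described can be sketched in a few lines of NumPy. The weight matrix and the example input below are illustrative assumptions, not values from the text.

```python
import numpy as np

def winner_take_all(x, W):
    """Given an input vector x of shape (d,) and a weight matrix W of shape
    (M, d), return the one-hot output vector: o_m = 1 for the neuron whose
    weight vector is closest to x, and o_i = 0 for all others."""
    distances = np.linalg.norm(W - x, axis=1)  # Euclidean distance to each weight vector
    m = np.argmin(distances)                   # winner: smallest distance = largest similarity
    o = np.zeros(len(W))
    o[m] = 1.0
    return o

# Example: three competitive neurons in a 2-D input space (illustrative values)
W = np.array([[0.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0]])
o = winner_take_all(np.array([0.9, 0.8]), W)  # neuron 1, closest to [1, 1], wins
```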
A simple competitive learning approach for locating three clusters within given input data is shown here.
1. (Set-up.) Take a set of sensors that all feed into three distinct nodes, so that every sensor is connected to every node. Assign each node a random weight between 0.0 and 1.0 for each sensor. The output of each node is the sum over all of its sensors of each sensor's signal intensity multiplied by its weight.
2. When the network is presented with a new input, the node with the greatest output is the winner. The input is classified as belonging to the cluster corresponding to that node.
3. The winner revises each of its weights by shifting weight away from connections that have given it fewer or weaker signals and toward connections that have given it more or stronger signals.
As a result, as more data are obtained, each node moves closer to the center of the cluster it has come to represent, activating more strongly for inputs associated with that cluster and less strongly for inputs associated with other clusters.
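The three steps above can be sketched as follows. The number of sensors, the learning rate, and the toy cluster centers are illustrative assumptions; the winner is chosen by the largest weighted-sum output, and its weights are shifted toward the current input.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1. Set-up: three nodes, each connected to every sensor (d sensors),
#    with weights drawn uniformly at random from [0.0, 1.0].
d, n_nodes, lr = 2, 3, 0.1
W = rng.uniform(0.0, 1.0, size=(n_nodes, d))

def present(x):
    """Steps 2-3: pick the winner by greatest output (weighted sum of the
    sensor signals), then shift its weights toward the stronger signals."""
    outputs = W @ x                       # step 2: output = sum of signal * weight
    winner = np.argmax(outputs)
    W[winner] += lr * (x - W[winner])     # step 3: move winner's weights toward the input
    return winner

# Toy data drawn around three cluster centers (illustrative)
centers = np.array([[0.1, 0.1], [0.9, 0.1], [0.5, 0.9]])
for _ in range(200):
    c = centers[rng.integers(3)]
    present(c + rng.normal(0.0, 0.05, size=d))
```

As training proceeds, each node's weight vector drifts toward the center of the cluster whose inputs it keeps winning.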
{End Chapter 1}
Chapter 2: Self-organizing map
A self-organizing map (SOM) or self-organizing feature map (SOFM) is an unsupervised machine learning technique used to produce a low-dimensional (typically two-dimensional) representation of a higher dimensional data set while preserving the topological structure of the data.
For example, a data set with p variables measured in n observations could be represented as clusters of observations with similar values for the variables.
These clusters then could be visualized as a two-dimensional map
such that observations in proximal clusters have more similar values than observations in distal clusters.
This can make high-dimensional data easier to visualize and analyze.
An SOM is a type of artificial neural network but is trained using competitive learning rather than the error-correction learning (e.g., backpropagation with gradient descent) used by other artificial neural networks. The SOM was introduced by the Finnish professor Teuvo Kohonen in the 1980s and therefore is sometimes called a Kohonen map or Kohonen network. SOMs create internal representations reminiscent of the cortical homunculus, a distorted representation of the human body, based on a neurological map
of the areas and proportions of the human brain dedicated to processing sensory functions, for different parts of the body.
Self-organizing maps, like most artificial neural networks, operate in two modes: training and mapping. First, training uses an input data set (the input space) to generate a lower-dimensional representation of the input data (the map space). Second, mapping classifies additional input data using the generated map.
In most cases, the goal of training is to represent an input space with p dimensions as a map space with two dimensions. (An input space with p variables is said to have p dimensions.) A map space consists of components called nodes or neurons, which are arranged as a two-dimensional hexagonal or rectangular grid. The number of nodes and their arrangement are specified beforehand, based on the larger goals of the analysis and exploration of the data.
Each node in the map space is associated with a weight vector, which is the position of the node in the input space. While nodes in the map space stay fixed, training consists of moving the weight vectors toward the input data (reducing a distance metric such as Euclidean distance) without spoiling the topology induced by the map space. After training, the map can be used to classify additional observations from the input space by finding the node whose weight vector is closest (smallest distance metric) to the input-space vector.
The goal of learning in the self-organizing map is to cause different parts of the network to respond similarly to certain input patterns. This is partly motivated by how visual, auditory or other sensory information is handled in separate parts of the cerebral cortex in the human brain.
The weights of the neurons are initialized either to small random values or sampled evenly from the subspace spanned by the two largest principal component eigenvectors. With the latter alternative, learning is much faster because the initial weights already give a good approximation of SOM weights.
The network must be fed a large number of example vectors that represent, as closely as possible, the kinds of vectors expected during mapping. The examples are usually administered several times, as iterations.
The training utilizes competitive learning.
When a training example is fed to the network, its Euclidean distance to all weight vectors is computed.
The neuron whose weight vector is most similar to the input is called the best matching unit (BMU).
The weights of the BMU and neurons close to it in the SOM grid are adjusted towards the input vector.
The magnitude of the change decreases with time and with the grid-distance from the BMU.
The update formula for a neuron v with weight vector W_v(s) is

W_{v}(s+1) = W_{v}(s) + \theta(u, v, s) \cdot \alpha(s) \cdot (D(t) - W_{v}(s)),

where s is the step index, t is an index into the training sample, u is the index of the BMU for the input vector D(t), α(s) is a monotonically decreasing learning coefficient, and θ(u, v, s) is the neighborhood function that gives the distance between neuron u and neuron v at step s.
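A minimal sketch of this update rule follows. The Gaussian neighborhood function, the exponential decay schedule for α and for the neighborhood radius, and the 5×5 grid size are common but assumed choices; the formula itself is the one given above.

```python
import numpy as np

def som_update(W, grid, x, s, n_steps, alpha0=0.5, sigma0=2.0):
    """One SOM training step: find the BMU u for input x = D(t), then move
    every neuron v toward x, scaled by theta(u, v, s) * alpha(s).
    W: (M, d) weight vectors; grid: (M, 2) fixed node positions on the map."""
    decay = np.exp(-s / n_steps)
    alpha = alpha0 * decay                        # monotonically decreasing learning coefficient
    sigma = sigma0 * decay                        # neighborhood radius shrinks over time
    u = np.argmin(np.linalg.norm(W - x, axis=1))  # best matching unit (closest weight vector)
    grid_dist2 = np.sum((grid - grid[u]) ** 2, axis=1)
    theta = np.exp(-grid_dist2 / (2 * sigma**2))  # Gaussian neighborhood on the map grid
    W += (theta * alpha)[:, None] * (x - W)       # W_v(s+1) = W_v(s) + theta*alpha*(D(t) - W_v(s))
    return W

# 5x5 rectangular map embedded in a 3-D input space (illustrative sizes)
rng = np.random.default_rng(1)
grid = np.array([[i, j] for i in range(5) for j in range(5)], dtype=float)
W = rng.random((25, 3))                           # small random initial weights
for s in range(100):
    W = som_update(W, grid, rng.random(3), s, n_steps=100)
```

Because the neighborhood shrinks with s, early steps drag large regions of the map together while later steps fine-tune individual neurons.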