Competitive Learning: Fundamentals and Applications for Reinforcement Learning through Competition
Ebook · 125 pages · 1 hour


About this ebook

What Is Competitive Learning


In artificial neural networks, competitive learning is a form of unsupervised learning in which nodes compete for the right to respond to a subset of the input data. It is a variant of Hebbian learning: it works by increasing the specialization of each node in the network, and it is particularly effective at discovering clusters hidden within data.


How You Will Benefit


(I) Insights and validations about the following topics:


Chapter 1: Competitive Learning


Chapter 2: Self-organizing map


Chapter 3: Perceptron


Chapter 4: Unsupervised Learning


Chapter 5: Hebbian Theory


Chapter 6: Backpropagation


Chapter 7: Multilayer Perceptron


Chapter 8: Learning Rule


Chapter 9: Feature Learning


Chapter 10: Types of artificial neural networks


(II) Answers to the public's top questions about competitive learning.


(III) Real-world examples of the use of competitive learning in many fields.


Who This Book Is For


Professionals, undergraduate and graduate students, enthusiasts, hobbyists, and anyone who wants to go beyond basic knowledge of competitive learning.


What Is Artificial Intelligence Series


The artificial intelligence book series provides comprehensive coverage of over 200 topics. Each ebook covers a specific artificial intelligence topic in depth, written by experts in the field. The series aims to give readers a thorough understanding of the concepts, techniques, history, and applications of artificial intelligence. Topics covered include machine learning, deep learning, neural networks, computer vision, natural language processing, robotics, ethics, and more. The ebooks are written for professionals, students, and anyone interested in learning about the latest developments in this rapidly advancing field.
The artificial intelligence book series provides an in-depth yet accessible exploration, from the fundamental concepts to the state-of-the-art research. With over 200 volumes, readers gain a thorough grounding in all aspects of Artificial Intelligence. The ebooks are designed to build knowledge systematically, with later volumes building on the foundations laid by earlier ones. This comprehensive series is an indispensable resource for anyone seeking to develop expertise in artificial intelligence.

Language: English
Release date: Jun 21, 2023

    Book preview

    Competitive Learning - Fouad Sabry

    Chapter 1: Competitive learning

    In artificial neural networks, competitive learning is a form of unsupervised learning in which nodes compete for the right to respond to a subset of the input data. As a variant of Hebbian learning, its goal is to increase the specialization of each node in the network. It is particularly useful for finding clusters within data sets.

    Models and algorithms based on the principle of competitive learning include vector quantization and self-organizing maps (Kohonen maps).

    A rule for competitive learning may be broken down into three primary components:

    A set of neurons that are identical except for their randomly assigned synaptic weights, so that they respond differently to a given set of input patterns.

    A limit imposed on the "strength" of each neuron.

    A mechanism that lets the neurons compete for the right to respond to a given subset of inputs, so that only one output neuron (or only one neuron per group) is active ("on") at a time. The neuron that wins the competition is called a winner-take-all neuron.

    As a result, the individual neurons of the network learn to specialize on sets of similar patterns and thereby become "feature detectors" for different classes of input patterns.

    Because competitive networks recode sets of correlated inputs to one of a few output neurons, they effectively remove the redundancy in representation, an essential part of processing in biological sensory systems.

    Competitive learning is usually implemented with neural networks that contain a hidden layer commonly known as the competitive layer.

    Every competitive neuron is described by a vector of weights

    \mathbf{w}_i = (w_{i1}, \dots, w_{id})^{T}, \quad i = 1, \dots, M

    and calculates the similarity measure between the input data

    \mathbf{x}^{n} = (x_{n1}, \dots, x_{nd})^{T} \in \mathbb{R}^{d}

    and the weight vector w_i.

    For each input vector, the competitive neurons compete to determine which of them is the most similar to that input vector.

    The winner neuron m sets its output o_m = 1, and all the other competitive neurons set their outputs o_i = 0 for i = 1, ..., M with i ≠ m.

    Usually, the similarity is measured by the inverse of the Euclidean distance ||x^n - w_i|| between the input vector x^n and the weight vector w_i.
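
    The following is a minimal sketch of this winner-take-all computation, assuming NumPy; the names (weights, x, outputs) are illustrative, not taken from the book.

```python
import numpy as np

rng = np.random.default_rng(0)
M, d = 3, 4                     # M competitive neurons, d-dimensional inputs
weights = rng.random((M, d))    # rows are the weight vectors w_i

x = rng.random(d)               # one input vector x^n

# Similarity is the inverse of the Euclidean distance, so the winner m
# is the neuron whose weight vector is closest to x.
distances = np.linalg.norm(weights - x, axis=1)
m = int(np.argmin(distances))

outputs = np.zeros(M)           # o_i = 0 for every losing neuron ...
outputs[m] = 1.0                # ... and o_m = 1 for the winner
print(m, outputs)
```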

    A simple approach to competitive learning, with the goal of locating three clusters within given input data, is outlined in the three steps below; a brief code sketch follows them.

    1. (Set-up.) Let a set of sensors all feed into three distinct nodes, so that every node is connected to every sensor. Let the weight that each node assigns to each of its sensors be set to a random value between 0.0 and 1.0. Let the output of each node be the sum over all its sensors of the sensor's signal strength multiplied by its weight.

    2. When the network is presented with a new input, the node with the largest output is declared the winner. The input is assigned to the cluster corresponding to that node.

    3. The winner updates each of its weights, shifting weight away from the connections that gave it weaker signals and toward the connections that gave it stronger signals.

    As a result, as more data are obtained, each node moves closer to the center of the cluster it has come to represent and activates more strongly for inputs belonging to this cluster and more weakly for inputs belonging to other clusters.
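
    A minimal sketch of these three steps, assuming NumPy, a toy data set drawn around three centers, and the common update Δw = η·(x − w) as a concrete form of step 3; all names and the learning rate are illustrative choices, not a prescription from the book.

```python
import numpy as np

rng = np.random.default_rng(42)
n_nodes, d = 3, 2                       # three nodes, two sensors per input
W = rng.random((n_nodes, d))            # step 1: random weights in [0.0, 1.0)
lr = 0.1                                # assumed learning rate

# Toy inputs drawn around three centers stand in for the sensor signals.
centers = np.array([[0.2, 0.2], [0.8, 0.2], [0.5, 0.9]])
data = np.vstack([c + 0.05 * rng.standard_normal((50, d)) for c in centers])
rng.shuffle(data)

for x in data:
    outputs = W @ x                     # step 1: weighted sum of sensor signals
    winner = int(np.argmax(outputs))    # step 2: node with the greatest output
    W[winner] += lr * (x - W[winner])   # step 3: shift weights toward this input

print(W)                                # inspect the learned weight vectors
```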

    {End Chapter 1}

    Chapter 2: Self-organizing map

    A self-organizing map (SOM) or self-organizing feature map (SOFM) is an unsupervised machine learning technique used to produce a low-dimensional (typically two-dimensional) representation of a higher dimensional data set while preserving the topological structure of the data.

    For example, a data set with p variables measured in n observations could be represented as clusters of observations with similar values for the variables.

    These clusters then could be visualized as a two-dimensional map such that observations in proximal clusters have more similar values than observations in distal clusters.

    This can make high-dimensional data easier to visualize and analyze.

    An SOM is a type of artificial neural network but is trained using competitive learning rather than the error-correction learning (e.g., backpropagation with gradient descent) used by other artificial neural networks. The SOM was introduced by the Finnish professor Teuvo Kohonen in the 1980s and therefore is sometimes called a Kohonen map or Kohonen network. SOMs create internal representations reminiscent of the cortical homunculus, a distorted representation of the human body, based on a neurological map of the areas and proportions of the human brain dedicated to processing sensory functions, for different parts of the body.

    Self-organizing maps, like most artificial neural networks, operate in two modes: training and mapping. First, training uses an input data set (the input space) to generate a lower-dimensional representation of the input data (the map space). Second, mapping classifies additional input data using the generated map.

    In most cases, the goal of training is to represent an input space with p dimensions as a map space with two dimensions. Specifically, an input space with p variables is said to have p dimensions. A map space consists of components called nodes or neurons, which are arranged as a hexagonal or rectangular grid with two dimensions. The number of nodes and their arrangement are specified beforehand based on the larger goals of the analysis and exploration of the data.

    Each node in the map space is associated with a weight vector, which is the position of the node in the input space. While nodes in the map space stay fixed, training consists in moving weight vectors toward the input data (reducing a distance metric such as Euclidean distance) without spoiling the topology induced from the map space. After training, the map can be used to classify additional observations for the input space by finding the node with the closest weight vector (smallest distance metric) to the input space vector.
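
    As a rough illustration of the mapping step, here is a hedged sketch of classifying a new observation by its closest node, assuming NumPy and an already trained map stored as a rows × cols × d array of weight vectors; the function name and shapes are illustrative.

```python
import numpy as np

def best_matching_node(som_weights, x):
    """Return the grid coordinates of the node whose weight vector is
    closest (smallest Euclidean distance) to the observation x."""
    distances = np.linalg.norm(som_weights - x, axis=-1)
    return np.unravel_index(np.argmin(distances), distances.shape)

# Toy usage: a 5 x 5 map over 3-dimensional observations.
rng = np.random.default_rng(1)
som_weights = rng.random((5, 5, 3))     # stand-in for trained weight vectors
print(best_matching_node(som_weights, rng.random(3)))
```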

    The goal of learning in the self-organizing map is to cause different parts of the network to respond similarly to certain input patterns. This is partly motivated by how visual, auditory or other sensory information is handled in separate parts of the cerebral cortex in the human brain.

    The weights of the neurons are initialized either to small random values or sampled evenly from the subspace spanned by the two largest principal component eigenvectors. With the latter alternative, learning is much faster because the initial weights already give a good approximation of SOM weights.
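
    A hedged sketch of the principal-component initialization, assuming NumPy; the function name and the even sampling over a [-1, 1] grid in the plane of the two leading components are illustrative choices, not the book's prescription.

```python
import numpy as np

def pca_initial_weights(data, rows, cols):
    """Spread initial weight vectors evenly over the plane spanned by the
    two largest principal components of the data."""
    mean = data.mean(axis=0)
    _, _, vt = np.linalg.svd(data - mean, full_matrices=False)
    pc1, pc2 = vt[0], vt[1]                      # two leading principal components
    a = np.linspace(-1.0, 1.0, rows)[:, None, None]
    b = np.linspace(-1.0, 1.0, cols)[None, :, None]
    grid = mean + a * pc1 + b * pc2              # evenly sampled in the PC plane
    return grid.reshape(rows * cols, -1)

W0 = pca_initial_weights(np.random.default_rng(2).random((100, 5)), rows=4, cols=4)
print(W0.shape)                                  # (16, 5)
```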

    The network must be fed a large number of example vectors that represent, as closely as possible, the kinds of vectors expected during mapping. The examples are usually presented several times, in iterations.

    The training utilizes competitive learning.

    When a training example is fed to the network, its Euclidean distance to all weight vectors is computed.

    The neuron whose weight vector is most similar to the input is called the best matching unit (BMU).

    The weights of the BMU and neurons close to it in the SOM grid are adjusted towards the input vector.

    The magnitude of the change decreases with time and with the grid-distance from the BMU.

    The update formula for a neuron v with weight vector Wv(s) is

    W_{v}(s+1) = W_{v}(s) + \theta(u, v, s) \cdot \alpha(s) \cdot (D(t) - W_{v}(s))

    where s is the step index, t is an index into the training sample, u is the index of the BMU for the input vector D(t), α(s) is a monotonically decreasing learning coefficient, and θ(u, v, s) is the neighborhood function, which gives the distance between the BMU u and the neuron v in step s.
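
    A hedged sketch of one training update using this formula, assuming NumPy, a Gaussian neighborhood function, and exponentially decaying learning rate and neighborhood radius (common choices, not specified in this preview); all names and schedules are illustrative.

```python
import numpy as np

def som_update(W, grid, x, s, n_steps, alpha0=0.5, sigma0=2.0):
    """One step of W_v(s+1) = W_v(s) + theta(u, v, s) * alpha(s) * (x - W_v(s)).

    W: (n_nodes, d) weight vectors; grid: (n_nodes, 2) node coordinates."""
    u = int(np.argmin(np.linalg.norm(W - x, axis=1)))    # best matching unit
    alpha = alpha0 * np.exp(-s / n_steps)                # decreasing learning rate
    sigma = sigma0 * np.exp(-s / n_steps)                # shrinking neighborhood radius
    grid_dist2 = np.sum((grid - grid[u]) ** 2, axis=1)   # squared grid distances to BMU
    theta = np.exp(-grid_dist2 / (2.0 * sigma ** 2))     # Gaussian neighborhood function
    return W + (theta * alpha)[:, None] * (x - W)

# Toy training loop: a 4 x 4 grid of nodes over 3-dimensional inputs.
rng = np.random.default_rng(0)
grid = np.array([(i, j) for i in range(4) for j in range(4)], dtype=float)
W = rng.random((16, 3))
samples = rng.random((200, 3))
for s, x in enumerate(samples):
    W = som_update(W, grid, x, s, n_steps=len(samples))
```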
