Artificial Neural Networks: Fundamentals and Applications for Decoding the Mysteries of Neural Computation
Ebook · 161 pages · 1 hour

About this ebook

What Are Artificial Neural Networks


Artificial neural networks (ANNs), more commonly referred to as neural networks (NNs) or neural nets, are computing systems inspired by the biological neural networks that make up animal brains.


How You Will Benefit


(I) Insights and validations about the following topics:


Chapter 1: Artificial neural network


Chapter 2: Artificial neuron


Chapter 3: Unsupervised learning


Chapter 4: Backpropagation


Chapter 5: Types of artificial neural networks


Chapter 6: Deep learning


Chapter 7: Convolutional neural network


Chapter 8: Long short-term memory


Chapter 9: Recurrent neural network


Chapter 10: History of artificial neural networks


(II) Answers to the public's top questions about artificial neural networks.


(III) Real-world examples of the use of artificial neural networks in many fields.


Who This Book Is For


Professionals, undergraduate and graduate students, enthusiasts, hobbyists, and anyone who wants to go beyond basic knowledge of artificial neural networks.


What Is Artificial Intelligence Series


The Artificial Intelligence eBook series provides comprehensive yet accessible coverage of more than 200 topics, from fundamental concepts to state-of-the-art research. Each ebook covers a specific Artificial Intelligence topic in depth and is written by experts in the field. Topics include machine learning, deep learning, neural networks, computer vision, natural language processing, robotics, ethics, and more. The volumes are designed to build knowledge systematically, with later volumes building on the foundations laid by earlier ones, making the series an indispensable resource for professionals, students, and anyone seeking to develop expertise in this rapidly advancing field.

Language: English
Release date: Jun 21, 2023


    Book preview

    Artificial Neural Networks - Fouad Sabry

    Chapter 1: Artificial neural network

    Computing systems modeled after the biological neural networks that make up animal brains are known as artificial neural networks, or ANNs for short. These networks are more often referred to simply as neural networks, or NNs.

    An ANN is built from a collection of connected units or nodes called artificial neurons, which loosely model the neurons in a biological brain. Each connection, like the synapses in a biological brain, can transmit a signal to other neurons. An artificial neuron receives signals, processes them, and then signals the other neurons connected to it. The signal at a connection is a real number, and the output of each neuron is computed by some non-linear function of the sum of its inputs. The connections are also called edges. Neurons and edges typically have a weight that adjusts as learning proceeds; the weight increases or decreases the strength of the signal at a connection. Neurons may have a threshold such that a signal is sent only if the aggregate signal crosses that threshold. Typically, neurons are organized into layers, and each layer may perform a different transformation on its input. Signals travel from the first layer (the input layer) to the last layer (the output layer), possibly after traversing the layers multiple times.
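
    To make this concrete, here is a minimal Python sketch, written for this preview, of a single artificial neuron that sums its weighted inputs, adds a bias, and passes the result through a non-linear (sigmoid) function; the names and numbers are hypothetical:

        import math

        def neuron_output(inputs, weights, bias):
            """Weighted sum of the inputs plus a bias, squashed by a sigmoid non-linearity."""
            total = sum(w * x for w, x in zip(weights, inputs)) + bias
            return 1.0 / (1.0 + math.exp(-total))  # output is a real number in (0, 1)

        # A neuron with three inputs
        print(neuron_output([0.5, -1.0, 2.0], [0.4, 0.3, 0.9], bias=0.1))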

    Neural networks learn (or are trained) by processing examples, each of which contains a known input and output, forming probability-weighted associations between the two that are stored within the data structure of the net itself. Training a neural network from a given example usually involves determining the difference between the processed output of the network (often a prediction) and a target output; this difference is the error. The network then adjusts its weighted associations according to a learning rule and using this error value. Successive adjustments cause the neural network to produce output increasingly similar to the target output. After a sufficient number of these adjustments, the training can be terminated based on certain criteria. This form of learning is known as supervised learning.
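
    As a sketch of this procedure, the loop below trains a single sigmoid neuron, with the simple delta rule standing in for the learning rule; the examples and starting weights are invented for illustration:

        import math

        def sigmoid(z):
            return 1.0 / (1.0 + math.exp(-z))

        # Examples with known inputs x and known target outputs t
        examples = [([0.0, 1.0], 1.0), ([1.0, 0.0], 0.0)]
        weights, bias, lr = [0.1, -0.2], 0.0, 0.5

        for epoch in range(100):
            for x, t in examples:
                y = sigmoid(sum(w * xi for w, xi in zip(weights, x)) + bias)
                error = t - y                  # difference between target and prediction
                delta = error * y * (1.0 - y)  # delta rule for a sigmoid unit
                weights = [w + lr * delta * xi for w, xi in zip(weights, x)]
                bias += lr * delta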

    Such systems learn to perform tasks by considering examples, generally without being programmed with task-specific rules. For instance, in image recognition, they might learn to identify images that contain cats by analyzing example images that have been manually labeled as cat or no cat, and then using the results to identify cats in other images. They do this without any prior knowledge of cats, such as the fact that cats have fur, tails, whiskers, and cat-like faces. Instead, they automatically generate identifying characteristics from the examples they process.

    It was Marvin Minsky and Seymour Papert who discovered that basic perceptrons were incapable of processing the exclusive-or circuit and that the computers of the time lacked the power to process useful neural networks.

    In 1970, Seppo Linnainmaa published the general method for automatic differentiation (AD) of discrete connected networks of nested differentiable functions.

    ANNs began as an attempt to exploit the architecture of the human brain to perform tasks that conventional algorithmic approaches had difficulty with. They soon reoriented toward improving empirical results, largely abandoning attempts to remain true to their biological origins. Neurons are connected to each other in various patterns, allowing the output of some neurons to become the input of others. The network forms a directed, weighted graph.
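
    To picture the directed, weighted graph, one hypothetical encoding maps each neuron to its outgoing connections and their weights:

        # Each key is a neuron; each entry is a (target neuron, connection weight) edge
        network_graph = {
            "in1": [("h1", 0.4), ("h2", -0.6)],
            "in2": [("h1", 0.2), ("h2", 0.9)],
            "h1":  [("out", 1.1)],
            "h2":  [("out", -0.7)],
        }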

    Artificial neural networks (ANNs) are composed of artificial neurons, which are conceptually derived from biological neurons. Each artificial neuron receives inputs and produces a single output that can be sent to multiple other neurons. The inputs can be the feature values of a sample of external data, such as images or documents, or they can be the outputs of other neurons. The outputs of the final output neurons of the neural net accomplish the task, such as recognizing an object in an image.

    To find the output of a neuron, we take the weighted sum of all its inputs, weighted by the weights of the connections from the inputs to the neuron, and add a bias term to this sum.
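
    Written as a formula, with inputs x_j, connection weights w_j, bias b, and a non-linear activation function \varphi, the neuron's output y is:

        y = \varphi\left( \sum_{j=1}^{n} w_j x_j + b \right)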

    The neurons are typically organized into multiple layers, especially in deep learning. Neurons of one layer connect only to neurons of the immediately preceding and immediately following layers. The layer that receives external data is the input layer; the layer that produces the final result is the output layer. In between them are zero or more hidden layers. Single-layer and unlayered networks are also used. Between two layers, multiple connection patterns are possible. Layers can be fully connected, with every neuron in one layer connecting to every neuron in the next layer. They can be pooling, where a group of neurons in one layer connects to a single neuron in the next layer, thereby reducing the number of neurons in that layer.
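
    The two connection patterns can be sketched in a few lines of NumPy; the layer sizes here are hypothetical:

        import numpy as np

        x = np.random.rand(8)        # outputs of the previous layer (8 neurons)

        # Fully connected: all 8 neurons feed each of the 4 neurons in the next layer
        W = np.random.rand(4, 8)     # one weight per connection
        b = np.zeros(4)
        dense_out = np.tanh(W @ x + b)

        # Pooling: each group of 2 neurons feeds a single neuron in the next layer,
        # reducing 8 values to 4
        pooled_out = x.reshape(4, 2).max(axis=1)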

    A hyperparameter is a constant parameter whose value is set before the learning process begins; the values of the other parameters are derived via learning. Examples of hyperparameters include the learning rate, the number of hidden layers, and the batch size. The values of some hyperparameters can depend on those of other hyperparameters; for instance, the size of some layers can depend on the overall number of layers.
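
    In practice, such hyperparameters are often gathered up front, as in this illustrative (and entirely arbitrary) configuration:

        # Hyperparameters: fixed before training begins
        hyperparams = {
            "learning_rate": 0.01,
            "hidden_layers": 2,
            "batch_size": 32,
            "hidden_size": 64,  # may itself depend on "hidden_layers"
        }
        # The weights and biases, by contrast, are parameters obtained by learning.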

    Learning is the adaptation of the network to better handle a task by considering sample observations. Learning involves adjusting the weights (and, in some cases, the thresholds) of the network to improve the accuracy of the result; this is done by minimizing the observed errors. Learning is complete when examining additional observations does not usefully reduce the error rate. Even after learning, the error rate typically does not reach zero. If, after learning, the error rate is too high, the network usually must be redesigned. Practically, this is done by defining a cost function that is evaluated periodically during learning; as long as its output continues to decline, learning continues. The cost is frequently defined as a statistic whose value can only be approximated rather than computed exactly. Because the outputs are numbers, when the error is low, the difference between the output (almost certainly a cat) and the correct answer (cat) is small. Learning attempts to reduce the total of the differences across the observations. Most learning models can be viewed as a straightforward application of optimization theory and statistical estimation.
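
    A common concrete choice of cost function is the mean squared error, sketched here with made-up numbers; it would be assessed at regular intervals during learning, and training can stop once it no longer falls meaningfully:

        def mse_cost(predictions, targets):
            """Mean squared error: the average squared difference from the desired outputs."""
            return sum((p - t) ** 2 for p, t in zip(predictions, targets)) / len(targets)

        print(mse_cost([0.9, 0.2, 0.7], [1.0, 0.0, 1.0]))  # -> 0.0466...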

    The learning rate defines the size of the corrective steps that the model takes to compensate for errors in each observation. The concept of momentum allows the balance between the gradient and the previous change to be weighted, so that the weight adjustment depends, at least to some degree, on the previous change. A momentum close to 0 emphasizes the gradient, while a value close to 1 emphasizes the most recent change.
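
    A sketch of a momentum-based update, under the common formulation in which each step blends the current gradient with the previous change (the toy cost and all values are hypothetical):

        learning_rate, momentum = 0.01, 0.9
        weight, velocity = 0.5, 0.0

        def gradient(w):
            return 2.0 * (w - 2.0)  # gradient of the toy cost (w - 2)^2

        for step in range(200):
            # momentum near 0 emphasizes the gradient; near 1, the previous change
            velocity = momentum * velocity - learning_rate * gradient(weight)
            weight += velocity

        print(weight)  # approaches 2.0, the minimum of the toy cost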

    While it is possible to define a cost function ad hoc, the choice is usually guided either by the function's desirable properties (such as convexity) or by the fact that it arises from the model itself (for instance, in a probabilistic model, the model's posterior probability can be used as an inverse cost).

    Backpropagation is a method used to adjust the connection weights to compensate for each error found during learning, distributing the error amount efficiently among the connections. Technically, backprop calculates the gradient (the derivative) of the cost function associated with a given state with respect to the weights. The weight updates can be performed via stochastic gradient descent or other methods, such as Extreme Learning Machines and non-connectionist neural networks.
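
    The chain-rule computation can be seen end to end in a minimal example: one sigmoid neuron, a squared-error cost, and stochastic-gradient-descent updates (purely illustrative):

        import math

        x, target = [1.0, 2.0], 1.0
        w, b, lr = [0.1, -0.3], 0.0, 0.1

        for step in range(200):
            # Forward pass
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            y = 1.0 / (1.0 + math.exp(-z))   # sigmoid activation
            cost = (y - target) ** 2

            # Backward pass: chain rule from the cost back to each weight
            dcost_dy = 2.0 * (y - target)
            dy_dz = y * (1.0 - y)            # derivative of the sigmoid
            grad_w = [dcost_dy * dy_dz * xi for xi in x]  # dz/dw_i = x_i
            grad_b = dcost_dy * dy_dz

            # Stochastic gradient descent step
            w = [wi - lr * gi for wi, gi in zip(w, grad_w)]
            b -= lr * grad_b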

    Learning falls into three major categories: supervised learning, unsupervised learning, and reinforcement learning, each corresponding to a particular learning task.

    Supervised learning uses a set of paired inputs and desired outputs. The learning task is to produce the desired output for each input. In this case, the cost function is related to eliminating incorrect deductions. A commonly used cost is the mean-squared error, which tries to minimize the average squared error between the network's output and the desired output. Tasks suited to supervised learning are pattern recognition (also known as classification) and regression (also known as function approximation). The use of supervised learning may
