
Artificial Neural Networks for Renewable Energy Systems and Real-World Applications
Ebook · 494 pages · 4 hours


About this ebook

Artificial Neural Networks for Renewable Energy Systems and Real-World Applications presents current trends for the solution of complex engineering problems in the application, modeling, analysis, and optimization of different energy systems and manufacturing processes. With growing research on the use of neural networks in specific industrial settings, this reference provides a single resource offering a broader perspective on ANNs in renewable energy systems and manufacturing processes.

ANN-based methods have attracted the attention of scientists and researchers in different engineering and industrial disciplines, making this book a useful reference for all researchers and engineers interested in artificial neural networks, renewable energy systems, and manufacturing process analysis.

  • Includes illustrative examples on the design and development of ANNs for renewable energy and manufacturing applications
  • Features computer-aided simulations presented as algorithms, pseudocode, and flowcharts
  • Covers ANN theory for easy reference in subsequent technology-specific sections
Language: English
Release date: Sep 8, 2022
ISBN: 9780128231869

    Book preview

    Artificial Neural Networks for Renewable Energy Systems and Real-World Applications - Ammar Hamed Elsheikh

    Chapter one

    Basics of artificial neural networks

    Rehab Ali Ibrahim¹, Ammar H. Elsheikh², Mohamed Elasyed Abd Elaziz¹ and Mohammed A.A. Al-qaness³; ¹Department of Mathematics, Faculty of Science, Zagazig University, Zagazig, Egypt; ²Production Engineering and Mechanical Design Department, Tanta University, Tanta, Egypt; ³State Key Laboratory for Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan, P.R. China

    Abstract

    Artificial neural networks (ANNs) have been reported as useful predictive tools to model complex engineering systems. ANNs mimic the behavior of the human brain in handling different problems instead of solving intricate mathematical models. They are used as black boxes with excellent capabilities to learn the nonlinear relations between the inputs and outputs of a given system. They also have an enhanced generalization capability for handling unseen data after the learning process. In this chapter, a review of the basics of ANNs is presented. In general, ANNs have received increased attention in recent years, as they have been applied to numerous real-world applications. They have many advantages, such as simplicity and efficiency. The authors introduce the basic mathematical concepts of the multilayer perceptron, the wavelet neural network, the radial basis function network, and the Elman neural network.

    Keywords

    Artificial neural network (ANN); multilayer perceptron neural network; wavelet neural network (WNN)

    Contents


    1.1 Artificial neural networks

    1.2 Types of neural networks

    1.2.1 Multilayer perceptron neural network

    1.2.2 Wavelet neural networks

    1.2.3 Radial basis function

    1.2.4 Elman neural network

    1.2.5 Statistical performance evaluation criteria

    1.3 Conclusion

    References

    1.1 Artificial neural networks

    Artificial neural networks (ANNs) are widely distributed processors made up of basic processing units called neurons [1]. They have a built-in capability for storing experiential knowledge and making it available for use. High-speed information processing, routing capabilities, fault tolerance, adaptiveness, generalization, and robustness are all excellent characteristics of ANNs. These features make ANNs useful tools for modeling, optimizing, and predicting the performance of various engineering systems. As a result, they have been used to solve complex nonlinear engineering problems in a number of real-world applications with acceptable cost and efficient computing time [2–9].

    The neuron model used in many ANN models is made up of a series of links called synapses, each with its own weight, as shown in Fig. 1.1. Each input $x_j$ is multiplied by its corresponding weight. Thereafter, all weighted inputs are summed, and an externally applied bias $b_k$ is added to lower or raise the summation's output $v_k$.

    Figure 1.1 Nonlinear model of a neuron [1].

    Also, the activation function $\varphi(\cdot)$ is applied to the output to limit the amplitude of the output signal $y_k$ to a finite range. This sequence of operations is formulated as follows:

    (1.1) $v_k = \sum_{j=1}^{m} w_{kj}\, x_j + b_k, \qquad y_k = \varphi(v_k)$

    where $k$ represents a neuron, $j$ represents a synapse number, and $m$ is the number of synapses of neuron $k$.
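
    As a concrete illustration, a single neuron implementing Eq. (1.1) can be sketched in a few lines of Python/NumPy; the tanh activation and the numerical values below are illustrative assumptions, not choices made in this chapter.

```python
import numpy as np

# Minimal sketch of the neuron model in Eq. (1.1): inputs are weighted,
# summed with a bias, and passed through an activation function.
# The tanh activation and example values are illustrative assumptions.
def neuron_output(x, w, b, activation=np.tanh):
    v = np.dot(w, x) + b   # local field: v_k = sum_j w_kj * x_j + b_k
    return activation(v)   # y_k = phi(v_k)

x = np.array([0.5, -1.2, 3.0])   # inputs x_j
w = np.array([0.4, 0.1, -0.7])   # synaptic weights w_kj
print(neuron_output(x, w, b=0.2))
```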

    1.2 Types of neural networks

    In this section, we describe four ANN models, including the multilayer perceptron (MLP), wavelet neural network (WNN), radial basis function (RBF), and Elman neural network (ENN).

    1.2.1 Multilayer perceptron neural network

    The following are the fundamental principles of the MLP neural network. In general, this form of ANN has one input layer, multiple hidden layers, and one output layer [10,11], as shown in Fig. 1.2A. Neurons in the input layer are connected to neurons in the hidden layer, and neurons in the hidden layer are connected to neurons in the output layer, but neurons within the same layer are not connected. The neurons in the input layer receive the data and pass them through the subsequent layers until the output layer is reached.

    Figure 1.2 The structure of (A) MLP [12], (B) wavelet neural networks [13], (C) RBF [14], and (D) Elman networks [15].

    This problem can be defined as follows. There are M neurons in the input layer, and the $k_1$-th neuron receives the input $x_{k_1}$, where the output of the neuron can be obtained by:

    (1.2) $y_{k_1} = x_{k_1}, \qquad k_1 = 1, \ldots, M$

    Thus, the output is utilized as an input to the next hidden layer. The output of the neuron in the hidden layers is calculated as:

    (1.3) $y_j^{(h)} = f\left(\sum_{i=1}^{N_{h-1}} w_{ij}^{(h)}\, y_i^{(h-1)} + b_j^{(h)}\right)$

    where $f$ denotes the activation function, and $b_j^{(h)}$ are the biases of the input and hidden layers. Also, $w_{ij}^{(h)}$ denote the weights between the neurons of consecutive layers. $N_0$ is the number of inputs, and $N_h$ is the number of neurons of the $h$th hidden layer, where $h = 1, \ldots, H$ and $H$ represents the number of hidden layers.

    An output neuron can be considered as a weighted sum of all outputs of the neurons of the last hidden layer, as follows:

    (1.4) $y_k = \sum_{j=1}^{N_H} w_{jk}\, y_j^{(H)}, \qquad k = 1, \ldots, L$

    where L represents the number of neurons in the output layer, and $w_{jk}$ represents the weights between the neurons of the last hidden layer and the neurons of the output layer.

    Since the weight values determine the final performance, backpropagation (BP) learning [16] is used to find them. The BP method, however, takes a long time to complete because it requires an iterative training phase. Furthermore, BP is prone to becoming stuck in local minima.
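
    A minimal Python/NumPy sketch of the MLP forward pass of Eqs. (1.2)–(1.4) is given below; the sigmoid activation, the layer sizes, and the random weights are illustrative assumptions (in practice, the weights would be obtained by BP training).

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def mlp_forward(x, hidden_weights, hidden_biases, w_out):
    """MLP forward pass: each hidden layer applies an affine transform
    followed by the activation (Eq. 1.3); the output layer is a plain
    weighted sum (Eq. 1.4)."""
    a = x
    for w, b in zip(hidden_weights, hidden_biases):
        a = sigmoid(w @ a + b)
    return w_out @ a

rng = np.random.default_rng(0)
# Illustrative sizes: 3 inputs -> 5 hidden neurons -> 2 outputs.
hidden_weights = [rng.normal(size=(5, 3))]
hidden_biases = [np.zeros(5)]
w_out = rng.normal(size=(2, 5))
print(mlp_forward(np.array([0.1, 0.5, -0.3]), hidden_weights, hidden_biases, w_out))
```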

    1.2.2 Wavelet neural networks

    WNNs, also known as wavelet networks (WNs), combine wavelet theory with neural networks (NNs) [13]. WNNs inherit the advantages of both the neural network and the wavelet transform. A WNN is a feed-forward neural network with a hidden layer whose activation functions can be drawn from an orthonormal wavelet family [17]. The wavelet neurons are also called wavelons.

    As shown in Fig. 1.2B, the WNN's simplest structure consists of an input layer and an output layer. The hidden layer of the WNN is made up of wavelons, whose input coefficients are the wavelet dilation and translation parameters. A wavelon produces a nonzero output only when the input lies in a small region of the input space. A WNN's output can be interpreted as a weighted linear combination of the wavelet activation functions.

    Thus, the output can be formulated as the following equation:

    (1.5) $y(x) = \sum_{i=1}^{M} w_i\, \psi\!\left(\frac{x - b_i}{a_i}\right)$

    where $b_i$ denotes the translation, and $a_i$ denotes the dilation coefficients.

    Fig. 1.2B portrays the general structure of a WNN with one input and one output. The hidden layer of the WNN is made up of M wavelons. The network's output is a weighted sum of the wavelons' outputs [18].

    (1.6) $y(x) = \sum_{i=1}^{M} w_i\, \psi\!\left(\frac{x - b_i}{a_i}\right) + \bar{y}$

    where the value $\bar{y}$ deals with functions that have a nonzero mean, since the wavelet function $\psi$ has a zero mean. Also, at the highest scale, the value $\bar{y}$ is considered instead of the scaling function $\varphi$. In the WNN, the coefficients $w_i$, $a_i$, and $b_i$ are learned by a training algorithm.
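
    The following Python/NumPy sketch evaluates the WNN output of Eq. (1.6); the Mexican-hat mother wavelet and all parameter values are illustrative assumptions, since the chapter does not fix a particular wavelet family here.

```python
import numpy as np

def mexican_hat(t):
    """'Mexican hat' mother wavelet, one common zero-mean choice for
    wavelon activations (an assumption, not the book's prescribed choice)."""
    return (1.0 - t**2) * np.exp(-t**2 / 2.0)

def wnn_output(x, w, a, b, y_bar=0.0):
    """WNN output as in Eq. (1.6): a weighted sum of M dilated and
    translated wavelets plus a mean term y_bar for nonzero-mean targets."""
    t = (x - b) / a               # translations b_i and dilations a_i
    return np.sum(w * mexican_hat(t)) + y_bar

M = 4                             # number of wavelons (illustrative)
rng = np.random.default_rng(1)
w = rng.normal(size=M)            # output weights w_i
a = np.ones(M)                    # dilation coefficients a_i
b = np.linspace(-1.0, 1.0, M)     # translation coefficients b_i
print(wnn_output(0.3, w, a, b, y_bar=0.5))
```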

    1.2.3 Radial basis function

    The RBF neural network is one of the most efficient NNs, with three layers: input, hidden, and output. It is similar to MLP, as illustrated in Fig. 1.2C [14,19].

    The input layer of the RBF network receives the input data and simply passes it to the hidden layer. The hidden layer then processes the received data and extracts relevant information before sending it to the output layer, which constructs the output data. The most important distinction between the RBF network and the MLP, however, is that the hidden layer of the RBF network uses a radial basis function, typically the Gaussian function, which is formulated as [14]:

    (1.7) $\varphi_j(x_i) = \exp\!\left(-\frac{\lVert x_i - c_j \rVert^2}{2\sigma_j^2}\right)$

    where $\sigma_j$ denotes the width of the $j$th neuron, $x_i$ represents the RBF input, and $c_j$ is the center of the $j$th RBF unit. Also, there are several other functions that can be applied, such as:

    (1.8)

    (1.9)

    (1.10)

    Furthermore, another difference is that the output layer uses a linear function, represented as a weighted sum of the RBF outputs. This can be represented as:

    (1.11) $y_k = \sum_{j=1}^{J} w_{jk}\, \varphi_j(x), \qquad k = 1, \ldots, K$

    where K is the number of outputs, $w_{jk}$ denotes the weight that connects the $j$th node of the hidden layer and the $k$th node of the output layer, and J represents the number of nodes in the hidden layer.
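
    A short Python/NumPy sketch of the RBF forward pass, combining the Gaussian units of Eq. (1.7) with the linear output layer of Eq. (1.11), is shown below; the centers, widths, and weights are random illustrative values (in practice they would be learned from data).

```python
import numpy as np

def rbf_forward(x, centers, widths, w):
    """RBF network forward pass: Gaussian hidden units (Eq. 1.7)
    followed by a linear output layer (Eq. 1.11)."""
    # phi_j(x) = exp(-||x - c_j||^2 / (2 * sigma_j^2))
    d2 = np.sum((centers - x)**2, axis=1)
    phi = np.exp(-d2 / (2.0 * widths**2))
    return w @ phi                # y_k = sum_j w_jk * phi_j(x)

rng = np.random.default_rng(2)
centers = rng.uniform(-1, 1, size=(6, 3))   # centers c_j: 6 units, 3-D input
widths = np.full(6, 0.8)                    # widths sigma_j (illustrative)
w = rng.normal(size=(2, 6))                 # output weights w_jk, 2 outputs
print(rbf_forward(np.array([0.2, -0.4, 0.9]), centers, widths, w))
```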

    1.2.4 Elman neural network

    Generally, the ENN is a recurrent neural network (RNN), a type of ANN in which connections between neurons form a directed cycle, which gives rise to dynamic temporal behavior. RNNs differ from feedforward neural networks in that they handle input sequences using an internal memory. This enables them to perform tasks such as unsegmented handwriting recognition and speech recognition.

    Elman [20] proposed the ENN, in which the outputs of the hidden layer are fed back into it through a recurrent layer, and the number of recurrent neurons equals the number of hidden neurons. Therefore, ENNs have a strong ability to learn and construct temporal and spatial patterns. They also have a dynamic behavior that gives the system a better capability to deal with time-varying properties.

    Each node in the hidden layer is connected to exactly one recurrent node through a fixed weight.

    The structure of ENNs is shown in Fig. 1.2D. They have four layers (input, hidden, recurrent, and output). Let the input and output layers consist of N and M neurons, respectively, and let the number of hidden neurons be denoted by Nh.

    $u(k)$ is the input of the neural network, and $x(k)$ and $x_c(k)$ are the outputs of the hidden and recurrent layers, respectively, as defined in the following equation:

    (1.12) $x(k) = f\big(w1 \cdot u(k-1) + w2 \cdot x_c(k)\big), \qquad x_c(k) = x(k-1)$

    where w1 represents the weight of the connection between the nodes in the input and hidden layers. w2 represents the weight of the connection between the hidden and recurrent layers, and f denotes the transfer function and is represented by the sigmoid function (as described in Table 1.1).

    Table 1.1

    After that, the hidden layer output is calculated, and the output of the neural network y(k) is calculated as:

    (1.13) $y(k) = g\big(w3 \cdot x(k)\big)$

    where w3 denotes the weight between the hidden and output layers, and g represents the transfer function of the output layer.

    Backpropagation is employed in ENNs for the weight-updating process. Furthermore, the error of the network is calculated as [15]:

    (1.14) $E(k) = \frac{1}{2}\big(d(k) - y(k)\big)^2$

    where d(k) denotes the desired output of the u(k) input.
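
    The recurrence of Eqs. (1.12) and (1.13) can be sketched in Python/NumPy as follows; the sigmoid transfer function matches the text, while taking g as the identity and using random weights are illustrative assumptions.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def elman_run(inputs, w1, w2, w3):
    """Unrolls an Elman network over an input sequence: the recurrent
    (context) layer stores a copy of the previous hidden state,
    x_c(k) = x(k-1), as in Eq. (1.12); the output follows Eq. (1.13),
    here with g taken as the identity for simplicity."""
    x = np.zeros(w2.shape[1])     # hidden state, initially zero
    outputs = []
    for u in inputs:
        x_c = x                   # recurrent layer: x_c(k) = x(k-1)
        x = sigmoid(w1 @ u + w2 @ x_c)   # Eq. (1.12)
        outputs.append(w3 @ x)           # Eq. (1.13)
    return outputs

rng = np.random.default_rng(3)
Nh, N, M = 4, 2, 1                # hidden, input, output sizes (illustrative)
w1 = rng.normal(size=(Nh, N))     # input -> hidden weights
w2 = rng.normal(size=(Nh, Nh))    # recurrent -> hidden weights
w3 = rng.normal(size=(M, Nh))     # hidden -> output weights
sequence = [rng.normal(size=N) for _ in range(5)]
print(elman_run(sequence, w1, w2, w3))
```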

    1.2.5 Statistical performance evaluation criteria

    To assess the quality of the predictions of NNs, a set of evaluation metrics is applied. These metrics are the mean square error (MSE), mean absolute error (MAE), mean relative error (MRE), and root mean square error (RMSE). In addition, there are the correlation coefficient (R), coefficient of variance (CV), efficiency coefficient (EC), coefficient of determination (R²), coefficient of residual mass (CRM), and the index of the performance of the model. These measures are defined in Table 1.2, in which $N$ is the number of observations, $y_{obs,i}$ are the observed values, and $y_{pred,i}$ are the desired (predicted) values. $y_{max}$ and $y_{min}$ are the maximum and the minimum observed values, respectively, and $\bar{y}_{obs}$ and $\bar{y}_{pred}$ represent the averages of the observed and predicted values, respectively.

    Table 1.2
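
    As a practical illustration, four of the listed criteria (MSE, MAE, RMSE, and R²) can be computed as in the Python/NumPy sketch below; the formulas used are the conventional definitions, which may differ in detail from those given in Table 1.2.

```python
import numpy as np

def regression_metrics(y_obs, y_pred):
    """Four of the criteria from Section 1.2.5, computed with their
    conventional formulas (an assumption; the book's exact definitions
    are given in Table 1.2)."""
    err = y_obs - y_pred
    mse = np.mean(err**2)                        # mean square error
    mae = np.mean(np.abs(err))                   # mean absolute error
    rmse = np.sqrt(mse)                          # root mean square error
    ss_res = np.sum(err**2)
    ss_tot = np.sum((y_obs - np.mean(y_obs))**2)
    r2 = 1.0 - ss_res / ss_tot                   # coefficient of determination
    return {"MSE": mse, "MAE": mae, "RMSE": rmse, "R2": r2}

y_obs = np.array([1.0, 2.0, 3.0, 4.0])           # observed values (illustrative)
y_pred = np.array([1.1, 1.9, 3.2, 3.8])          # predicted values (illustrative)
print(regression_metrics(y_obs, y_pred))
```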

    1.3 Conclusion

    In this introductory chapter, different types of ANNs have been introduced. These ANN types include MLP, WNN, RBF, and ENN. The structure and mathematical notation of each ANN have been presented. It can be concluded that each of these ANNs has its own advantages that make it suitable for specific applications. In addition, most criteria that are used to assess the quality of the different ANN models have been presented.

    References

    1. Haykin SS, et al. Neural Networks and Learning Machines. Vol. 3 Upper Saddle River, NJ: Pearson; 2009.

    2. Abd Elaziz M, et al. Utilization of Random Vector Functional Link integrated with Marine Predators Algorithm for tensile behavior prediction of dissimilar friction stir welded aluminum alloy joints. Journal of Materials Research and Technology. 2020;9(5):11370–11381.

    3. Babikir HA, et al. Noise prediction of axial piston pump based on different valve materials using a modified artificial neural network model. Alexandria Engineering Journal 2019.

    4. Elaziz MA, Elsheikh AH, Sharshir SW. Improved prediction of oscillatory heat transfer coefficient for a thermoacoustic heat exchanger using modified adaptive neuro-fuzzy inference system. International Journal of Refrigeration 2019.

    5. El-Said EMS, Abd Elaziz M, Elsheikh AH, et al. Machine learning algorithms for improving the prediction of air injection effect on the thermohydraulic performance of shell and tube heat exchanger. Applied Thermal Engineering. 2021;185:116471.

    6. Elsheikh AH, et al. Prediction of laser cutting parameters for polymethylmethacrylate sheets using random vector functional link network integrated with equilibrium optimizer. Journal of Intelligent Manufacturing 2020.

    7. Elsheikh AH, et al. A new artificial neural network model integrated with a cat swarm optimization algorithm for predicting the emitted noise during axial piston pump operation. IOP Conference Series: Materials Science and Engineering. 2020;973:012035.

    8. Elsheikh AH, et al. Utilization of LSTM neural network for water production forecasting of a stepped solar still with a corrugated absorber plate. Process Safety and Environmental Protection. 2021;148:273–282.

    9. Elsheikh AH, et al. Modeling of solar energy systems using artificial neural network: a comprehensive review. Solar Energy. 2019;180:622–639.

    10. Atkinson PM, Tatnall A. Introduction neural networks in remote sensing. International Journal of Remote Sensing. 1997;18(4):699–709.

    11. Yan H, et al. A multilayer perceptron-based medical decision support system for heart disease diagnosis. Expert Systems with Applications. 2006;30(2):272–281.

    12. Ren Y, et al. Random vector functional link network for short-term electricity load demand forecasting. Information Sciences.
