Machine Learning: Fundamentals and Applications
Ebook · 151 pages · 1 hour

About this ebook

What Is Machine Learning


Machine learning (ML) is a subfield of computer science that focuses on the study and development of methods that enable computers to "learn." These are methods that make use of data in order to enhance a computer's performance on a certain set of tasks.


How You Will Benefit


(I) Insights and validations about the following topics:


Chapter 1: Machine learning


Chapter 2: Big data


Chapter 3: Self-driving car


Chapter 4: Unsupervised learning


Chapter 5: Supervised learning


Chapter 6: Statistical learning theory


Chapter 7: Computational learning theory


Chapter 8: Automated machine learning


Chapter 9: Differentiable programming


Chapter 10: Reinforcement learning


(II) Answering the public's top questions about machine learning.


(III) Real-world examples of the use of machine learning in many fields.


(IV) 17 appendices explaining, briefly, 266 emerging technologies in each industry, to give a full 360-degree understanding of machine learning's technologies.


Who This Book Is For


Professionals, undergraduate and graduate students, enthusiasts, hobbyists, and anyone who wants to go beyond basic knowledge of machine learning.

Language: English
Release date: Jul 4, 2023


    Book preview

    Machine Learning - Fouad Sabry

    Chapter 1: Machine learning

    Machine learning (ML) is a subfield of computer science that focuses on the study and development of techniques that enable computers to learn, or more specifically, techniques that make use of data in order to enhance a computer's performance on a certain set of tasks.

    When applied to business problems, machine learning is also known as predictive analytics.

    Learning algorithms work on the basis that strategies, algorithms, and inferences that were successful in the past are likely to continue to be successful in the future. Sometimes these inferences are obvious, such as: because the sun has risen every morning for the last 10,000 days, there is a good chance it will rise again tomorrow morning. Other times they are more nuanced, such as: X percent of families have geographically separate species with color variants, so there is a Y percent chance that undiscovered black swans exist.

    The term machine learning was coined in 1959 by Arthur Samuel, an IBM employee and a pioneer in the fields of computer gaming and artificial intelligence.

    Machine learning grew out of the search for artificial intelligence (AI). In the early days of AI as an academic field, a number of researchers were interested in having computers learn from data. They attempted to approach the problem with a variety of symbolic methods, as well as with what were then termed neural networks; these were mostly perceptrons and other models that were later found to be reinventions of the generalized linear models of statistics.

    Machine learning and data mining frequently use the same methods and overlap significantly, but their aims differ: machine learning focuses on prediction, based on known properties learned from the training data, whereas data mining focuses on the discovery of (previously unknown) properties in the data (this is the analysis step of knowledge discovery in databases, or KDD). Data mining uses many machine learning methods, but with distinct goals; conversely, machine learning uses data mining methods as unsupervised learning or as a preprocessing step to improve learner accuracy. The two research communities (which often have separate conferences and separate journals, ECML PKDD being a major exception) are frequently confused with one another because of their differing basic assumptions: in machine learning, performance is usually evaluated with respect to the ability to reproduce known knowledge, while in KDD the key task is the discovery of previously unknown knowledge. Evaluated with respect to known knowledge, an uninformed (unsupervised) method will easily be outperformed by supervised methods, while in a typical KDD task supervised methods cannot be used because of the unavailability of training data.

    Many learning problems are framed as the minimization of some loss function on a training set of examples, which illustrates one of the close links between machine learning and optimization. The loss function expresses the discrepancy between the predictions of the model being trained and the actual problem instances (for example, in classification, one wants to assign a label to instances, and models are trained to correctly predict the pre-assigned labels of a set of examples).
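    As a concrete illustration of loss minimization (a hypothetical sketch, not from the book): a one-parameter linear model fit by gradient descent on the mean squared error over a tiny invented training set.

```python
import numpy as np

# Invented 1-D data, roughly y = 2x; the model is y_hat = w * x and the
# loss function is the mean squared error over the training set.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 3.9, 6.2, 7.8])

w = 0.0          # initial parameter
lr = 0.01        # learning rate
for _ in range(500):
    pred = w * x
    grad = 2 * np.mean((pred - y) * x)  # d/dw of mean((w*x - y)^2)
    w -= lr * grad                      # step against the gradient

loss = np.mean((w * x - y) ** 2)
print(round(w, 2))   # converges close to 2.0
```

    Minimizing the loss on this training set is pure optimization; whether the learned w also works on new inputs is the separate question of generalization discussed next.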

    The objective of generalization is what distinguishes machine learning from optimization: optimization algorithms can reduce the loss on a training set, but machine learning is concerned with reducing the loss on samples that have not been seen before. Characterizing the generalization of various learning algorithms, especially deep learning algorithms, is an active topic of current research.
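    The distinction can be made concrete with a small, hypothetical experiment (data and model degree invented for illustration): a high-degree polynomial can drive the training loss very low, while the loss on held-out samples, which is what machine learning cares about, stays higher.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented data: noisy samples from an underlying line. The learner
# only sees the training half; generalization is judged on the rest.
x = rng.uniform(-1, 1, 40)
y = 3 * x + rng.normal(0, 0.3, 40)
x_train, y_train = x[:20], y[:20]
x_test, y_test = x[20:], y[20:]

# A degree-9 polynomial can drive the training loss very low...
coeffs = np.polyfit(x_train, y_train, 9)
train_loss = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
# ...but what matters is the loss on samples never seen in training.
test_loss = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
```

    The gap between test_loss and train_loss is the generalization gap that learning theory tries to characterize.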

    The primary objective of statistics is to draw conclusions about a population based on information gleaned from a sample, whereas the objective of machine learning is to identify generalizable predictive patterns. Although the two fields share a close relationship in the methods they use, they are fundamentally different. Leo Breiman distinguished two statistical modelling paradigms, the data model and the algorithmic model, where algorithmic model refers, more or less, to machine learning algorithms such as Random Forest.

    Some statisticians have embraced techniques from the subject of machine learning, which has led to the creation of a hybrid discipline that these statisticians term statistical learning.

    Analytical and computational techniques derived from the deep-rooted physics of disordered systems can be extended to large-scale problems, including machine learning; for example, they can be used to analyze the weight space of deep neural networks.

    A core objective of a learner is to generalize from its experience: to perform accurately on new, unseen examples and tasks after having encountered a learning data set. The training examples come from some generally unknown probability distribution, which is considered representative of the space of occurrences, and the learner is tasked with building a general model of this space that enables it to produce sufficiently accurate predictions in new cases.

    Computational learning theory is a branch of theoretical computer science that analyzes the performance of machine learning algorithms, often through the Probably Approximately Correct (PAC) learning model. Because training sets are finite and the future is uncertain, learning theory usually cannot guarantee the performance of algorithms; instead, probabilistic bounds on the performance are common. The bias–variance decomposition is one way to quantify generalization error.
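    The bias–variance decomposition can be estimated by simulation. The sketch below (all quantities invented for illustration) repeatedly draws training sets, fits a deliberately simple constant model, and splits its expected squared error at one test point into bias squared, variance, and irreducible noise.

```python
import numpy as np

rng = np.random.default_rng(1)

f = lambda x: x ** 2          # assumed true function, for illustration
sigma = 0.1                   # noise standard deviation
x0 = 0.8                      # fixed test input

# Fit the same (too-simple) model on many independent training sets:
# it always predicts the mean training label, ignoring x entirely.
preds = []
for _ in range(2000):
    x = rng.uniform(0, 1, 30)
    y = f(x) + rng.normal(0, sigma, 30)
    preds.append(y.mean())
preds = np.array(preds)

bias_sq = (preds.mean() - f(x0)) ** 2   # systematic error of the model class
variance = preds.var()                  # sensitivity to the training draw
noise = sigma ** 2                      # irreducible label noise
expected_error = bias_sq + variance + noise
```

    For this underfitting model the bias term dominates; a more flexible model would trade bias for variance.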

    For the best generalization performance, the complexity of the hypothesis should match the complexity of the function underlying the data. If the hypothesis is less complex than the function, the model has underfit the data. If the complexity of the model is increased in response, the training error decreases. But if the hypothesis is too complex, the model is prone to overfitting, and generalization will be poorer.
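    A minimal sketch of this trade-off, using invented data: as the degree of a fitted polynomial grows, the training error shrinks monotonically, even though the most complex fit is the one most prone to overfitting.

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented data: noisy samples of a nonlinear underlying function.
x = np.linspace(-1, 1, 30)
y = np.sin(3 * x) + rng.normal(0, 0.1, 30)

# Training error is non-increasing in model complexity (degree),
# because each higher-degree model class contains the lower ones.
train_errors = []
for degree in (1, 3, 9):
    coeffs = np.polyfit(x, y, degree)
    train_errors.append(np.mean((np.polyval(coeffs, x) - y) ** 2))
```

    Low training error alone therefore says nothing about generalization; held-out error, as in the earlier sketch, is what reveals overfitting.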

    Learning theorists investigate a variety of topics, including performance bounds, the time complexity of learning, and the feasibility of learning. In computational learning theory, a computation is considered feasible if it can be completed in polynomial time. There are two kinds of time-complexity results: positive results show that a certain class of functions can be learned in polynomial time; negative results show that certain classes cannot be learned in polynomial time.

    Traditional approaches to machine learning may be broadly classified into three categories, corresponding to distinct learning paradigms. These categories are determined by the kind of signal or feedback made available to the learning system:

    The purpose of supervised learning is for the computer to learn a general rule that maps inputs to outputs by being shown example inputs and the intended outputs for those inputs. These examples are provided to the computer by a teacher.

    Unsupervised learning is a kind of machine learning in which the learning algorithm is given no labels and is left to discover structure on its own within the data it is fed. Discovering previously hidden patterns in data may be a goal of unsupervised learning in itself, or it may be a means to an end (feature learning).
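    As a hedged sketch of unsupervised learning (not from the book): Lloyd's k-means algorithm discovering two invented, well-separated groups with no labels provided.

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented unlabeled data: two well-separated blobs of points.
a = rng.normal(0, 0.5, (50, 2))
b = rng.normal(5, 0.5, (50, 2))
X = np.vstack([a, b])

# k-means: initialize centers at two random data points, then iterate.
centers = X[rng.choice(len(X), 2, replace=False)]
for _ in range(20):
    # assign each point to its nearest center...
    labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
    # ...then move each center to the mean of its assigned points
    # (keeping the old center if a cluster happens to be empty).
    centers = np.array([X[labels == k].mean(axis=0) if np.any(labels == k)
                        else centers[k] for k in range(2)])
```

    The algorithm receives only the inputs X; the cluster structure it recovers is exactly the kind of hidden pattern unsupervised learning seeks.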

    In reinforcement learning, a computer program interacts with a dynamic environment in which it must accomplish a certain objective (such as driving a vehicle or playing a game against an opponent). As it navigates the problem space, the program is given feedback analogous to rewards, which it strives to maximize.
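    A minimal reward-driven learner, sketched here with invented numbers, is an epsilon-greedy multi-armed bandit: it receives only reward feedback and gradually concentrates on the action with the highest expected payoff.

```python
import numpy as np

rng = np.random.default_rng(4)

true_means = [0.2, 0.5, 0.8]       # hidden expected reward per action
q = np.zeros(3)                    # the agent's reward estimates
counts = np.zeros(3)

for t in range(2000):
    if rng.random() < 0.1:                 # explore occasionally...
        action = int(rng.integers(3))
    else:                                  # ...otherwise exploit the best estimate
        action = int(np.argmax(q))
    reward = rng.normal(true_means[action], 0.1)
    counts[action] += 1
    q[action] += (reward - q[action]) / counts[action]   # running mean update
```

    The agent is never told which action is best; the feedback signal alone steers its estimates toward the highest-paying action.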

    A mathematical model of a collection of data is constructed using supervised learning algorithms. This model includes both the data inputs and the outputs that are intended. Classification algorithms are used in situations in which the outputs can only take on a certain set of values, while regression algorithms are used in situations in which the outputs may take on any numerical value within a given range. In the case of a classification algorithm that sorts incoming emails, for instance, the input would be an email that has just been received, and the output would be the name of the folder in which the email should be saved.
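    The email-sorting example can be sketched as a toy classifier (the two numeric features and folder labels below are invented for illustration): a nearest-centroid rule maps each input to one of a fixed set of output values, which is what makes it a classification rather than a regression problem.

```python
import numpy as np

# Invented features per email, e.g. (link count, exclamation count),
# with the desired output folder attached to each training example.
train_X = np.array([[0, 1], [1, 0], [1, 1],    # "inbox" examples
                    [8, 9], [9, 7], [7, 8]])   # "spam" examples
train_y = np.array(["inbox", "inbox", "inbox", "spam", "spam", "spam"])

# Summarize each class by the mean of its training examples.
centroids = {label: train_X[train_y == label].mean(axis=0)
             for label in ("inbox", "spam")}

def classify(email_features):
    # Output is one of a fixed set of labels: a classification problem.
    return min(centroids,
               key=lambda c: np.linalg.norm(centroids[c] - email_features))

folder_a = classify(np.array([0, 2]))   # near the "inbox" examples
folder_b = classify(np.array([9, 9]))   # near the "spam" examples
```

    Replacing the discrete folder labels with a numeric target (say, a priority score) would turn the same setup into a regression problem.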

    Similarity learning is a subfield of supervised machine learning that is closely related to regression and classification. The objective of this subfield, however, is to learn from examples by employing a similarity function that evaluates the degree to which two things are comparable or related to one another. It may be used in ranking, recommendation systems, visual identification tracking, face verification, and speaker verification, among other applications.
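    A minimal sketch of a similarity function (hand-chosen here rather than learned, with invented vectors): cosine similarity scores a pair of embedding vectors, in the way a face-verification system might compare two face embeddings.

```python
import numpy as np

def cosine_similarity(u, v):
    # Cosine of the angle between two vectors: 1 = same direction.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Invented embeddings: two of the "same person", one of a different one.
same_person = cosine_similarity(np.array([0.9, 0.1, 0.4]),
                                np.array([0.8, 0.2, 0.5]))
different = cosine_similarity(np.array([0.9, 0.1, 0.4]),
                              np.array([0.1, 0.9, 0.1]))
```

    In actual similarity learning the embedding (and sometimes the similarity function itself) is learned from labeled pairs so that matching pairs score higher than non-matching ones.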

    A collection of data that merely comprises inputs
