Support Vector Machine: Fundamentals and Applications
Ebook · 106 pages · 59 minutes


About this ebook

What Is a Support Vector Machine


In the field of machine learning, support vector machines (SVMs) are supervised learning models, with associated learning algorithms, that analyze data for classification and regression analysis. They were developed by Vladimir Vapnik and his colleagues at AT&T Bell Laboratories. Because they are founded on statistical learning frameworks, or the VC theory developed by Vapnik and Chervonenkis (1974), support vector machines are among the most accurate prediction methods. Given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that assigns new examples to one category or the other, making it a non-probabilistic binary linear classifier. The SVM maps training examples to points in space so as to maximize the width of the gap between the two categories. New examples are then mapped into that same space and predicted to belong to a category based on which side of the gap they fall on.


How You Will Benefit


(I) Insights and validations about the following topics:


Chapter 1: Support vector machine


Chapter 2: Linear classifier


Chapter 3: Perceptron


Chapter 4: Projection (linear algebra)


Chapter 5: Linear separability


Chapter 6: Kernel method


Chapter 7: Sequential minimal optimization


Chapter 8: Least-squares support vector machine


Chapter 9: Hinge loss


Chapter 10: Polynomial kernel


(II) Answers to the public's top questions about support vector machines.


(III) Real-world examples of the use of support vector machines in many fields.


(IV) 17 appendices explaining, briefly, 266 emerging technologies in each industry, for a 360-degree understanding of support vector machine technologies.


Who This Book Is For


Professionals, undergraduate and graduate students, enthusiasts, hobbyists, and anyone who wants to go beyond basic knowledge of support vector machines.

Language: English
Release date: Jun 23, 2023


    Book preview

    Support Vector Machine - Fouad Sabry

    Chapter 1: Support vector machine

    Support vector machines (SVMs), often known as support vector networks, are a kind of supervised machine learning model. SVMs are among the most accurate prediction approaches because they are based on statistical learning frameworks, or VC theory, introduced by Vapnik and Chervonenkis (1974) and developed further by Vapnik (1982, 1995) and Vapnik et al. (1997). Given a set of training examples, each labeled as belonging to one of two categories, an SVM training algorithm builds a non-probabilistic binary linear classifier: a model that assigns new examples to one of the two categories (although methods such as Platt scaling exist for using an SVM in a probabilistic classification setting). The SVM maps training examples to points in space so as to maximize the width of the gap between the two categories. New examples are then mapped into the same space, and a prediction is made about which category they belong to depending on which side of the gap they fall on.
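    As a rough illustration of this workflow (a sketch, not material from the book), the code below trains a binary SVM classifier on synthetic two-cluster data and uses Platt scaling to obtain class probabilities. scikit-learn, the toy data, and all parameter values are assumptions made purely for the example.

        # Minimal sketch: fit a binary SVM and read off both the hard class
        # assignment and, via Platt scaling, a probabilistic prediction.
        import numpy as np
        from sklearn.datasets import make_blobs
        from sklearn.svm import SVC

        # Two synthetic clusters, one per category.
        X, y = make_blobs(n_samples=100, centers=2, random_state=0)

        # probability=True enables Platt scaling: a sigmoid is fitted on top of
        # the SVM scores so that predict_proba returns class probabilities.
        clf = SVC(kernel="linear", C=1.0, probability=True, random_state=0)
        clf.fit(X, y)

        new_point = np.array([[0.0, 0.0]])
        print(clf.predict(new_point))        # which side of the gap the point falls on
        print(clf.predict_proba(new_point))  # probabilistic output via Platt scaling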

    In addition to performing linear classification, support vector machines can also perform non-linear classification efficiently by using a technique known as the kernel trick, which implicitly maps their inputs into high-dimensional feature spaces.
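    To make the kernel trick concrete, here is a small sketch (not from the book) in which the same SVM interface handles a curved decision boundary simply by swapping the linear kernel for an RBF kernel; scikit-learn, the "two moons" toy data, and the gamma value are illustrative assumptions.

        # Sketch of the kernel trick: an RBF kernel implicitly maps the inputs
        # into a high-dimensional feature space, so a linear separator there
        # corresponds to a non-linear boundary in the original space.
        from sklearn.datasets import make_moons
        from sklearn.svm import SVC

        X, y = make_moons(n_samples=200, noise=0.1, random_state=0)

        linear_svm = SVC(kernel="linear").fit(X, y)        # classes are not linearly separable
        rbf_svm = SVC(kernel="rbf", gamma=1.0).fit(X, y)   # kernel trick handles the curve

        print("linear accuracy:", linear_svm.score(X, y))
        print("rbf accuracy:   ", rbf_svm.score(X, y))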

    To classify unlabeled data, Hava Siegelmann and Vladimir Vapnik devised an algorithm known as support vector clustering, which makes use of the statistics of support vectors first established in the support vector machines technique. Such data sets call for unsupervised learning algorithms, which look for a natural grouping of the data into clusters and then use those clusters to guide the mapping of new data.

    In machine learning, one of the most frequent tasks is data classification.

    Suppose we have some data points, each of which belongs to one of two categories, and the objective is to determine which category a new data point will fall into.

    In the context of support vector machines, a data point is viewed as a p-dimensional vector (a list of p numbers), and we want to know whether we can separate such points with a (p-1)-dimensional hyperplane.

    One may refer to this as a linear classifier.

    There are many hyperplanes that could be used to classify the data.

    One reasonable choice for the hyperplane is the one that provides the greatest separation, or margin, between the two classes.

    Therefore, we choose the hyperplane so that the distance from it to the nearest data point on each side is maximized.

    If such a hyperplane exists, it is known as the maximum-margin hyperplane, and the linear classifier it defines is called a maximum-margin classifier; or equivalently, the perceptron of optimal stability.
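    A brief sketch (again an illustration, not the book's own example) may help here: for a linear SVM the learned hyperplane is w·x + b = 0, the margin width is 2/||w||, and the nearest points on either side are the support vectors. scikit-learn, the synthetic data, and the large C value (approximating a hard margin) are assumptions.

        # Sketch of the maximum-margin hyperplane for a linear SVM.
        import numpy as np
        from sklearn.datasets import make_blobs
        from sklearn.svm import SVC

        X, y = make_blobs(n_samples=60, centers=2, cluster_std=0.8, random_state=1)

        clf = SVC(kernel="linear", C=1e6)   # very large C approximates a hard-margin classifier
        clf.fit(X, y)

        w = clf.coef_[0]                    # normal vector of the hyperplane w.x + b = 0
        b = clf.intercept_[0]
        margin = 2.0 / np.linalg.norm(w)    # distance between the two supporting hyperplanes

        print("hyperplane normal:", w, "offset:", b)
        print("margin width:", margin)
        print("support vectors:\n", clf.support_vectors_)  # the closest points on either side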

    More formally, a support vector machine constructs a hyperplane or a set of hyperplanes in a high- or infinite-dimensional space, which can be used for classification, regression, or other tasks such as the identification of outliers.
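    For completeness, here is a small sketch (not from the book; scikit-learn and all parameters are assumptions) showing the same family of models applied to the other two tasks just mentioned: support vector regression and one-class SVM outlier detection.

        # SVMs beyond binary classification: regression and outlier detection.
        import numpy as np
        from sklearn.svm import SVR, OneClassSVM

        rng = np.random.default_rng(0)
        X = np.sort(rng.uniform(0, 5, size=(80, 1)), axis=0)
        y = np.sin(X).ravel() + 0.1 * rng.normal(size=80)

        # Support vector regression: fit a smooth function within an epsilon tube.
        reg = SVR(kernel="rbf", C=10.0, epsilon=0.05).fit(X, y)
        print(reg.predict([[2.5]]))               # predicted value near sin(2.5)

        # One-class SVM: learn the support of the data and flag outliers as -1.
        detector = OneClassSVM(kernel="rbf", nu=0.05, gamma=0.5).fit(X)
        print(detector.predict([[2.5], [25.0]]))  # point inside the data range vs. a far point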

    Although the original problem may be stated in a finite-dimensional space, it is quite common for the sets to be discriminated not to be linearly separable in that space.

    For this reason, it was proposed that the original finite-dimensional space be mapped into a much higher-dimensional space, presumably making the separation easier in that space.

    To keep the computational burden manageable, the mappings used by SVM schemes are designed to ensure that dot products of pairs of input data vectors can be computed easily in terms of the variables in the original space, by defining them in terms of a kernel function k(x, y) selected to suit the problem.
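    The following sketch (an illustration under assumptions, not material from the book) shows one such kernel in action: a degree-2 polynomial kernel k(x, y) = (x·y + 1)² is computed directly in the original 2-dimensional space and then checked against an ordinary dot product after an explicit feature map φ into 6 dimensions. The kernel choice and the numbers are arbitrary.

        # A kernel function evaluated two ways: cheaply in the original space,
        # and as a dot product after the (usually never-computed) feature map.
        import numpy as np

        def poly_kernel(x, y):
            # k(x, y) = (x . y + 1)^2, computed entirely in the original space
            return (np.dot(x, y) + 1.0) ** 2

        def phi(x):
            # explicit feature map for the same kernel (2-D input -> 6-D features)
            x1, x2 = x
            return np.array([x1 * x1, x2 * x2,
                             np.sqrt(2) * x1 * x2,
                             np.sqrt(2) * x1, np.sqrt(2) * x2, 1.0])

        x = np.array([1.0, 2.0])
        y = np.array([3.0, -1.0])
        print(poly_kernel(x, y))        # 4.0, using only original-space variables
        print(np.dot(phi(x), phi(y)))   # 4.0 again, via the higher-dimensional map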

    The hyperplanes in the higher-dimensional space are defined as the set of points whose dot product with a vector in that space is constant, where such a set of vectors is an orthogonal (and therefore minimal) set of vectors that defines a hyperplane.

    The vectors defining the hyperplanes can be chosen to be linear combinations, with parameters α_i, of images of feature vectors x_i that occur in the data base.

    With this choice of hyperplane, the points x in the feature space that are mapped into the hyperplane are defined by the relation

    Σ_i α_i k(x_i, x) = constant.

    Note that if k(x, y) becomes small as y grows further away from x, each term in the sum measures the degree of closeness of the test point x to the corresponding data base point x_i.

    In this way, the sum of kernels can be used to measure how close each test point is to the data points originating in one or the other of the two sets to be discriminated.
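    As a sanity check on this relation (a sketch under assumptions, not the book's code), the sum Σ_i α_i k(x_i, x) + b can be evaluated by hand from a fitted SVM's support vectors and dual coefficients and compared with the library's own decision values; scikit-learn, the RBF kernel, and the toy data are assumptions.

        # Evaluate sum_i alpha_i * k(x_i, x) + b directly from the fitted model.
        import numpy as np
        from sklearn.datasets import make_moons
        from sklearn.metrics.pairwise import rbf_kernel
        from sklearn.svm import SVC

        X, y = make_moons(n_samples=100, noise=0.1, random_state=0)
        clf = SVC(kernel="rbf", gamma=0.5).fit(X, y)

        x_test = X[:5]
        # dual_coef_ holds alpha_i * y_i for each support vector x_i.
        K = rbf_kernel(clf.support_vectors_, x_test, gamma=0.5)
        manual = clf.dual_coef_ @ K + clf.intercept_     # sum_i alpha_i k(x_i, x) + b

        print(manual.ravel())
        print(clf.decision_function(x_test))             # should match the manual sums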

    Note that the set of points x mapped into …
