DATA MINING and MACHINE LEARNING: CLUSTER ANALYSIS and kNN CLASSIFIERS. Examples with MATLAB

About this ebook

Data Mining and Machine Learning use two types of techniques: predictive techniques (supervised learning techniques), which train a model on known input and output data so that it can predict future outputs, and descriptive techniques (unsupervised learning techniques), which find hidden patterns or intrinsic structures in input data. Descriptive techniques are used to draw inferences from datasets consisting of input data without labeled responses. Clustering is the most common descriptive technique; it is used for exploratory data analysis to find hidden patterns or groupings in data. Applications for clustering include gene sequence analysis, market research, and object recognition. This book develops descriptive classification techniques (unsupervised learning techniques) related to cluster analysis and kNN classifiers.
Language: English
Publisher: Lulu.com
Release date: Oct 25, 2021
ISBN: 9781794891876

    DATA MINING AND MACHINE LEARNING: CLUSTER ANALYSIS AND kNN CLASSIFIERS.

    Examples with MATLAB

    César Pérez López

    CONTENTS

    DATA MINING AND MACHINE LEARNING TECHNIQUES

    1.1 DATA MINING INTRODUCTION

    1.1.1 Data Mining and Machine Learning Techniques with Matlab

    1.1.2 Train Classification Models in Classification Learner App

    1.1.3 Train Regression Models in Regression Learner App

    1.1.4 Train Neural Networks for Deep Learning

    DESCRIPTIVE CLASSIFICATION TECHNIQUES. HIERARCHICAL CLUSTERING

    2.1 INTRODUCTION TO CLUSTER ANALYSIS

    2.2 Hierarchical Clustering

    2.2.1 Introduction to Hierarchical Clustering

    2.2.2 Algorithm Description

    2.2.3 Similarity Measures

    2.2.4 Linkages

    2.2.5 Dendrograms

    2.2.6 Verify the Cluster Tree

    2.2.7 Create Clusters

    2.3 FUNCTIONS FOR HIERARCHICAL CLUSTERING

    2.3.1 Functions

    2.3.2 cluster

    2.3.3 clusterdata

    2.3.4 cophenet

    2.3.5 inconsistent

    2.3.6 linkage

    2.3.7 pdist

    2.3.8 squareform

    DESCRIPTIVE CLASSIFICATION TECHNIQUES. NON HIERARCHICAL CLUSTERING

    3.1 INTRODUCTION TO NON HIERARCHICAL CLUSTERING

    3.2 k-Means Clustering

    3.2.1 Introduction to k-Means Clustering

    3.2.2 Create Clusters and Determine Separation

    3.2.3 Determine the Correct Number of Clusters

    3.2.4 Avoid Local Minima

    3.3 MATLAB Functions FOR NON HIERARCHICAL CLUSTERING

    3.3.1 kmeans

    3.3.2 kmedoids

    3.3.3 mahal

    CLUSTERING USING GAUSSIAN MIXTURE MODELS AND HIDDEN MARKOV MODELS

    4.1 Gaussian Mixture Models

    4.2 Clustering Using Gaussian Mixture Models

    4.2.1 How Gaussian Mixture Models Cluster Data

    4.2.2 Covariance Structure Options

    4.2.3 Effects of Initial Conditions

    4.2.4 When to Regularize

    4.3 Cluster Data from Mixture of Gaussian Distributions

    4.3.1 Simulate Data from a Mixture of Gaussian Distributions

    4.3.2 Fit the Simulated Data to a Gaussian Mixture Model

    4.3.3 Cluster the Data Using the Fitted GMM

    4.3.4 Estimate Cluster Membership Posterior Probabilities

    4.3.5 Assign New Data to Clusters

    4.4 Cluster Gaussian Mixture Data Using Soft Clustering

    4.5 Tune Gaussian Mixture Models

    4.6 Gaussian Mixture Models FUNCTIONS

    4.6.1 fitgmdist

    4.6.2 cluster

    4.6.3 posterior

    4.6.4 gmdistribution

    4.7 Markov Chains

    4.8 Hidden Markov Models (HMM)

    4.8.1 Introduction to Hidden Markov Models (HMM)

    4.8.2 Analyzing Hidden Markov Models

    DESCRIPTIVE CLASSIFICATION TECHNIQUES. NEAREST NEIGHBORS. KNN CLASSIFIERS

    5.1 Classification Using Nearest Neighbors

    5.1.1 Pairwise Distance Metrics

    5.1.2 k-Nearest Neighbor Search and Radius Search

    5.1.3 Classify Query Data

    5.1.4 Find Nearest Neighbors Using a Custom Distance Metric

    5.2 K-Nearest Neighbor Classification for Supervised Learning

    5.2.1 Construct KNN Classifier

    5.2.2 Examine Quality of KNN Classifier

    5.2.3 Predict Classification Using KNN Classifier

    5.2.4 Modify KNN Classifier

    5.3 Nearest Neighbors FUNCTIONS

    5.3.1 ExhaustiveSearcher

    5.3.2 KDTreeSearcher

    5.3.3 createns

    CLUSTER VISUALIZATION AND EVALUATION

    6.1 INTRODUCTION

    6.2 CLUSTER VISUALIZATION

    6.2.1 dendrogram

    6.2.2 optimalleaforder

    6.2.3 manovacluster

    6.2.4 silhouette

    6.3 CLUSTER EVALUATION

    6.3.1 evalclusters

    6.3.2 addK

    6.3.3 compact

    6.3.4 increaseB

    6.3.5 plot

    Cluster Data with NEURAL NETWORKS

    7.1 NEURAL NETWORK TOOLBOX

    7.2 Using Neural Network Toolbox

    7.3 Automatic Script Generation

    7.4 Neural Network Toolbox Applications

    7.5 Neural Network Design Steps

    7.6 INTRODUCTION TO CLUSTERING WITH NEURAL NETWORKS

    7.7 Using the Neural Network Clustering Tool

    7.8 Using Command-Line Functions

    Cluster with Self-Organizing Map Neural Network

    8.7.1 One-Dimensional Self-Organizing Map

    8.7.2 Two-Dimensional Self-Organizing Map

    8.7.3 Training with the Batch Algorithm

    DATA MINING AND MACHINE LEARNING TECHNIQUES

    The availability of large volumes of data and the generalized use of computer tools have transformed research and data analysis, orienting them towards specialized techniques encompassed under the generic name of Analytics, which includes Multivariate Data Analysis (MDA), Data Mining, Machine Learning and other Business Intelligence techniques.

    Data Mining (or Machine Learning) can be defined as a process of discovering new and significant relationships, patterns and trends when examining large amounts of data. The techniques of Data Mining pursue the automatic discovery of the knowledge contained in the information stored in an orderly manner in large databases. These techniques aim to discover patterns, profiles and trends through the analysis of data using advanced statistical techniques of multivariate data analysis.

    The goal is to allow the researcher-analyst to find a useful solution to the problem raised through a better understanding of the existing data.

    Data Mining and Machine Learning use two types of techniques: predictive techniques (supervised learning techniques), which train a model on known input and output data so that it can predict future outputs, and descriptive techniques (unsupervised learning techniques), which find hidden patterns or intrinsic structures in input data.

    The aim of predictive techniques is to build a model that makes predictions based on evidence in the presence of uncertainty. A predictive algorithm takes a known set of input data and known responses to the data (output) and trains a model to generate reasonable predictions for the response to new data. Predictive techniques use classification and regression techniques to develop predictive models.

    Classification techniques predict categorical responses, for example, whether an email is genuine or spam, or whether a tumor is cancerous or benign. Classification models classify input data into categories. Typical applications include medical imaging, image and speech recognition, and credit scoring.
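
    As a brief illustration (not taken from the chapters that follow), the sketch below trains a k-nearest neighbor classifier on the fisheriris sample data set that ships with the Statistics and Machine Learning Toolbox; the number of neighbors and the query point are illustrative values only.

        % Sketch: kNN classification on the fisheriris sample data
        load fisheriris                                    % meas: 150x4 measurements, species: class labels
        mdl = fitcknn(meas, species, 'NumNeighbors', 5);   % 5-nearest-neighbor classifier
        cvmdl = crossval(mdl);                             % 10-fold cross-validation (default)
        loss  = kfoldLoss(cvmdl)                           % estimated misclassification rate
        label = predict(mdl, [5.9 3.0 5.1 1.8])            % classify a new observation (illustrative values)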

    Regression techniques predict continuous responses, for example, changes in temperature or fluctuations in power demand. Typical applications include electricity load forecasting and algorithmic trading.
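
    For comparison, a minimal regression sketch using the carsmall sample data set (the choice of predictor, response and query value is illustrative):

        % Sketch: predict a continuous response (MPG) from vehicle weight
        load carsmall                              % sample data: Weight, MPG, etc.
        mdl  = fitlm(Weight, MPG)                  % simple linear regression model
        yhat = predict(mdl, 3000)                  % predicted MPG for a 3000 lb car (illustrative)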

    Descriptive techniques find hidden patterns or intrinsic structures in data. They are used to draw inferences from datasets consisting of input data without labeled responses. Clustering is the most common descriptive technique. It is used for exploratory data analysis to find hidden patterns or groupings in data. Applications for clustering include gene sequence analysis, market research, and object recognition. This book develops descriptive classification techniques.

    MATLAB provides tools to help you try out a variety of Data Mining models and choose the best. Both apps and command-line functions are available: some Data Mining tasks are made easier by using apps, and others use command-line features.

    The following systematic Data Mining workflow can help you tackle Data Mining challenges. You can complete the entire workflow in MATLAB.

    Figure: Data Mining workflow (http://es.mathworks.com/help/stats/machinelearningoverviewworkflow.jpg)

    To integrate the best trained model into a production system, you can deploy Statistics and Machine Learning Toolbox machine learning models using MATLAB Compiler. For many models, you can generate C-code for prediction using MATLAB Coder.

    Use the Classification Learner app to train models to classify data using predictive Data Mining techniques. The app lets you explore predictive Data Mining interactively using various classifiers.

    Automatically train a selection of models to help you choose the best one. Model types include decision trees, discriminant analysis, support vector machines, logistic regression, nearest neighbors, and ensemble classification.

    Explore your data, select features, and visualize results.

    Export models to the workspace to make predictions with new data.

    Generate MATLAB code from the app to create scripts, train with new data, work with huge data sets, or modify the code for further analysis.

    By default, the app protects against overfitting by applying cross-validation. Alternatively, you can choose holdout validation.

    Figure: Classification Learner app overview (http://es.mathworks.com/help/stats/mlapp_overview.png)

    For more options, you can use the command-line interface. See Classification.

    Use the Regression Learner app to train models to predict continuous data using predictive Data Mining techniques. The app lets you explore predictive Data Mining techniques interactively using various regression models.

    Automatically train a selection of models to help you choose the best one. Model types include linear regression models, regression trees, Gaussian process regression models, support vector machines, and ensembles of regression trees.

    Explore your data, select features, and visualize results.

    Export models to the workspace to make predictions with new data.

    Generate MATLAB code from the app to create scripts, train with new data, work with huge data sets, or modify the code for further analysis.

    By default, the app protects against overfitting by applying cross-validation. Alternatively, you can choose holdout validation.

    Figure: Regression Learner app overview (http://es.mathworks.com/help/stats/regressionlearneroverview17a.png)

    Neural Network Toolbox (Deep Learning Toolbox from version 18) enables you to perform deep learning with convolutional neural networks for classification, regression, feature extraction, and transfer learning. The toolbox provides simple MATLAB commands for creating and interconnecting the layers of a deep neural network. Examples and pretrained networks make it easy to use MATLAB for deep learning, even without extensive knowledge of advanced computer vision algorithms or neural networks.
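
    For orientation only, the following sketch shows how such a network might be assembled as a layer array; the layer sizes and training options are assumptions for illustration, and training with trainNetwork additionally requires a labeled image datastore, which is omitted here.

        % Sketch: a small convolutional network defined layer by layer
        layers = [
            imageInputLayer([28 28 1])                  % 28x28 grayscale input (assumed size)
            convolution2dLayer(3, 8, 'Padding', 'same')
            reluLayer
            maxPooling2dLayer(2, 'Stride', 2)
            fullyConnectedLayer(10)                     % 10 classes assumed
            softmaxLayer
            classificationLayer];
        options = trainingOptions('sgdm', 'MaxEpochs', 4);
        % net = trainNetwork(imds, layers, options);    % imds: an imageDatastore of labeled images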

    DESCRIPTIVE CLASSIFICATION TECHNIQUES. HIERARCHICAL CLUSTERING

    Cluster analysis is a set of unsupervised learning techniques to find natural groupings and patterns in data. Cluster analysis or clustering is the task of grouping a set of objects in such a way that objects in the same group (called a cluster) are more similar (in some sense or another) to each other than to those in other groups (clusters). It is a main task of exploratory data mining, and a common technique for statistical data analysis, used in many fields, including machine learning, pattern recognition, image analysis, information retrieval, bioinformatics, data compression, and computer graphics.

    Cluster analysis, also called segmentation analysis or taxonomy analysis, partitions sample data into groups or clusters. Clusters are formed such that objects in the same cluster are very similar, and objects in different clusters are very distinct. MATLAB Statistics and Machine Learning Toolbox provides several clustering techniques and measures of similarity (also called distance measures) to create the clusters. Additionally, cluster evaluation determines the optimal number of clusters for the data using different evaluation criteria. Cluster visualization options include dendrograms and silhouette plots.
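
    As a small sketch of these evaluation tools, the code below uses evalclusters to compare candidate numbers of clusters on simulated data and then draws a silhouette plot; the data and the range of cluster counts are illustrative assumptions.

        % Sketch: choose the number of clusters and inspect cluster quality
        rng(1);                                    % for reproducibility
        X = [randn(100,2); randn(100,2)+4];        % two simulated, well-separated groups
        eva = evalclusters(X, 'kmeans', 'silhouette', 'KList', 1:6);
        k = eva.OptimalK                           % suggested number of clusters
        idx = kmeans(X, k);
        silhouette(X, idx)                         % silhouette plot of the chosen solution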

    Besides the term clustering, there are a number of terms with similar meanings, including automatic classification, numerical taxonomy, and typological analysis. The subtle differences are often in the usage of the results: while in data mining, the resulting groups are the matter of interest, in automatic classification the resulting discriminative power is of interest.

    Figure: example of clustered data (gaussianmixturemodelsexample_04)

    Cluster analysis, also called segmentation analysis or taxonomy analysis, creates groups, or clusters, of data. Clusters are formed in such a way that objects in the same cluster are very similar and objects in different clusters are very distinct. Measures of similarity depend on the application.

    Hierarchical Clustering groups data over a variety of scales by creating a cluster tree or dendrogram.

    The tree is not a single set of clusters, but rather a multilevel hierarchy, where clusters at one level are joined as clusters at the next level. This allows you to decide the level or scale of clustering that is most appropriate for your application. The Statistics and Machine Learning Toolbox function clusterdata performs all of the necessary steps for you. It incorporates the pdist, linkage and cluster functions, which may be used separately for more detailed analysis. The dendrogram function plots the cluster tree.
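
    A minimal sketch of this workflow on simulated data (the data and the linkage and cluster-count choices are assumptions for illustration):

        % Sketch: hierarchical clustering step by step, then the clusterdata shortcut
        rng(2);
        X = [randn(20,2); randn(20,2)+3];          % two simulated groups
        D = pdist(X);                              % pairwise Euclidean distances
        Z = linkage(D, 'average');                 % build the cluster tree (average linkage)
        dendrogram(Z)                              % plot the tree
        T = cluster(Z, 'maxclust', 2);             % cut the tree into two clusters
        T2 = clusterdata(X, 'maxclust', 2);        % one-step equivalent: pdist + linkage + cluster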

    k-Means Clustering is a partitioning method. The function kmeans partitions data into k mutually exclusive clusters, and returns the index of the cluster to which it has assigned each observation. Unlike hierarchical clustering, k-means clustering operates on actual observations (rather than the larger set of dissimilarity measures), and creates a single level of clusters. The distinctions mean that k-means clustering is often more suitable than hierarchical clustering for large amounts of data.
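
    A minimal k-means sketch on simulated data (the cluster count, number of replicates, and data are illustrative assumptions):

        % Sketch: partition simulated data into three clusters with kmeans
        rng(3);
        X = [randn(50,2); randn(50,2)+5; randn(50,2)+10];              % three simulated groups
        [idx, C] = kmeans(X, 3, 'Replicates', 5);                      % idx: cluster indices, C: centroids
        gscatter(X(:,1), X(:,2), idx)                                  % color the observations by cluster
        hold on
        plot(C(:,1), C(:,2), 'kx', 'MarkerSize', 12, 'LineWidth', 2)   % mark the centroids
        hold off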

    Clustering using Gaussian mixture models forms clusters by representing the probability density function of observed variables as a mixture of multivariate normal densities. Mixture models of the gmdistribution class use an expectation maximization (EM) algorithm to fit data, which assigns posterior probabilities to each component density with respect to each observation. Clusters are assigned by selecting the component that maximizes the posterior probability. Clustering using Gaussian mixture models is sometimes considered a soft clustering method, because the posterior probabilities indicate that each data point has some probability of belonging to each cluster. Like k-means clustering, Gaussian mixture modeling uses an iterative algorithm that converges to a local optimum. Gaussian mixture modeling may be more appropriate than k-means clustering when clusters have different sizes and correlation within them.
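
    A minimal Gaussian mixture sketch on simulated data (the component means, covariances, and number of components are illustrative assumptions):

        % Sketch: fit a two-component Gaussian mixture, then hard and soft cluster assignments
        rng(4);
        X = [mvnrnd([0 0], eye(2), 200); mvnrnd([4 4], [1 .5; .5 1], 200)];
        gm  = fitgmdist(X, 2);                     % fit the mixture by expectation maximization
        idx = cluster(gm, X);                      % hard assignment: most probable component
        P   = posterior(gm, X);                    % soft assignment: posterior probabilities
        gscatter(X(:,1), X(:,2), idx)              % visualize the hard clustering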

    Hierarchical clustering groups data over a variety of scales by creating a cluster tree or dendrogram. The tree is not a single set of clusters, but rather a multilevel hierarchy, where clusters at one level are joined as clusters at the next level. This allows you to decide the level or scale of clustering that is most appropriate for your application. The Statistics and Machine Learning Toolbox function clusterdata supports agglomerative clustering and performs all of the necessary steps for you. It incorporates the pdist, linkage, and cluster functions, which you can use separately for more detailed analysis. The dendrogram function plots the cluster tree.

    To perform agglomerative hierarchical cluster analysis on a data set using Statistics and Machine Learning Toolbox functions, follow this procedure:

    Find the similarity or dissimilarity between every pair of objects in the data set. In this step, you calculate the distance between objects using the pdist function. The pdist function supports many different ways to compute this measurement.

    Group the objects into a binary, hierarchical cluster tree, linking pairs of objects that are in close proximity using the linkage function.
