Data-Variant Kernel Analysis
Ebook · 482 pages · 4 hours

About this ebook

Describes and discusses variants of kernel analysis methods for the types of data that have been intensively studied in recent years

This book covers kernel analysis topics ranging from the fundamental theory of kernel functions to their applications, and surveys the current status, popular trends, and developments in kernel analysis studies. The author discusses multiple kernel learning algorithms and how to choose the appropriate kernels during the learning phase. Data-Variant Kernel Analysis presents a new pattern analysis framework for different types of data configurations: its chapters treat offline, distributed, online, cloud, and longitudinal data formations, showing how each is used in kernel analysis to classify data and predict future states.

Data-Variant Kernel Analysis:

  • Surveys kernel analysis in traditionally developed machine learning techniques, such as Neural Networks (NN), Support Vector Machines (SVM), and Principal Component Analysis (PCA)
  • Develops group kernel analysis with distributed databases to compare speed and memory usage
  • Explores the possibility of real-time processing by synthesizing offline and online databases
  • Applies the assembled databases to compare cloud computing environments
  • Examines the prediction of longitudinal data with time-sequential configurations

Data-Variant Kernel Analysis is a detailed reference for graduate students as well as electrical and computer engineers interested in pattern analysis and its application in colon cancer detection.

Language: English
Publisher: Wiley
Release date: Apr 27, 2015
ISBN: 9781119019343

    Data-Variant Kernel Analysis - Yuichi Motai

    PREFACE

    Kernel methods have been extensively studied in pattern classification and its applications for the past 20 years. The word kernel carries different meanings in different areas, such as physical science, mathematics, computer science, and even music and business. Within computer science alone, the term is used in several contexts: (i) the central component of most operating systems, (ii) Scheme-like programming languages, and (iii) a function that executes on OpenCL devices. In machine learning and statistics, a kernel is a similarity function used in pattern recognition algorithms. The use of kernel functions for pattern analysis, called kernel analysis (KA), is the central theme of this book. KA uses the kernel trick to replace the explicit feature representation of data with similarities to other data. We cover KA topics ranging from the fundamental theory of kernel functions to applications. The overall structure starts with the survey in Chapter 1. On the basis of the KA data configurations, the remaining chapters consist of Offline KA in Chapter 2, Group KA in Chapter 3, Online KA in Chapter 4, Cloud KA in Chapter 5, and Predictive KA in Chapter 6. Finally, Chapter 7 concludes by summarizing these distinct algorithms.
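
    As a minimal illustration of the kernel trick, the following sketch computes a Gram matrix of pairwise similarities in place of explicit feature coordinates. It is written in Python with NumPy purely for illustration (the book's supplemental code is in MATLAB®); the Gaussian kernel choice and all names are assumptions of this sketch, not the book's:

        import numpy as np

        def rbf_kernel(X, Y, gamma=0.5):
            """Gaussian (RBF) kernel: k(x, y) = exp(-gamma * ||x - y||^2)."""
            # Squared Euclidean distances between all pairs of rows of X and Y.
            sq_dists = (
                np.sum(X**2, axis=1)[:, None]
                + np.sum(Y**2, axis=1)[None, :]
                - 2.0 * X @ Y.T
            )
            return np.exp(-gamma * sq_dists)

        # The Gram matrix of pairwise similarities stands in for explicit
        # feature coordinates: the feature space is never materialized.
        X = np.random.default_rng(0).normal(size=(5, 3))
        K = rbf_kernel(X, X)
        print(K.shape)  # (5, 5): one similarity per pair of samples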

    Chapter 1 surveys the current status, popular trends, and developments in KA studies, so that we can view their functionalities and potential in an organized manner:

    Utilize KA with different types of data configurations, such as offline, online, and distributed, within a pattern analysis framework.

    Adapt KA into the traditionally developed machine learning techniques, such as neural networks (NN), support vector machines (SVM), and principal component analysis (PCA).

    Evaluate KA performance among those algorithms.

    Chapter 2 covers offline learning algorithms, in which KA does not change its approximation of the target function once the initial training phase has been completed. KA mainly deals with two major issues: (i) how to choose appropriate kernels for offline learning during the learning phase, and (ii) how to adapt KA to traditionally developed machine learning techniques such as NN, SVM, and PCA, where the (nonlinear) learning data space is mapped into a linear space via the kernel trick.
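
    To make the offline setting concrete, here is a minimal kernel PCA sketch, following the standard centered-Gram-matrix formulation; it is an illustration under assumed names, not the book's implementation:

        import numpy as np

        def kernel_pca(K, n_components=2):
            """Offline kernel PCA on a precomputed n x n Gram matrix K."""
            n = K.shape[0]
            # Center the Gram matrix in feature space: Kc = H K H, H = I - 1/n.
            H = np.eye(n) - np.ones((n, n)) / n
            Kc = H @ K @ H
            # Symmetric eigendecomposition (eigenvalues in ascending order).
            eigvals, eigvecs = np.linalg.eigh(Kc)
            idx = np.argsort(eigvals)[::-1][:n_components]
            lambdas = np.maximum(eigvals[idx], 1e-12)  # guard tiny values
            alphas = eigvecs[:, idx]                   # expansion coefficients
            # Scores of the training samples on the leading components.
            return Kc @ alphas / np.sqrt(lambdas)

    Because the training set is fixed, the Gram matrix is computed once and the learned components are never revised, which is exactly the offline behavior described above.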

    Chapter 3 covers group KA as a data-distributed extension of the offline learning algorithms. The data used in Chapter 3 are now spread over several databases, and group KA for distributed data is explored to demonstrate big-data analysis with comparable performance in speed and memory usage.
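
    One way to picture the group setting is a block-wise assembly of the global Gram matrix, where each block involves only two databases. The following sketch assumes three hypothetical sites with synthetic data; it illustrates the memory argument, not the book's group KA algorithm:

        import numpy as np

        def rbf(X, Y, gamma=0.5):
            d = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
            return np.exp(-gamma * d)

        # Three hypothetical sites, each holding only its own samples.
        rng = np.random.default_rng(1)
        databases = [rng.normal(size=(n, 3)) for n in (4, 6, 5)]

        # Block (i, j) of the global Gram matrix needs only the data of
        # sites i and j, so no site ever holds the full dataset in memory.
        blocks = [[rbf(Xi, Xj) for Xj in databases] for Xi in databases]
        K = np.block(blocks)
        print(K.shape)  # (15, 15)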

    Chapter 4 covers online learning algorithms, in which KA allows the feature space to be updated as training proceeds and more data are fed into the algorithm. The feature space update can be incremental or nonincremental. In an incremental update, the feature space is augmented with new features extracted from the new data, expanding the feature space when necessary (a sketch of this incremental update follows the list below). In a nonincremental update, the dimension of the feature space remains constant, as the newly computed features may replace some of the existing ones. In this chapter, we also identify the following possibilities of online learning:

    Synthesize offline learning and online learning using KA, which suggests other connections and potential impact both on machine learning and on signal processing.

    Extend KA with different types of data configurations, from offline to online for pattern analysis framework.

    Apply KA to practical learning settings, such as biomedical image data.
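
    The incremental update mentioned above can be sketched as a growing Gram matrix: when one new sample arrives, only one new row and column of kernel evaluations are needed, and all previously computed entries are reused. This is again a Python illustration under assumed names, not the book's algorithm:

        import numpy as np

        def rbf(X, Y, gamma=0.5):
            d = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
            return np.exp(-gamma * d)

        def grow_gram(K, X_old, x_new):
            """Extend an n x n Gram matrix to (n+1) x (n+1) for one new
            sample, reusing every previously computed entry."""
            k_cross = rbf(X_old, x_new[None, :])          # new column
            k_self = rbf(x_new[None, :], x_new[None, :])  # new diagonal entry
            return np.block([[K, k_cross], [k_cross.T, k_self]])

        rng = np.random.default_rng(2)
        X = rng.normal(size=(10, 3))
        K = rbf(X, X)
        x_new = rng.normal(size=3)
        K = grow_gram(K, X, x_new)  # only n+1 new kernel evaluations
        print(K.shape)              # (11, 11)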

    Chapter 5 covers the cloud data configuration. The objective of this cloud network setting is to deliver an extension of distributed data. KA from both the offline and the online learning perspectives is carried out in the cloud, giving more precise treatment to nonlinear pattern recognition without unnecessary computational complexity. This latest trend in big-data analysis may stimulate the emergence of cloud studies in KA that validate their efficiency on practical data.

    Chapter 6 covers longitudinal data, using KA to predict future states. A time-transitional relationship between online learning and prediction techniques is explored so that KA can be applied to adaptive prediction from online learning. The prediction performance over different time periods is evaluated in comparison with KA alternatives.
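
    As one concrete (and deliberately simplified) reading of prediction from longitudinal data, the following sketch trains kernel ridge regression on sliding windows of a time series and predicts the value one step past the observed data. The window length, regularization, and kernel are illustrative assumptions; the book's predictive KA is not limited to this scheme:

        import numpy as np

        def rbf(X, Y, gamma=0.5):
            d = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
            return np.exp(-gamma * d)

        # Turn a scalar time series into (window -> next value) pairs.
        series = np.sin(np.linspace(0, 20, 200))
        w = 5
        X = np.stack([series[i:i + w] for i in range(len(series) - w)])
        y = series[w:]

        # Kernel ridge regression: alpha = (K + lam*I)^{-1} y.
        lam = 1e-3
        K = rbf(X, X)
        alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)

        # Predict the state one step beyond the end of the series.
        x_query = series[-w:][None, :]
        y_next = (rbf(x_query, X) @ alpha).item()
        print(y_next)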

    Chapter 7 summarizes these distinct data formations used for KA. The data handling issues and potential advantages of the data-variant KAs are listed. The supplemental material includes MATLAB® code in the Appendix.

    The book is not chronological, so the reader can start from any chapter. The chapters are self-contained yet relevant to one another; the author has organized each chapter assuming the reader has not read the others.

    ACKNOWLEDGMENTS

    This study was supported in part by the School of Engineering at Virginia Commonwealth University and the National Science Foundation.

    The author would like to thank his colleagues for the effort and time they spent on this study:

    Dr. Hiroyuki Yoshida for providing valuable colon cancer datasets for experimental results.

    Dr. Alen Docef for his discussion and comments dealing with joint proposal attempts.

    The work reported herein would not have been possible without the help of many of the past and present members of his research group, in particular:

    Dr. Awad Mariette, Lahiruka Winter, Dr. Xianhua Jiang, Sindhu Myla, Dr. Dingkun Ma, Eric Henderson, Nahian Alam Siddique, Ryan Meekins, and Jeff Miller.

    CHAPTER 1

    SURVEY¹

    1.1 INTRODUCTION OF KERNEL ANALYSIS

    Kernel methods have been widely studied for pattern classification and multidomain association tasks [1–3]. Kernel analysis (KA) enables kernel functions to operate in the feature space without ever computing the coordinates of the data in that space; instead, it simply computes the inner products between the images of all pairs of data in the feature space [4, 5]. This operation is often computationally cheaper than the explicit computation of the coordinates [3, 6, 7]. This approach is called the kernel trick. Kernel functions have been introduced for sequence data, graphs, text, and images, as well as vectors [8–14].
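
    A standard worked instance of this identity (not taken from the book) uses the homogeneous polynomial kernel of degree 2 on $\mathbb{R}^2$, $k(x, y) = (x^\top y)^2$. With the explicit feature map $\phi(x) = (x_1^2,\ \sqrt{2}\,x_1 x_2,\ x_2^2)$,

    $$\langle \phi(x), \phi(y) \rangle = x_1^2 y_1^2 + 2\,x_1 x_2 y_1 y_2 + x_2^2 y_2^2 = (x_1 y_1 + x_2 y_2)^2 = (x^\top y)^2 = k(x, y),$$

    so the inner product in the three-dimensional feature space is obtained directly from the two-dimensional inputs, without ever forming $\phi$.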

    Kernel feature analysis has attracted significant attention in the fields of both machine learning and signal processing [10, 15]; thus there is demand for coverage of this state-of-the-art topic [16]. In this survey, we identify the following popular trends and developments in KA, so that we can view their merits and potential in an organized manner:

    Yield nonlinear filters in the input space to open up many possibilities for optimum nonlinear system design.

    Adapt KA into the traditionally developed machine learning techniques for nonlinear optimal filter implementations.

    Explore kernel selection for distributed databases including solutions of heterogeneous issues.

    Constructing composite kernels is an anticipated solution to heterogeneous data problems. A composite kernel is more relevant to the dataset and adapts itself by adjusting its combination coefficients, thus allowing more flexibility in the kernel choice [3, 8, 17–20].
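
    A minimal sketch of a composite kernel follows, assuming a fixed convex combination of a Gaussian and a polynomial kernel; in multiple kernel learning the weights would instead be learned from data, and the names and parameter values here are illustrative:

        import numpy as np

        def composite_kernel(X, Y, weights=(0.6, 0.4), gamma=0.5, degree=2):
            """Convex combination of an RBF and a polynomial kernel.
            Adjusting `weights` adapts the composite kernel to the data."""
            d = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
            k_rbf = np.exp(-gamma * d)
            k_poly = (1.0 + X @ Y.T) ** degree
            w1, w2 = weights
            # A nonnegative weighted sum of positive-semidefinite kernels
            # is itself a valid (positive-semidefinite) kernel.
            return w1 * k_rbf + w2 * k_poly

        X = np.random.default_rng(3).normal(size=(6, 4))
        K = composite_kernel(X, X)  # (6, 6) valid Gram matrix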

    The key idea behind the KA method is to allow the feature space to be updated as the training proceeds, with more data being fed into the algorithm [15, 21–26]. This feature space update can be incremental or nonincremental. In an incremental update, the feature space is augmented with new features extracted from the new data, expanding the feature space when necessary [21–27]. In a nonincremental update, the dimension of the feature space remains constant, as the newly computed features may replace some of the existing ones [8, 19, 20]. In this survey, we also identify the following possibilities:

    A link between offline learning and online learning using KA framework, which suggests other connections and a potential impact on both machine learning and signal processing.

    A relationship between online learning and prediction techniques to merge them together for an adaptive prediction from online learning.

    An online novelty detection with KA as an extended application of prediction algorithms from online learning. The algorithms listed in this survey are capable of operating with kernels, including support vector machines (SVMs) [12, 28–34], Gaussian processes [35–38], Fisher's linear discriminant analysis (LDA) [19, 39], principal component analysis (PCA) [3, 9–11, 20, 22, 24–27, 40], spectral clustering [41–47], linear adaptive filters
