
Eigenface: Exploring the Depths of Visual Recognition with Eigenface
Ebook · 205 pages · 2 hours


About this ebook

What is Eigenface


An eigenface is the name given to a set of eigenvectors when used in the computer vision problem of human face recognition. The approach of using eigenfaces for recognition was developed by Sirovich and Kirby and used by Matthew Turk and Alex Pentland in face classification. The eigenvectors are derived from the covariance matrix of the probability distribution over the high-dimensional vector space of face images. The eigenfaces themselves form a basis set of all images used to construct the covariance matrix. This produces dimension reduction by allowing the smaller set of basis images to represent the original training images. Classification can be achieved by comparing how faces are represented by the basis set.


How you will benefit


(I) Insights and validations about the following topics:


Chapter 1: Eigenface


Chapter 2: Principal component analysis


Chapter 3: Singular value decomposition


Chapter 4: Eigenvalues and eigenvectors


Chapter 5: Eigendecomposition of a matrix


Chapter 6: Kernel principal component analysis


Chapter 7: Matrix analysis


Chapter 8: Linear dynamical system


Chapter 9: Multivariate normal distribution


Chapter 10: Modes of variation


(II) Answering the public's top questions about eigenface.


(III) Real-world examples of the usage of eigenface in many fields.


Who this book is for


Professionals, undergraduate and graduate students, enthusiasts, hobbyists, and those who want to go beyond basic knowledge or information about eigenfaces.

Language: English
Release date: May 14, 2024


    Book preview

    Eigenface - Fouad Sabry

    Chapter 1: Eigenface

    An eigenface (/ˈaɪɡənˌfeɪs/) is the name given to a set of eigenvectors when used in the computer vision problem of human face recognition.

    The eigenvectors are derived from the covariance matrix of the probability distribution over the high-dimensional vector space of face images.

    The eigenfaces themselves form a basis set for the images used to construct the covariance matrix.

    This results in dimension reduction since the original training images can be represented by a more compact set of basis images.

    Faces can be classified by comparing how they are represented in this basis set.

    Finding a compact way to express facial features was the starting point for the eigenface method.

    Using principal component analysis, Sirovich and Kirby showed how to derive a basis set of features from a collection of face images.

    These basis images, known as eigenpictures, can be linearly combined to reconstruct the images in the original training set.

    Given a training set of M images, principal component analysis can produce a basis set of N eigenpictures, where N < M.

    The reconstruction error is reduced by increasing the number of eigenpictures; however, the number needed is always chosen to be smaller than M.

    For example, if N eigenfaces are produced from a given set of M training face images, then each face image can be expressed as a combination of proportions of all N eigenfaces: Face image 1 = (23% of E1) + (2% of E2) + (51% of E3) + ... + (1% of En).

    In 1991, M. Turk and A. Pentland presented the eigenface method of facial recognition, which built on their earlier achievements. They demonstrated a method for computing the eigenvectors of a covariance matrix, which allowed for eigen-decomposition to be performed on a large number of face photos on computers of the time. Traditional principal component analysis proved difficult to apply to high-dimensional face picture datasets. Using matrices proportional to the number of images rather than the number of pixels, Turk and Pentland showed how to extract the eigenvectors.

    After its initial success, the eigenface method was developed further to incorporate preprocessing techniques for increased precision.

    Principal component analysis (PCA) is a mathematical procedure that may be applied to a large collection of face photos to produce a set of eigenfaces. Eigenfaces are created via statistical analysis of many different images of faces and can be thought of informally as a set of standardized face constituents. All human faces can be broken down into these basic building blocks. One's face may consist of the average face plus, say, 10% from eigenface 1, 55% from eigenface 2, and 3% from eigenface 3. Surprisingly, a good approximation of most faces may be obtained with only a small number of eigenfaces combined together. Additionally, far less storage space is needed for each person's face, because the face is saved not as a digital photograph but as a list of values (one value for each eigenface in the database used).
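
    To make this weighted-sum description concrete, here is a minimal MATLAB sketch (the variable names mean_face, eigenfaces, weights, h, and w are illustrative assumptions, not taken from the text) that rebuilds an approximate face from the average face and a set of eigenface proportions:

    % Illustrative sketch: approximate a face as the mean face plus a
    % weighted combination of eigenfaces.
    % Assumed inputs: mean_face is a d-by-1 vector holding the average image,
    % eigenfaces is a d-by-k matrix whose columns are eigenfaces, and
    % weights is a k-by-1 vector of proportions, with d = h * w.
    reconstruction = mean_face + eigenfaces * weights;  % d-by-1 column vector
    imagesc(reshape(reconstruction, [h, w]))             % view as an h-by-w image
    colormap gray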

    The resulting eigenfaces will have a pattern of contrasting light and dark parts. In this fashion, we may isolate individual facial characteristics for detailed analysis and scoring. Symmetry will be rated, as will the presence of facial hair, the position of the hairline, and the proportions of the nose and lips. Some eigenfaces have more difficult-to-detect patterns, and their resulting images may hardly resemble human faces at all.

    Besides face recognition, other applications of the method used to create eigenfaces and use them for recognition include handwriting recognition, lip reading, voice recognition, interpretation of sign language and hand gestures, and analysis of medical images. For this reason, some prefer to use the term eigenimage instead of eigenface.

    Generating a collection of eigenfaces involves:

    Collect a sample of faces to use for training.

    All of the photos used in the training set should have been shot under the same lighting conditions, and they must be normalized so that the eyes and mouths are aligned across all images.

    They must also all be resampled to a common pixel resolution (r × c).

    Each image is treated as a single vector, formed by concatenating the columns of the image's pixels into one column with r × c elements.

    In this specific implementation, it is assumed that all of the training set images are stored in a single matrix T, where each column of the matrix holds one image.

    Remove the mean. Each image in T needs to have the average image a subtracted from it.

    Determine the covariance matrix S's eigenvectors and eigenvalues. Each eigenvector has the same number of components (dimension) as the original photos, making it possible to treat it as another image. This covariance matrix's eigenvectors are referred to as eigenfaces. They point in the directions where the photos deviate most from the average. Since it is possible to efficiently compute the eigenvectors of S without ever computing S explicitly, eigenfaces have practical applications despite this potentially prohibitive computational step.

    Select the principal components.

    Order the eigenvalues and eigenvectors by decreasing magnitude.

    The number of principal components k is determined arbitrarily by setting a threshold ε on the total variance.

    Total variance v = \lambda_1 + \lambda_2 + \cdots + \lambda_n, where n is the number of components.

    k is the smallest number that satisfies \frac{\lambda_1 + \lambda_2 + \cdots + \lambda_k}{v} > \epsilon.

    These eigenfaces can now be used to represent both existing and new faces: a new (mean-subtracted) image is projected onto the eigenfaces, recording how that face differs from the mean face.

    The eigenvalue associated with each eigenface indicates how much the images in the training set vary from the mean image in that direction.

    When only some of the eigenvectors are used to project the image, some detail is lost; nonetheless, the loss is minimized by keeping the eigenfaces with the largest eigenvalues.

    For instance, working with a 100 × 100 image will produce 10,000 eigenvectors.

    In practice, a projection onto between 100 and 150 eigenfaces is usually sufficient for recognizing most faces, so the vast majority of the ten thousand eigenvectors can be discarded.
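
    As a rough sketch of this dimension reduction (not from the original text; the names E, mean_face, and img are assumed for illustration), the MATLAB fragment below projects a mean-subtracted image onto the k retained eigenfaces and keeps only the resulting k weights instead of the full pixel vector:

    % Assumed inputs: img is a d-by-1 image vector, mean_face is the d-by-1
    % mean image, and E is a d-by-k matrix of retained eigenfaces with
    % orthonormal columns, where k << d (e.g. k of roughly 100 to 150).
    wts = E' * (img - mean_face);   % k-by-1 weight vector: the compact representation
    approx = mean_face + E * wts;   % reconstruction from only the k weights
    err = norm(img - approx);       % reconstruction error shrinks as k grows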

    Using the Extended Yale Face Database B, we can see an example of eigenface calculation.

    To limit the required processing power and data storage, the face images are downsampled by a factor of 4 × 4 = 16.

    clear all; close all; load yalefaces

    [h, w, n] = size(yalefaces); d = h * w; % vectorize images

    x = reshape(yalefaces, [d n]); x = double(x); % subtract mean

    mean_matrix = mean(x, 2); x = bsxfun(@minus, x, mean_matrix); % calculate covariance

    s = cov(x'); % obtain eigenvalue & eigenvector

    [V, D] = eig(s); eigval = diag(D); % sort eigenvalues in descending order

    eigval = eigval(end:-1:1); V = fliplr(V); % show mean and 1st through 15th principal eigenvectors

    figure, subplot(4, 4, 1)

    imagesc(reshape(mean_matrix, [h, w]))

    colormap gray

    for i = 1:15

    subplot(4, 4, i + 1)

    imagesc(reshape(V(:, i), [h, w]))

    end

    Although many eigenfaces are produced by the covariance matrix S, only a subset of those are actually needed to represent the vast majority of faces. The first 43 eigenfaces, for instance, may account for 95% of the variance found in all face photos. This can be evaluated with the following calculation:

    % evaluate the number of principal components needed to represent 95% of total variance.

    eigsum = sum(eigval); csum = 0; for i = 1:d

    csum = csum + eigval(i); tv = csum / eigsum; if tv > 0.95

    k95 = i; break

    end; end;

    It is often computationally prohibitive to do PCA directly on the covariance matrix of the images.

    If thumbnails are used, say 100 × 100 pixels, each image is a point in a 10,000-dimensional space and the covariance matrix S is a matrix of 10,000 × 10,000 = 10⁸ elements.

    However, if there are N training examples, the rank of the covariance matrix is at most N − 1, so there will be no more than N − 1 eigenvectors with nonzero eigenvalues.

    If the number of training examples is smaller than the dimensionality of the images, the principal components can be computed more easily, as follows.

    Let T be the matrix of preprocessed training examples, where each column contains one mean-subtracted image.

    The covariance matrix can then be computed as S = TT^T, and the eigenvector decomposition of S is given by

    \mathbf{S}\mathbf{v}_i = \mathbf{T}\mathbf{T}^T\mathbf{v}_i = \lambda_i\mathbf{v}_i

    However, TT^T is a large matrix. Consider instead the eigenvalue decomposition of

    \mathbf{T}^T\mathbf{T}\mathbf{u}_i = \lambda_i\mathbf{u}_i

    If we then pre-multiply both sides of the equation by T, we find that

    \mathbf{T}\mathbf{T}^T\mathbf{T}\mathbf{u}_i = \lambda_i\mathbf{T}\mathbf{u}_i

    This means that, if u_i is an eigenvector of T^T T, then v_i = Tu_i is an eigenvector of S.

    If we have a training set of 300 images of 100 × 100 pixels, the matrix T^T T is a 300 × 300 matrix, which is much more manageable than the 10,000 × 10,000 covariance matrix.

    Notice, however, that the resulting vectors v_i are not normalized; if required, normalization can be applied as an additional step.
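
    A minimal MATLAB sketch of this shortcut, assuming T is the d-by-N matrix of mean-subtracted training images described above (the variable names L, U, and eigenfaces are illustrative):

    % Small-matrix trick: diagonalize the N-by-N matrix T'*T instead of the
    % d-by-d matrix T*T'.
    L = T' * T;                                  % N-by-N, cheap when N << d
    [U, D] = eig(L);                             % eigenvectors u_i of T'*T
    [eigval, idx] = sort(diag(D), 'descend');    % order by eigenvalue
    U = U(:, idx);
    keep = eigval > eps;                         % drop zero-eigenvalue directions
    V = T * U(:, keep);                          % columns T*u_i are eigenvectors of T*T'
    eigenfaces = bsxfun(@rdivide, V, sqrt(sum(V.^2, 1)));  % normalize to unit length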

    Let X denote the d × n data matrix whose column x_i is the i-th mean-subtracted image vector.

    Then, \mathrm{covariance}(X) = \frac{XX^T}{n}

    Supposing X has an SVD, we can write it as:

    X = U\Sigma V^T

    Then the eigenvalue decomposition of XX^T is:

    XX^T = U\Sigma\Sigma^T U^T = U\Lambda U^T,

    where \Lambda = \mathrm{diag}(\text{eigenvalues of } XX^T).

    As a result, it is clear that:

    The eigenfaces are the first k (k ≤ n) columns of U associated with the nonzero singular values.

    The i-th eigenvalue of \mathrm{covariance}(X) = \frac{1}{n}(\text{i-th singular value of } X)^2.

    When doing SVD on the data matrix X, the actual covariance matrix is not required in order to obtain the eigenfaces.
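
    A corresponding MATLAB sketch, assuming X is the d-by-n mean-subtracted data matrix defined above and k is the number of eigenfaces to keep (both names are illustrative):

    % Eigenfaces via SVD of the data matrix, without ever forming X*X'.
    n = size(X, 2);                    % number of images
    [U, S, ~] = svd(X, 'econ');        % economy-size SVD: X = U*S*V'
    eigenfaces = U(:, 1:k);            % first k columns of U (k <= n)
    eigvals = diag(S).^2 / n;          % eigenvalues of the covariance matrix X*X'/n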

    The development of eigenfaces was spurred by the need for improved facial identification. When compared to alternative methods, eigenfaces perform better because of the system's speed and efficiency. Because the eigenface approach focuses on reducing dimensionality, a large number of subjects can be represented with a small amount of data. It is also quite robust as a face-recognition system when photos are drastically shrunk; nevertheless, it starts to fail dramatically when there is a large difference between the observed images and the probe image.

    In face recognition, images in the system's gallery are represented by sets of weights that characterize the contribution of each eigenface to that image.

    When a new face is presented to the system for analysis, the image is projected onto the set of eigenfaces to determine its own weights.

    This gives a collection of weights that characterize the probe face.

    These weights are then compared with the weights of the gallery images to determine the closest match.

    A simple approach is nearest-neighbour classification: the Euclidean distance between the two weight vectors is computed, and the gallery subject with the minimum distance is taken as the closest match.
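
    A minimal MATLAB sketch of this nearest-neighbour step, assuming gallery_weights is a k-by-m matrix holding one weight vector per gallery image and probe_weights is the k-by-1 weight vector of the probe face (both names are assumptions for illustration):

    % Nearest neighbour in face space using Euclidean distance.
    diffs = bsxfun(@minus, gallery_weights, probe_weights);  % k-by-m differences
    dists = sqrt(sum(diffs.^2, 1));                          % 1-by-m Euclidean distances
    [min_dist, best] = min(dists);                           % best = index of closest gallery image
    % A threshold on min_dist can be used to reject probes that do not
    % match any gallery face closely enough.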

    The eigenface technique of facial identification works on the premise that query photos are projected into the face-space spanned by the eigenfaces generated, and the closest match to a face class
