Eigenface: Exploring the Depths of Visual Recognition with Eigenface
By Fouad Sabry
About this ebook
What is Eigenface
An eigenface is the name given to a set of eigenvectors when used in the computer vision problem of human face recognition. The approach of using eigenfaces for recognition was developed by Sirovich and Kirby and used by Matthew Turk and Alex Pentland in face classification. The eigenvectors are derived from the covariance matrix of the probability distribution over the high-dimensional vector space of face images. The eigenfaces themselves form a basis set of all images used to construct the covariance matrix. This produces dimension reduction by allowing the smaller set of basis images to represent the original training images. Classification can be achieved by comparing how faces are represented by the basis set.
How you will benefit
(I) Insights and validations on the following topics:
Chapter 1: Eigenface
Chapter 2: Principal component analysis
Chapter 3: Singular value decomposition
Chapter 4: Eigenvalues and eigenvectors
Chapter 5: Eigendecomposition of a matrix
Chapter 6: Kernel principal component analysis
Chapter 7: Matrix analysis
Chapter 8: Linear dynamical system
Chapter 9: Multivariate normal distribution
Chapter 10: Modes of variation
(II) Answers to the public's top questions about eigenfaces.
(III) Real-world examples of how eigenfaces are used in many fields.
Who this book is for
Professionals, undergraduate and graduate students, enthusiasts, hobbyists, and anyone who wants to go beyond basic knowledge of eigenfaces.
Book preview
Eigenface - Fouad Sabry
Chapter 1: Eigenface
An eigenface (/ˈaɪɡənˌfeɪs/) is the name given to a set of eigenvectors when used in the computer vision problem of human face recognition.
The high-dimensional facial image vector space has a probability distribution whose covariance matrix provides the eigenvectors.
The eigenfaces themselves form a basis set for the images used to construct the covariance matrix.
This results in dimension reduction since the original training images can be represented by a more compact set of basis images.
Faces can be classified by comparing how they are represented in the basis set.
Finding a compact way to express facial features was the starting point for the eigenface method.
Using principal component analysis, Sirovich and Kirby demonstrated how to derive a basis set of features from a collection of face images. These basis images, known as eigenpictures, can be linearly combined to reconstruct images in the original training set.
Given a training set of M images, principal component analysis can produce a basis set of N images, where N < M. Using more eigenpictures reduces the reconstruction error; however, the number needed is always smaller than M. For example, given M training face images from which N eigenfaces are produced, each face image can be expressed as a weighted combination of all the eigenfaces: Face image1 = (23% of E1) + (2% of E2) + (51% of E3) + ... + (1% of EN).
In 1991, M. Turk and A. Pentland presented the eigenface method of facial recognition, building on the results of Sirovich and Kirby. They demonstrated a way to compute the eigenvectors of the covariance matrix that made eigendecomposition of large collections of face images tractable on the computers of the time. Traditional principal component analysis was difficult to apply directly to high-dimensional face image datasets; Turk and Pentland showed how to extract the eigenvectors using matrices sized by the number of images rather than the number of pixels.
After its initial success, the eigenface method was developed further to incorporate preprocessing techniques for increased precision.
Principal component analysis (PCA) is a mathematical procedure that may be applied to a large collection of face photos to produce a set of eigenfaces. Eigenfaces are created via statistical analysis of many different images of faces and can be thought of informally as a set of standardized face constituents.
All human faces can be decomposed into these basic building blocks. One face might consist of the average face plus, say, 10% of eigenface 1, 55% of eigenface 2, and 3% of eigenface 3. Remarkably, a good approximation of most faces can be obtained by combining only a small number of eigenfaces. Additionally, each person's face takes far less storage space, because it is saved not as a digital photograph but as a list of values (one value for each eigenface in the database).
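The weighted-combination idea can be illustrated with a minimal NumPy sketch. Note the mean face and "eigenfaces" here are random orthonormal stand-ins, not real eigenfaces; the point is only that a face reduces to a short list of weights and can be reconstructed from them:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: a mean face and 3 orthonormal "eigenfaces" over 8 pixels.
mean_face = rng.normal(size=8)
eigenfaces, _ = np.linalg.qr(rng.normal(size=(8, 3)))  # columns are orthonormal

# A face stored compactly as weights (e.g. 10%, 55%, 3% of eigenfaces 1-3).
weights = np.array([0.10, 0.55, 0.03])

# Reconstruct the face image from the compact representation.
face = mean_face + eigenfaces @ weights

# Projecting the mean-subtracted face back onto the eigenfaces recovers
# the stored weights exactly, because the basis columns are orthonormal.
recovered = eigenfaces.T @ (face - mean_face)
print(np.allclose(recovered, weights))  # True
```

Storing three weights instead of eight pixel values is a modest saving here, but with real images the ratio is dramatic: tens of weights versus thousands of pixels.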
The resulting eigenfaces will have a pattern of contrasting light and dark parts. In this fashion, we may isolate individual facial characteristics for detailed analysis and scoring. Symmetry will be rated, as will the presence of facial hair, the position of the hairline, and the proportions of the nose and lips. Some eigenfaces have more difficult-to-detect patterns, and their resulting images may hardly resemble human faces at all.
Besides face recognition, the method used to create eigenfaces and use them for recognition has been applied to handwriting recognition, lip reading, voice recognition, interpretation of sign language and hand gestures, and analysis of medical images. For this reason, some prefer the term eigenimage instead of eigenface.
Generating a collection of eigenfaces involves:
Collect a sample of faces to use for training.
All of the photos in the training set should have been captured under the same lighting conditions, and they must be normalized so that the eyes and mouths are aligned across all images.
They must also all be resampled to a common pixel resolution (r × c).
Each image is treated as a single vector, formed by concatenating the columns of the source image's pixels into one column with r × c elements.
In this specific implementation, the training set images are assumed to be stored in a single matrix T, where each column of the matrix represents one image.
Remove the mean. The average image a must be subtracted from each image in T.
Determine the covariance matrix S's eigenvectors and eigenvalues. Each eigenvector has the same number of components (dimension) as the original photos, making it possible to treat it as another image. This covariance matrix's eigenvectors are referred to as eigenfaces. They point in the directions where the photos deviate most from the average. Since it is possible to efficiently compute the eigenvectors of S without ever computing S explicitly, eigenfaces have practical applications despite this potentially prohibitive computational step.
Choose the principal components.
Sort the eigenvalues and their eigenvectors in decreasing order.
The number of principal components k is determined by setting a threshold ε on the fraction of total variance retained.
Total variance: v = λ1 + λ2 + ... + λn, where n is the number of components.
k is the smallest number that satisfies (λ1 + λ2 + ... + λk) / v > ε
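The generation steps above can be sketched end to end in NumPy (a Python stand-in for the book's MATLAB; random vectors replace real, aligned face images, and the 0.95 threshold for ε is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy training set: n images of r x c pixels, vectorized as columns of T.
r, c, n = 6, 5, 20
d = r * c
images = rng.normal(size=(d, n))          # stand-in for aligned face images
mean_image = images.mean(axis=1, keepdims=True)
T = images - mean_image                   # step: remove the mean

# Covariance matrix of the mean-subtracted images.
S = (T @ T.T) / n

# Eigendecomposition; eigh returns eigenvalues in ascending order,
# so reverse to get them in decreasing order.
eigval, eigvec = np.linalg.eigh(S)
eigval, eigvec = eigval[::-1], eigvec[:, ::-1]

# Keep the smallest k whose eigenvalues exceed the variance threshold.
ratio = np.cumsum(eigval) / eigval.sum()
k = int(np.searchsorted(ratio, 0.95) + 1)
eigenfaces = eigvec[:, :k]                # columns are the retained eigenfaces
print(k, eigenfaces.shape)
```

Each column of `eigenfaces` has d = r × c elements and can be reshaped back into an r × c image, exactly as the MATLAB example later in the chapter does with imagesc.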
These eigenfaces can now be used to represent both existing and new faces: a new (mean-subtracted) image is projected onto the eigenfaces, and the resulting weights record how the image combines them.
The eigenvalues of the corresponding eigenfaces represent how much the images in the training set deviate from the average.
Projecting an image onto only some of the eigenvectors loses some detail, but the loss is minimized by retaining the eigenfaces with the largest eigenvalues.
For instance, working with a 100 × 100 image will produce 10,000 eigenvectors. In practice, a projection onto between 100 and 150 eigenfaces is usually sufficient for recognizing most faces, so the vast majority of the 10,000 eigenvectors can be discarded.
Using the Extended Yale Face Database B, we can see an example of eigenface calculation.
To limit the demands on processing power and data storage, the face images are downsampled by a factor of 4 in each dimension (4 × 4 = 16 times fewer pixels).
clear all; close all; load yalefaces
% vectorize images
[h, w, n] = size(yalefaces);
d = h * w;
x = reshape(yalefaces, [d, n]);
x = double(x);
% subtract mean
mean_matrix = mean(x, 2);
x = bsxfun(@minus, x, mean_matrix);
% calculate covariance
s = cov(x');
% obtain eigenvalues & eigenvectors
[V, D] = eig(s);
eigval = diag(D);
% sort eigenvalues in descending order
eigval = eigval(end:-1:1);
V = fliplr(V);
% show mean and 1st through 15th principal eigenvectors
figure, subplot(4, 4, 1)
imagesc(reshape(mean_matrix, [h, w]))
colormap gray
for i = 1:15
    subplot(4, 4, i + 1)
    imagesc(reshape(V(:, i), [h, w]))
end
Although the covariance matrix S produces many eigenfaces, only a subset of those is actually needed to represent the vast majority of faces. The first 43 eigenfaces, for instance, may account for 95% of the variance found across all face photos. The following code performs the calculation:
% evaluate the number of principal components needed to represent 95% of total variance
eigsum = sum(eigval);
csum = 0;
for i = 1:d
    csum = csum + eigval(i);
    tv = csum / eigsum;
    if tv > 0.95
        k95 = i;
        break
    end
end

It is often computationally prohibitive to perform PCA directly on the covariance matrix of the images.
If thumbnails are used, say 100 × 100 pixels, each image is a point in a 10,000-dimensional space and the covariance matrix S is a matrix of 10,000 × 10,000 = 10⁸ elements.
However, if there are N training examples, the rank of the covariance matrix is at most N − 1, so there are no more than N − 1 eigenvectors with nonzero eigenvalues.
When the number of training examples is smaller than the dimensionality of the images, the principal components can be computed more easily as follows.
Let T be the matrix of preprocessed training examples, where each column is one mean-subtracted image. The covariance matrix can then be computed as S = TT^T, and the eigenvector decomposition of S is given by

Sv_i = TT^T v_i = λ_i v_i

However, TT^T is a large matrix. Consider instead the eigenvalue decomposition of the much smaller matrix T^T T:

T^T T u_i = λ_i u_i

Pre-multiplying both sides of this equation by T, we find that

TT^T (T u_i) = λ_i (T u_i)

Meaning that, if u_i is an eigenvector of T^T T, then v_i = Tu_i is an eigenvector of S. If we have a training set of 300 images of 100 × 100 pixels, the matrix T^T T is a 300 × 300 matrix, which is much more manageable than the 10,000 × 10,000 covariance matrix. Notice, however, that the resulting vectors v_i are not normalized; if required, normalization can be applied as an additional step.
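The small-matrix trick can be verified numerically in a short NumPy sketch (random data stands in for face images; the dimensions d and n are arbitrary, chosen so that d >> n):

```python
import numpy as np

rng = np.random.default_rng(2)

# n mean-subtracted training images of d pixels each, as columns of T (d >> n).
d, n = 400, 10
T = rng.normal(size=(d, n))
T = T - T.mean(axis=1, keepdims=True)

# Solve the small n x n eigenproblem  T^T T u_i = lambda_i u_i  instead of
# the d x d one. eigh returns eigenvalues in ascending order.
lam, U = np.linalg.eigh(T.T @ T)

# Map up: v = T u for the largest eigenvalue, then normalize (extra step).
v = T @ U[:, -1]
v = v / np.linalg.norm(v)

# Check that v is an eigenvector of S = T T^T with the same eigenvalue:
# S v = T T^T T u = T (lambda u) = lambda T u = lambda v.
S = T @ T.T
print(np.allclose(S @ v, lam[-1] * v))  # True
```

Here the eigenproblem solved is 10 × 10 rather than 400 × 400, mirroring the 300 × 300 versus 10,000 × 10,000 comparison in the text.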
Let X denote the d × n data matrix whose column x_i is the i-th image vector with the mean subtracted. Then

cov(X) = XX^T / n

Supposing X has the singular value decomposition (SVD)

X = UΣV^T

the eigenvalue decomposition of XX^T is

XX^T = UΣΣ^T U^T = UΛU^T, where Λ = diag(eigenvalues of XX^T)

It follows that:

The eigenfaces are the first k (k ≤ n) columns of U associated with the nonzero singular values.
The i-th eigenvalue of cov(X) equals (1/n) × (i-th singular value of X)².

Thus, performing SVD on the data matrix X yields the eigenfaces without ever forming the covariance matrix.
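The SVD route can be checked against the explicit covariance route in a few lines of NumPy (again with random stand-in data; the sizes d and n are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)

# Mean-subtracted data matrix X: d pixels by n images.
d, n = 50, 8
X = rng.normal(size=(d, n))
X = X - X.mean(axis=1, keepdims=True)

# SVD gives the eigenfaces (columns of U) without forming the covariance matrix.
U, sing, Vt = np.linalg.svd(X, full_matrices=False)

# Eigenvalues of cov(X) = X X^T / n match the squared singular values over n.
cov_eigval = np.linalg.eigvalsh(X @ X.T / n)[::-1][:n]
print(np.allclose(cov_eigval, sing**2 / n))  # True
```

For real image sizes the SVD of the d × n matrix X is far cheaper than eigendecomposing the d × d covariance matrix, which is why this route is preferred in practice.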
The development of eigenfaces was spurred by the need for improved facial identification. Eigenfaces compare favorably with alternative methods because of the system's speed and efficiency. Because the eigenface approach focuses on reducing dimensionality, a large number of subjects can be represented with a small amount of data. The method is also quite robust as a face-recognition system when images are drastically reduced in size; however, it begins to fail badly when there is a large difference between the observed images and the probe image.
In face recognition, images in the system's gallery are represented by sets of weights that characterize the contribution of each eigenface to that image.
When a new face is presented for analysis, its image is projected onto the set of eigenfaces to determine its own weights.
This gives a collection of values that characterize the probe face.
These weights are compared against those of the gallery set to determine the closest match.
The nearest neighbor is found simply by computing the Euclidean distance between the weight vectors; the gallery image at minimum distance is classified as the closest subject.
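The nearest-neighbor matching step can be sketched in a few lines of NumPy (the gallery weights here are random placeholders, and the probe is constructed as a small perturbation of one subject):

```python
import numpy as np

rng = np.random.default_rng(4)

# Gallery: one weight vector per enrolled face, in eigenface space.
gallery = rng.normal(size=(5, 3))          # 5 subjects, 3 eigenface weights

# Probe: a slightly perturbed copy of subject 2's weights.
probe = gallery[2] + 0.01 * rng.normal(size=3)

# Euclidean distance to every gallery entry; the minimum is the closest match.
dists = np.linalg.norm(gallery - probe, axis=1)
match = int(np.argmin(dists))
print(match)  # subject 2, since the probe barely deviates from its weights
```

In a real system one would also apply a distance threshold, so that probes far from every gallery entry are rejected as unknown rather than forced onto the nearest subject.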
The eigenface technique of facial identification works on the premise that query photos are projected into the face-space spanned by the eigenfaces generated, and the closest match to a face class