Development and Implementation of a Real-Time Face Recognition System
The face plays a major role in defining one's identity and emotions. We humans have a remarkable ability to recognise and differentiate hundreds of familiar faces at a glance, even after years, despite changes in lighting conditions, expression, age or hairstyle. This ability has motivated both philosophers and scientists to understand how faces are encoded and decoded in the human brain.
Face recognition has a variety of applications, such as criminal identification, security systems, image and film processing, and human-machine interaction. Nowadays, cameras come with built-in functions for detecting faces and even expressions. Face detection is also required in film making for image enhancement applications.
A lot of work has been done on this problem, and more is still in progress. In today's environment of heightened security concerns, identification and authentication methods have become significant in areas such as entrance control in buildings, access control for computers, and the prominent field of criminal investigation. This requirement for reliable personal identification in computerized access control has resulted in an increased interest in biometrics.
Biometric identification is the technique of automatically identifying or verifying an individual by a physical characteristic or personal trait. The term "automatically" means the biometric identification system must identify or verify the characteristic or trait quickly, with little or no intervention from the user. Biometric technology was developed for use in high-level security systems and law-enforcement markets; its key element is the ability to identify a human being and enforce security.
Here, we design and implement a GUI-based face recognition system capable of detecting faces in live acquired images and recognising the detected faces.
The overall system contains four blocks: image acquisition, face detection, face recognition, and the graphical user interface.

Image Acquisition Block:
This block is the first step in the face recognition system, as it provides the input to the rest of the system. It triggers the integrated camera (or an externally attached one) via a frame grabber. The snapshot function from MATLAB's Image Acquisition Toolbox is used for this purpose, and the acquired image is then passed to the face detection block.
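A minimal sketch of the acquisition step, assuming the Image Acquisition Toolbox with a 'winvideo' webcam adaptor (the adaptor name and device ID may differ per machine):

vid = videoinput('winvideo', 1);        % handle to the integrated camera
set(vid, 'ReturnedColorSpace', 'rgb');  % request RGB frames
img = getsnapshot(vid);                 % grab a single frame
delete(vid);                            % release the camera
imshow(img);                            % display the acquired image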
Face Detection Block:
Face detection locates and extracts the face region for the face recognition system. The face detection flowchart is given in Figure 2. Skin color segmentation is applied as a first step, as it reduces the computational time needed to search the whole image: once segmentation is applied, only the segmented region is searched to decide whether it includes a face or not.
As seen from the steps given above, the RGB image is first converted into both the HSV and YCbCr color spaces. The YCbCr space separates the image into a luminosity component (Y) and color components (Cb and Cr), whereas the HSV space divides the image into the three components of hue, saturation and value. The effect of luminosity can be reduced by ignoring the Y component in further processing.
The following conversions are used to convert the RGB image into YCbCr:

Y = 0.257*R + 0.504*G + 0.098*B + 16
Cb = -0.148*R - 0.291*G + 0.439*B + 128
Cr = 0.439*R - 0.368*G - 0.071*B + 128
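As a sketch, the conversion above can be applied directly to the captured RGB image (img as acquired earlier; MATLAB's built-in rgb2ycbcr implements an equivalent BT.601 transform):

R = double(img(:,:,1)); G = double(img(:,:,2)); B = double(img(:,:,3));
Y  =  0.257*R + 0.504*G + 0.098*B + 16;   % luminance
Cb = -0.148*R - 0.291*G + 0.439*B + 128;  % blue-difference chroma
Cr =  0.439*R - 0.368*G - 0.071*B + 128;  % red-difference chroma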
The original image is also converted into an HSV image, since the hue value is used later for thresholding; the conversion uses MATLAB's built-in function rgb2hsv. The resulting HSV image is shown in the accompanying figure.
It was verified experimentally (as reported in [3] and [4]) that thresholding using a combination of the Cb, Cr and hue values produced better segmentation results. The following relations were used for thresholding:

120 <= Cr <= 195
140 <= Cb <= 195
0.01 <= hue <= 0.1
The thresholding operation is performed such that a pixel satisfying all three criteria is assigned the value 1 and is otherwise kept 0, producing a binary image; the result is shown in the accompanying figure.
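A minimal sketch of the segmentation, assuming Cb and Cr computed as above (the hue returned by rgb2hsv lies in [0,1]):

hsvImg = rgb2hsv(img);                  % hue is channel 1
hue = hsvImg(:,:,1);
mask = (Cr >= 120 & Cr <= 195) & ...
       (Cb >= 140 & Cb <= 195) & ...
       (hue >= 0.01 & hue <= 0.1);      % logical (binary) skin mask
imshow(mask);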
As is evident from the thresholding results, black regions appear at the eyes and some other parts of the face. A series of morphological operations is therefore performed to remove these holes. A structuring element of size 30-by-30 is used for the closing operation; the morphological close is a dilation followed by an erosion, using the same structuring element for both operations. The result of the closing operation is shown in the accompanying figure.
Small spurious connected regions are then removed using an area-open operation; the result of MATLAB's bwareaopen function is shown in the accompanying figure.
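A sketch of this cleanup, assuming the binary mask from the thresholding step (the area threshold of 1000 pixels is an assumed value, not taken from the original):

se = strel('square', 30);        % 30-by-30 structuring element
mask = imclose(mask, se);        % dilation followed by erosion fills the holes
mask = bwareaopen(mask, 1000);   % discard connected regions smaller than 1000 px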
The binary mask is now multiplied with the original image to extract the required region from it; the result of the multiplication is shown in the accompanying figure.
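A one-line sketch of the masking, replicating the binary mask across the three color channels:

skin = img .* uint8(repmat(mask, [1 1 3]));   % keep only the skin region
imshow(skin);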
The x-y coordinates of the centroid of the above region are obtained using

x = regions.Centroid(1,1)
y = regions.Centroid(1,2)
Finally, a region of size 180-by-120 around the centroid is cropped. This region can be used for training or for testing (recognition), depending on the user input from the GUI. If the user selects it as a training image, a copy of the cropped image is automatically saved in the train database with the next consecutive serial number as its name (for example, 50.jpg if the last image saved was 49.jpg), as sketched below.
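A sketch of the cropping and saving step, assuming the masked image skin and mask from above; the crop window placement around the centroid and the folder name TrainDatabase are assumptions:

regions = regionprops(mask, 'Centroid');      % centroid of the skin region
x = regions(1).Centroid(1);
y = regions(1).Centroid(2);
face = imcrop(skin, [x-60, y-90, 119, 179]);  % 180-by-120 window around the centroid
n = numel(dir(fullfile('TrainDatabase', '*.jpg'))) + 1;  % next serial number
imwrite(face, fullfile('TrainDatabase', sprintf('%d.jpg', n)));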
Face Recognition Algorithm and Implementation:
The detected face now needs to be identified; this part is called face recognition. Various methods, as discussed in the literature survey, can be used to accomplish the task: neural networks, template matching, facial-feature-based approaches, model-based approaches, etc. In this work we choose the information-theory-based approach of Turk and Pentland, due to its simplicity and reliability. Here, we extract the information content of a face by capturing the variations in a collection of face images, encode it efficiently, and compare it with face models encoded in the same way.
Therefore, we wish to find the principal components of the distribution of faces, i.e. the eigenvectors of the covariance matrix of the set of face images. The eigenvectors can be thought of as a set of features that together characterize the variation between face images. Each image location contributes to each eigenvector, so an eigenvector can be displayed as a ghostly face called an eigenface. Each eigenface deviates from uniform gray where some facial feature differs among the training faces, forming a map of the variation between the faces. Each face in the set can be represented exactly as a linear combination of the eigenfaces.
This approach involves the steps shown in the figure.

1. The training images are first converted to grayscale and then vectorized as column vectors, forming a data matrix of size (180*120)-by-70 (70 being the size of the training database), to make the further computation easy. This is done using the reshape command in MATLAB, as sketched below.
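A sketch of this step, assuming the training images are stored as 1.jpg ... 70.jpg in a TrainDatabase folder:

m = 70;                        % size of the training database
T = zeros(180*120, m);         % one column per training image
for i = 1:m
    I = imread(fullfile('TrainDatabase', sprintf('%d.jpg', i)));
    if size(I, 3) == 3
        I = rgb2gray(I);       % convert to grayscale if stored in color
    end
    T(:, i) = double(reshape(I, [], 1));   % vectorize with reshape
end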
2. The average face of the training database is then calculated; it is a 180-by-120 image, shown in the figure below. If the training set contains the images I1, I2, I3, ..., Im, the average face is given by

Ψ = (1/m)*(I1 + I2 + ... + Im)

It is implemented directly using the mean command in MATLAB.
3. The mean face is then subtracted from each image of the training set:

Φi = Ii − Ψ

where Ii is the i-th original image and Ψ is the mean image.
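A sketch of steps 2 and 3, with T the vectorized image matrix built above:

psi = mean(T, 2);                       % average face, (180*120)-by-1
A = T - repmat(psi, 1, m);              % mean-subtracted data, columns are the Φi
imshow(uint8(reshape(psi, 180, 120)));  % display the mean face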
4. The covariance matrix of the mean-subtracted data is then calculated as

C = A*Aᵀ

where A = [Φ1, Φ2, Φ3, ..., Φn].
5. Eigenvectors of the covariance matrix are then obtained. Since C = A*Aᵀ is very large, in practice the eigenvectors of the much smaller 70-by-70 matrix AᵀA are computed with the built-in function eig(), which returns a diagonal matrix D of eigenvalues and a matrix V whose columns are the corresponding right eigenvectors; D and V are both 70-by-70 matrices. If v is an eigenvector of AᵀA, then A*v is an eigenvector of A*Aᵀ, so the eigenfaces are obtained as

Eigenfaces (ω) = A*V

where again A = [Φ1, Φ2, Φ3, ..., Φn] and V holds the eigenvectors obtained from eig(). ω is of size (180*120)-by-70, where each column represents the eigenface of the corresponding image in the database. The figure below shows a few sample eigenfaces.
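A sketch of the eigenface computation, using the 70-by-70 matrix AᵀA as described above:

L = A' * A;                    % 70-by-70 surrogate for the covariance matrix
[V, D] = eig(L);               % V: eigenvectors, D: diagonal eigenvalue matrix
omega = A * V;                 % eigenfaces, (180*120)-by-70
% eig returns eigenvalues in ascending order, so the last column of omega
% corresponds to the strongest eigenface:
imshow(mat2gray(reshape(omega(:, end), 180, 120)));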
6. The feature vectors of the training database are obtained using the relation

Ω = ωᵀ*A

where A = [Φ1, Φ2, Φ3, ..., Φn]. Each column of this feature matrix describes the contribution of each eigenface in representing the corresponding input face image. Ω is a 70-by-70 matrix.
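In code, this projection is a single line:

Omega = omega' * A;    % 70-by-70; column j is the signature of training image j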
For the Test Image

1. For classification, the test image is first subjected to the detection algorithm and then converted to grayscale.

2. The mean is subtracted from the test image and its feature vector is obtained using

Ωtest = ωᵀ*Atest

where ω is the same eigenfaces matrix obtained above and Atest is the mean-subtracted, vectorized test image. Ωtest is a 70-by-1 vector.

3. Ωtest is compared with each column of Ω, and the training image whose feature vector lies at the minimum Euclidean distance from Ωtest is returned as the recognized face.
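A sketch of the matching, assuming testFace is the 180-by-120 grayscale cropped test image and psi, omega, Omega and m come from the training stage:

Atest = double(testFace(:)) - psi;       % mean-subtracted test vector
OmegaTest = omega' * Atest;              % 70-by-1 test signature
dists = sqrt(sum((Omega - repmat(OmegaTest, 1, m)).^2, 1));  % Euclidean distances
[~, idx] = min(dists);                   % index of the closest training image
match = imread(fullfile('TrainDatabase', sprintf('%d.jpg', idx)));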
The Graphical User Interface

The graphical user interface makes the system user-friendly and easy to use. It hides the long MATLAB code and presents a few buttons and an interactive interface to the user. The GUI for the project is built using pushbuttons to give inputs and axes to display outputs; it is shown in the figure below.
Pushbutton 1 (Train Data): This button triggers the integrated camera of the laptop (an external webcam can also be used by changing the settings in the MATLAB functions) to capture training images. The captured image is passed on to the detection block, where it is processed as described above. The final cropped image is displayed in the accompanying axes window and is automatically saved in the specified folder (the train database folder) with the next consecutive number.
Pushbutton 2 (Test Data): This button triggers the camera for a test image. The image is subjected to the detection algorithm to extract the region of interest, and the cropped face is displayed in the accompanying axes window. The cropped image is then passed to the recognition algorithm, which finds the equivalent image in the train database; this matched image is displayed in the third axes.
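As an illustration only, a hypothetical GUIDE-style callback for the Train Data button might look like this (detectAndCropFace is an assumed helper wrapping the detection block, and handles.axes1 is an assumed axes handle):

function trainButton_Callback(hObject, eventdata, handles)
    vid = videoinput('winvideo', 1);
    img = getsnapshot(vid);                 % capture a frame
    delete(vid);
    face = detectAndCropFace(img);          % detection block (assumed helper)
    axes(handles.axes1); imshow(face);      % display the cropped face
    n = numel(dir(fullfile('TrainDatabase', '*.jpg'))) + 1;
    imwrite(face, fullfile('TrainDatabase', sprintf('%d.jpg', n)));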
The MATLAB code for the implementation of the real-time face recognition system can be found here.