International Journal of Engineering Trends and Technology (IJETT) – Volume 17, Number 6 – November 2014
Human Face Recognition using Elman Networks
V. Ramya, V. Kavitha, P. Sivagamasundhari
Assistant Professors, Department of ECE, Saranathan Engineering College,
Trichy – 620004, Tamil Nadu, India
Abstract— Face recognition from images is challenging due to the unpredictability of face appearances and the complexity of the image background. This paper proposes a different approach to recognizing human faces: the captured face is recognized by comparing its characteristics with those in a known database. The paper presents a face detection and localization unit that extracts the mouth end points and the eyeballs, together with an algorithm that calculates the distances between the eyeballs and the mouth end points, and it proposes an Elman neural network to recognize the face. The recognition performance of the proposed method is tabulated based on experiments performed on a number of images.
Keywords— Face Detection, Face Localization, Feature Extraction, Neural Networks, Elman Networks.
I. INTRODUCTION
Face recognition is an interesting and highly successful application of pattern recognition and image analysis. Facial images are essential for intelligent vision-based human-computer interaction: face processing rests on the fact that information about a user's identity can be extracted from images, so that computers can act accordingly.
Face detection has many applications in areas such as entertainment, information security and biometrics [1]. Fu-Che Wu presents a method [2] in which the features to be extracted are the two eyes and a nostril of the nose; the face region is first located, and possible locations of the features are then found. Stan Z. Li proposes a nearest feature line method [4] in which any two feature points of the same class (person) are generalized by the feature line (FL) passing through them. The FL can capture more variations of face images than the original points and thus expands the capacity of the available database.
Kai Chuan Chu introduces a method [3] in which feature extraction methods are classified into two categories, face based and constituent based. The face-based approach uses global information instead of local information; because of variations in orientation, facial expression and illumination direction, a single global feature is usually not enough to represent human faces, so the performance of this approach is quite limited. The constituent-based approach relies on the relationships among extracted structural facial features such as the mouth, nose and eyes, and therefore works with local information instead of global information. Constituent-based methods thus offer flexibility in dealing with facial features such as the eyes and mouth, and they are not affected by irrelevant information in the image.
This paper proposes a face recognition method in which local features, namely the eyeballs and the mouth end points, are given as the input to a neural network. The function of the neural network is to compare the calculated local facial features with the existing database.
In the proposed method, the face region is first extracted from the image by applying pre-processing steps such as denoising. Pre-processing is followed by the face localization step, which locates the face region. A distance calculation algorithm then computes the local features: the distances between the left eyeball and the left mouth end point, the right eyeball and the right mouth end point, the left eyeball and the right mouth end point, and the right eyeball and the left mouth end point. These values are given as inputs to the neural network to find a match in the database. The Elman algorithm is used to train the network on these values, and the network is then simulated using the features taken from the test set of images. The output of the Elman network is taken as the recognition result.
II. FACE RECOGNITION SYSTEM
The proposed system consists of face localization, feature extraction and a neural network; the block diagram is shown in Fig. 1. The input image is captured by taking photographs with a camera. Images are taken in color mode and saved in JPG format; however, the proposed method works with any file format.
Fig. 1. Block diagram of the face recognition system (input image → face localization → feature extraction → face recognition → recognition result)

The input image is passed to the face localization stage, which locates the face region. The output of face localization is forwarded to the feature extraction unit, and the extracted local features are given to the neural network to find the recognition result.
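To make the data flow of Fig. 1 concrete, the short sketch below (in Python, not taken from the paper) composes the three stages as functions; localize_face, extract_features and recognize are illustrative placeholders for the components described in the following subsections.

```python
# Illustrative sketch of the Fig. 1 pipeline; the three stage functions
# are hypothetical placeholders, not the authors' implementation.
import numpy as np

def recognition_pipeline(image: np.ndarray,
                         localize_face,     # image -> cropped face region
                         extract_features,  # face region -> distance feature vector
                         recognize):        # feature vector -> identity / result
    """Chain the stages of the block diagram in Fig. 1."""
    face_region = localize_face(image)        # Face Localization
    features = extract_features(face_region)  # Feature Extraction
    return recognize(features)                # Face Recognition -> recognition result
```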
A. Face Localization
Face localization aims to determine the image position of a face. It is a simple detection problem under the assumption that the input image contains only one face. The procedure below describes the proposed face localization technique.
1) Image Conversion: The input image is first converted into a gray-scale image and then into binary form. The execution sequence of this step is shown in Fig. 2.
Fig. 2. Image conversion in the face localization phase
2) Dilation: The dilation process removes the noise encountered in the binary image, so dilation is performed on the obtained binary image. The dilated image is then mapped onto the gray-scale image using the intensity calculation formula in (1), where Imm is the intensity of the mapped image, Imd is the intensity of the dilated image and Img is the intensity of the gray-scale image. The output of dilation is shown in Fig. 3.
Fig. 3. Dilation in the face localization phase
3) Image Cropping: The mapped image is converted into a binary image, and the required face region is cropped from the binary image. The output of the image cropping step is shown in Fig. 4.
Fig. 4. Image cropping in the face localization phase
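A minimal sketch of these three localization steps is given below (Python with OpenCV, assumed rather than taken from the paper). The Otsu threshold, the 3×3 structuring element and the use of the dilated mask as the mapping in (1) are illustrative assumptions.

```python
# Minimal sketch of the face localization steps (gray-scale conversion,
# binarization, dilation, mapping back onto the gray image, cropping).
# OpenCV/NumPy are assumed; the threshold choice and the 3x3 structuring
# element are illustrative, not taken from the paper.
import cv2
import numpy as np

def localize_face(bgr_image: np.ndarray) -> np.ndarray:
    # 1) Image conversion: colour -> gray-scale -> binary
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # 2) Dilation: suppress noise in the binary image, then map the
    #    dilated mask onto the gray-scale image (one possible reading
    #    of the paper's intensity mapping in Eq. (1)).
    kernel = np.ones((3, 3), np.uint8)
    dilated = cv2.dilate(binary, kernel, iterations=1)
    mapped = cv2.bitwise_and(gray, gray, mask=dilated)

    # 3) Image cropping: keep the bounding box of the detected region.
    ys, xs = np.nonzero(dilated)
    return mapped[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
```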
B. Feature Extraction
The proposed method uses a feature-based approach to process the input image, extract unique facial features such as the eyes and mouth, and estimate the geometric correlations among those facial points. It converts the input facial image into a vector of geometric features. Neural networks are then employed to match faces against the existing database and yield the result.
Feature-based extraction methods are insensitive to variations in image position, size and lighting.
The flowchart in Fig. 5 shows the proposed feature extraction algorithm.
Start
Input: the cropped image from the face localization step
1. Divide the localized face column-wise into two equal parts.
2. For each row, let (x1, y1) and (x2, y2) be the first black pixels encountered on either side, and calculate the distance between those dark points.
3. Obtain two sets of non-zero distance values, corresponding to the eyeballs and the mouth end points.
4. Find the maximum distance in each set of non-zero values; these represent the distance between the eyeballs and the distance between the mouth end points.
5. From the pixels corresponding to those maximum distances, calculate:
1. the distance from the left eyeball to the right eyeball;
2. the distance from the left mouth end point to the right mouth end point;
3. the distance from the left eyeball to the left mouth end point;
4. the distance from the right eyeball to the right mouth end point;
5. the distance from the left eyeball to the right mouth end point;
6. the distance from the right eyeball to the left mouth end point.
Output: the features extracted in the above step
Stop
Fig. 5. Flowchart for the proposed algorithm

The algorithm uses the distance formula d = sqrt((x2 − x1)^2 + (y2 − y1)^2) between two pixel locations (x1, y1) and (x2, y2). The features extracted from this stage are given as the inputs to the neural network recognizer.
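The following sketch (Python/NumPy, assumed rather than taken from the paper) follows the flowchart and the distance formula above. Because the paper does not state how the row distances are separated into the eye set and the mouth set, the upper-half/lower-half split used here is an assumption, as is the convention that the eye and mouth pixels are zero-valued (dark) in the binary face image.

```python
# Sketch of the distance-based feature extraction in Fig. 5.
# Assumptions (not from the paper): dark feature pixels have value 0,
# eye rows lie in the upper half of the face and mouth rows in the
# lower half, and both sets are non-empty.
import numpy as np

def euclidean(p, q):
    return float(np.hypot(q[0] - p[0], q[1] - p[1]))

def extract_features(binary_face: np.ndarray) -> np.ndarray:
    h, w = binary_face.shape
    mid = w // 2
    pairs = []  # (row, left point, right point, distance)
    for y in range(h):
        left_cols = np.where(binary_face[y, :mid] == 0)[0]
        right_cols = np.where(binary_face[y, mid:] == 0)[0]
        if left_cols.size and right_cols.size:
            p_left = (left_cols[0], y)           # first dark pixel, left half
            p_right = (mid + right_cols[0], y)   # first dark pixel, right half
            pairs.append((y, p_left, p_right, euclidean(p_left, p_right)))

    # Assumed split of the two sets: eye candidates above, mouth below.
    eyes = [p for p in pairs if p[0] < h // 2]
    mouth = [p for p in pairs if p[0] >= h // 2]
    _, le, re, _ = max(eyes, key=lambda p: p[3])   # widest row -> eyeballs
    _, lm, rm, _ = max(mouth, key=lambda p: p[3])  # widest row -> mouth ends

    return np.array([
        euclidean(le, re),  # 1. left eyeball  -> right eyeball
        euclidean(lm, rm),  # 2. left mouth end -> right mouth end
        euclidean(le, lm),  # 3. left eyeball  -> left mouth end
        euclidean(re, rm),  # 4. right eyeball -> right mouth end
        euclidean(le, rm),  # 5. left eyeball  -> right mouth end
        euclidean(re, lm),  # 6. right eyeball -> left mouth end
    ])
```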
C. Face Recognition Using Neural Network
After extracting the features, a recognizer is needed to identify the face image in the stored database. Fuzzy logic and neural networks can be applied to such problems [5], [6]. This paper proposes a recognition method that uses a recurrent neural network called the Elman network.

The Elman network is a two-layer network with feedback from the first-layer output to the first-layer input; this feedback allows it to detect and generate time-varying patterns. The Elman network has tansig neurons in the hidden layer and purelin neurons in the output layer. This combination is special in that a two-layer network with these transfer functions can approximate any function to arbitrary accuracy, provided the hidden layer has enough neurons; more hidden neurons are needed as the function becomes more complex.

The Elman network differs from conventional two-layer networks in that the first layer has a recurrent connection. The delay introduced by the context layer in this connection stores values from the previous time step, which can be used in the present time step. Thus, even if two Elman networks with the same weights and biases are given identical inputs at the same time, their outputs can differ because of different feedback states. Since the network can store information for later reference, it learns temporal as well as spatial patterns.

In the proposed system, the Elman network is trained to respond to, and to generate, both temporal and spatial patterns. Fig. 6 shows the combined framework of the Elman network.

Fig. 6. Combined framework of the Elman network

The notations used in the figure are
a1(t) = tansig(IW*p + LW1,1*a1(t−1) + b1)
Y(t) = purelin(LW2,1*a1(t) + b2)
where
p = input vector presented to the network
b1 = bias of the hidden (first) layer
b2 = bias of the output layer
IW = weights between the input and hidden layers
LW1,1 = recurrent weights of the hidden layer, fed back through the delay
LW2,1 = weights between the hidden and output layers
D = delay
Y = output of the Elman network
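As a rough illustration of the notation above (not the authors' MATLAB implementation), the sketch below implements a single Elman forward step with tanh standing in for tansig and the identity for purelin; the weight initialization is a placeholder and training is not shown.

```python
# Sketch of the Elman recognizer: a tanh ("tansig") hidden layer with
# its previous output fed back through a one-step delay, followed by a
# linear ("purelin") output layer.  Illustration of the equations above,
# not the paper's MATLAB code.
import numpy as np

class ElmanNetwork:
    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.IW = rng.normal(0.0, 0.1, (n_hidden, n_in))          # input   -> hidden
        self.LW_rec = rng.normal(0.0, 0.1, (n_hidden, n_hidden))  # a1(t-1) -> hidden (via delay D)
        self.LW_out = rng.normal(0.0, 0.1, (n_out, n_hidden))     # hidden  -> output
        self.b1 = np.zeros(n_hidden)       # hidden-layer bias
        self.b2 = np.zeros(n_out)          # output-layer bias
        self.a1_prev = np.zeros(n_hidden)  # context (delayed hidden state)

    def step(self, p: np.ndarray) -> np.ndarray:
        # a1(t) = tansig(IW*p + LW1,1*a1(t-1) + b1)
        a1 = np.tanh(self.IW @ p + self.LW_rec @ self.a1_prev + self.b1)
        self.a1_prev = a1
        # Y(t) = purelin(LW2,1*a1(t) + b2)
        return self.LW_out @ a1 + self.b2
```

For this paper's recognizer, p would be the six-element distance vector produced by the feature extraction stage, and the trained network's output Y would be compared against the stored identities.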
III. RESULTS
The effectiveness of the proposed face localization method and the distance calculation algorithm is demonstrated using MATLAB. The face database consists of 60 images, of which 42 are used to train the network; the network is then tested with the remaining 18 images. The graph of error rate versus number of epochs is shown in Fig. 7.

Fig. 7. Error rate versus number of epochs

The Elman network recognizes all the faces available in the database and falsely accepts 3 unknown faces. The time consumption and the recognition rate are tabulated in Table I.

TABLE I
RESULT USING THE ELMAN NETWORK

Network | Total Images | Training + Testing time (s) | False Acceptances | Recognition Rate (%)
Elman   | 60           | 3.6549                      | 3                 | 95.00
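The exact accounting behind Table I is not spelled out; one consistent reading, sketched below, is that the 3 false acceptances are counted as errors over the full 60-image database.

```python
# Hedged sketch of the recognition-rate figure in Table I, assuming the
# rate is computed over all 60 images with the 3 false acceptances
# counted as the only errors.
def recognition_rate(total_images: int, errors: int) -> float:
    return 100.0 * (total_images - errors) / total_images

print(recognition_rate(60, 3))  # 95.0, matching Table I
```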
IV. CONCLUSION
In this paper, a new face localization technique and a feature extraction algorithm are proposed for face recognition. A neural network model is used to recognize frontal or nearly frontal faces, and the results are tabulated. From the results obtained, it can be concluded that the recognition accuracy achieved by this method is very high. The method can be extended to moving images and to images with varying backgrounds.
REFERENCES
[1] W. Zhao, R. Chellappa, P. J. Phillips, and A. Rosenfeld, “Face Recognition: A Literature Survey,” Technical Report CAR-TR-948, University of Maryland, Aug. 2002.
[2] Fu-Che Wu, Tzong-Jer Yang, and Ming Ouhyoung, “Automatic Feature Extraction and Face Synthesis in Facial Image Coding,” Proc. of Pacific Graphics ’98 (poster), pp. 218-219, Singapore, 1998.
[3] Kai Chuan Chu and Dzulkifli Mohamad, “Development of a Face Recognition System using Artificial Intelligent Techniques based on Hybrid Feature Selection,” Proc. of the Second Intl. Conference on Artificial Intelligence in Engineering and Technology, Malaysia, pp. 365-370, August 3-5, 2003.
[4] Stan Z. Li and Juwei Lu, “Face Recognition using the Nearest Feature Line Method,” IEEE Transactions on Neural Networks, vol. 10, no. 2, pp. 439-443, March 1999.
[5] S. Lawrence, C. L. Giles, A. C. Tsoi, and A. D. Back, “Face Recognition: A Convolutional Neural-Network Approach,” IEEE Transactions on Neural Networks, Special Issue on Neural Networks and Pattern Recognition, vol. 8, no. 1, pp. 98-113, 1997.
[6] J. Haddadnia and K. Faez, “Neural Network Human Face Recognition Based on Moment Invariants,” Proc. of the IEEE International Conference on Image Processing, Thessaloniki, Greece, pp. 1018-1021, 7-10 October 2001.
[7] H. A. Rowley, S. Baluja, and T. Kanade, “Neural Network-Based Face Detection,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 1, pp. 23-38, 1998.
[8] Raphaël Féraud, Olivier J. Bernier, Jean-Emmanuel Viallet, and Michel Collobert, “A Fast and Accurate Face Detector Based on Neural Networks,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 1, pp. 42-53, January 2001.