International Journal of Science, Engineering and Technology Research (IJSETR)
Volume 3, Issue 3, March 2014
Gender Recognition from Face Images with
Weber Local Descriptor
D. G. Agrawal, Pranoti M. Jangale

Abstract: Gender recognition from face images is an application of computer vision techniques to the problem of determining the gender of people appearing in images or videos. The problem is solved by a two-step process. The first step is to detect and localise human faces; this is achieved by a face detection algorithm. The second step is to determine the gender of each detected face, i.e. to separate male faces from female faces, and is achieved by a gender classification algorithm.
Keywords: dynamic textures, WLD descriptor, neural network, minimum distance measurement
I. Introduction
Over the past decades there have been significant advances in facial image processing, especially in the area of face detection, where a number of fast and robust algorithms have been proposed for practical applications. As a result, a number of research areas attempting to extend this work have emerged, for example face recognition, facial expression recognition and gender recognition.
Since gender recognition can be considered an extension of face detection, most research on gender recognition has focused on the gender classification aspect and has assumed the existence of face detection tools.
With regard to gender classification, the techniques, tools and
algorithms employed originate from fields such as computer
vision, pattern recognition, statistics and machine learning.
The Weber Local Descriptor is motivated by Weber's law, a psychological law which states that the change in a stimulus (such as lighting or sound) that we just notice is a constant ratio of the original stimulus. When the change is smaller than this constant ratio of the original stimulus, a human being perceives it as background noise rather than as a valid signal. The differential excitation component of the proposed Weber Local Descriptor (WLD) is computed for a given pixel as the ratio of the relative intensity differences of the current pixel against its neighbours (e.g., within a 3x3 square region) to the intensity of the current pixel itself. With the differential excitation component we attempt to extract the local salient patterns in the input image. In addition, the gradient orientation of the current pixel is also computed. For each pixel of the input image we therefore compute two components of the WLD feature: differential excitation and gradient orientation.
We represent an input image (or image region) with a histogram by combining the WLD features of all pixels; we call this a WLD histogram hereinafter, and hence we call WLD a dense descriptor. The proposed WLD descriptor employs the advantages of SIFT, using the gradient and its orientation in computing the histogram, and those of LBP, namely computational efficiency and smaller support regions, but WLD differs from both Local Binary Pattern and SIFT. There are two main steps involved in recognizing the gender of humans presented in an image: face detection and gender classification, which are applied consecutively.
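The paper does not reproduce the formulas, but in the standard WLD formulation the two per-pixel components over a 3x3 neighbourhood (centre pixel x_c, neighbours x_0, ..., x_7; the exact neighbour indexing below is an assumption) can be sketched as

\xi(x_c) = \arctan\left[ \sum_{i=0}^{7} \frac{x_i - x_c}{x_c} \right],
\qquad
\theta(x_c) = \arctan\left( \frac{x_5 - x_1}{x_7 - x_3} \right),

where \xi is the differential excitation (the summed relative intensity differences against the centre intensity) and \theta is the gradient orientation (the vertical versus horizontal intensity difference across the neighbourhood). The WLD histogram is then a two-dimensional histogram of (\xi, \theta) accumulated over all pixels of the image or region.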
II. Material and Methodology
2.1 Face Detection
To exploit the uniqueness of faces, the first step is to detect and localise those faces in the images; this is the task achieved by face detection systems.
Face detection is one of the most popular research areas, and many algorithms have been proposed for it. Considering face detection as a binary classification task, most of them are based on the same idea: given a part of an image, decide whether it is a face or not. This is achieved by first transforming the given region into features and then using a classifier, trained on example images, to decide whether these features represent a human face. As faces appear at various sizes and in various locations, a window-sliding technique is also employed: the classifier classifies portions of the image, at all scales and locations, as face or non-face.
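The paper does not name a specific face detector; as a minimal sketch of the idea above, a pretrained Haar-cascade detector from OpenCV (an assumption, not the authors' implementation) scans the image at all scales and locations and returns face bounding boxes:

# Face detection sketch: pretrained Haar cascade scanned over the image at multiple scales.
import cv2

def detect_faces(image_path):
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    # Each detection is a bounding box (x, y, width, height) classified as "face".
    boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [gray[y:y + h, x:x + w] for (x, y, w, h) in boxes]

The returned grayscale face crops are what the gender classification stage described next operates on.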
2.2 Gender Classification
After faces have been detected by the face detection algorithm, it must be decided whether each one is a male or a female face. This is the task achieved by gender classification systems.
Similar to the face detection task, the gender classification task is also considered a binary classification problem, but now with the result being male or female instead of face or non-face.
Essentially, gender classification consists of four main steps: pre-processing, feature detection, feature selection and classification.
2.2.1 Pre-Processing
Since, in real life, it is unlikely that people will face directly and frontally towards the camera, face images often contain some in-plane and out-of-plane rotations. Moreover, it is also unlikely that the lighting conditions will be the same for all images. These variations greatly affect the accuracy of gender classifiers. The purpose of the pre-processing step is thus to remove these variations as much as possible.
As with other computer vision applications, there is no unique solution to this problem. The common techniques involved in the pre-processing step are face alignment and light normalisation. Face alignment tries to bring faces as close as possible to a common or specified pose, whereas light normalisation tries to remove the variation in illumination. One of the commonly employed normalisation techniques in the gender classification field is histogram equalisation.
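As a minimal illustration of this normalisation step (an implementation assumption, since the paper gives no code), histogram equalisation can be applied to each detected face crop before features are extracted:

# Pre-processing sketch: resize to a common size and equalise the intensity histogram.
import cv2

def preprocess_face(face_gray, size=(64, 64)):
    face = cv2.resize(face_gray, size)
    # Spreads the grayscale histogram to reduce illumination variation between images.
    return cv2.equalizeHist(face)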
2.2.2 Feature Detection
Working directly on raw pixel values can be very slow, as even one small face image can contain thousands of pixels. Furthermore, not all the pixels are useful; there can be an underlying structure that describes the differences between male and female faces better. A feature detection module is therefore employed here.
Generally, two types of features are used in the gender classification context: geometric-based features and appearance-based features.
Geometric-based features (also called local features) came from psychophysical explorations. They represent high-level face descriptions such as the distances between the nose, eyes and mouth, face width, face length, eyebrow thickness and so on.
Appearance-based features (also called global features) use low-level information about face image areas based on pixel values. Among appearance-based features, the popular ones are various texture features, e.g. Local Binary Pattern (LBP), Local Directional Pattern (LDP) and Pixel-Pattern-Based Texture Feature (PPBTF); histograms of gradients, e.g. the Scale-Invariant Feature Transform (SIFT); and coefficients of wavelet transformations of the image, e.g. Gabor wavelets and Haar wavelets.
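Since LBP features are also used later in this paper, a minimal sketch of an LBP histogram descriptor (computed with scikit-image; the neighbourhood parameters are assumptions, not the authors' settings) is given below:

# Appearance-based texture feature sketch: uniform LBP histogram of a face crop.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(face_gray, points=8, radius=1):
    codes = local_binary_pattern(face_gray, points, radius, method="uniform")
    # Uniform LBP with P sampling points produces P + 2 distinct code values.
    n_bins = points + 2
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
    return hist  # fixed-length feature vector fed to feature selection / classification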
2.2.3 Feature Selection
Since not all of the detected features are useful, a feature selection (or dimensionality reduction) module is employed to choose only a subset of representative features. Feature selection not only keeps the relevant features, and thus gives a more accurate result, but also brings the additional advantage of faster computation, as the dimensionality of the data is reduced.
The popular feature selection techniques often employed in the gender classification task are Principal Component Analysis (PCA), Independent Component Analysis (ICA), Adaboost and Genetic Algorithms.
2.2.4 Classification
Once all the necessary features have been extracted, the final task is to decide whether those features represent a female or a male face. As there are two possible decisions, this is essentially a binary classification task: the classifier is trained on female and male example face images so that it learns the decision boundary between the two classes. It then uses what it has learned to make a decision on a given face image.
Among binary classifiers, the most popular ones, which give better performance than the others, are variants of the Support Vector Machine (SVM), variants of Adaboost and different Neural Network architectures. Among these classifiers, a number of comparative studies have been carried out, and they suggest that the best performance is obtained with the SVM.
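Tying the last two steps together, a minimal sketch of PCA-based feature selection followed by an SVM gender classifier (using scikit-learn; the number of components and the kernel are assumptions, not values reported in the paper) could look as follows:

# Feature selection (PCA) followed by binary gender classification (SVM).
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def train_gender_classifier(features, labels):
    # features: (n_samples, n_features) array, e.g. one LBP histogram per face
    # labels: 0 = female, 1 = male (an assumed encoding)
    model = make_pipeline(PCA(n_components=50), SVC(kernel="rbf"))
    model.fit(features, labels)
    return model

# Usage sketch: predictions = train_gender_classifier(X_train, y_train).predict(X_test)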
III. Results and Tables
3.1 Results and Analysis
Fig. 1: Data Base Creation
Fig. 2: Browse Image
Fig. 3: Recognition Person Original/Fake Identification
Fig. 4: Genders Classified for Template Data Base
Fig. 5: Gender Classified
3.2 Probabilistic Neural Network
A PNN is predominantly a classifier, since it maps any input pattern to one of a number of classes. A PNN has a fast training process and an inherently parallel structure; it is guaranteed to converge to an optimal classifier as the size of the representative training set increases, and training samples can be added or removed without extensive retraining. A consequence of the large network structure is that the classifier tends to be oversensitive to the training data and is likely to exhibit poor generalisation to unseen data. In this paper, a Probabilistic Neural Network is used to compare the features of the input image with those of the database images, both obtained from the Local Binary Pattern.
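The paper does not include an implementation of the network, but a minimal Parzen-window PNN sketch (Gaussian pattern units with a hand-picked smoothing parameter sigma, both assumptions) that assigns a query feature vector to the class with the highest estimated density is shown below:

# Probabilistic neural network sketch: one Gaussian pattern unit per training sample.
import numpy as np

def pnn_classify(x, train_features, train_labels, sigma=0.5):
    x = np.asarray(x, dtype=float)
    train_features = np.asarray(train_features, dtype=float)
    train_labels = np.asarray(train_labels)
    scores = {}
    for label in np.unique(train_labels):
        samples = train_features[train_labels == label]
        # Squared Euclidean distances from the query to every sample of this class.
        d2 = np.sum((samples - x) ** 2, axis=1)
        # Summation layer: average Gaussian kernel response per class.
        scores[label] = np.mean(np.exp(-d2 / (2.0 * sigma ** 2)))
    # Decision layer: pick the class with the largest activation.
    return max(scores, key=scores.get)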
IV. Conclusion
In this paper we have implemented two techniques. The first verifies whether a signature is authorised or unauthorised by measuring the Euclidean distance between the input image and the database images, and we compared these results with principal component analysis (PCA). The second is name identification using the local binary pattern (LBP) and a probabilistic neural network (PNN). Defining effective features that result in minimum deviation for a signature instance may aid further improvement of the system accuracy. An extension to the approach would be the implementation of more accurate distance measurement techniques, such as a minimum distance measure, to verify the signature sample instead of the Euclidean distance measure.