GENDER ESTIMATION BASED ON FACIAL IMAGE
AZLIN BT YAJID
UNIVERSITI TEKNOLOGI MALAYSIA
PSZ 19:16 (PIND. 1/97)

UNIVERSITI TEKNOLOGI MALAYSIA

BORANG PENGESAHAN STATUS TESIS♦

JUDUL: GENDER ESTIMATION BASED ON FACIAL IMAGE

SESI PENGAJIAN: 2004/2005

Saya AZLIN BINTI YAJID (HURUF BESAR) mengaku membenarkan tesis (PSM/Sarjana/Doktor Falsafah)* ini disimpan di Perpustakaan Universiti Teknologi Malaysia dengan syarat-syarat kegunaan seperti berikut:

1. Tesis adalah hakmilik Universiti Teknologi Malaysia.
2. Perpustakaan Universiti Teknologi Malaysia dibenarkan membuat salinan untuk tujuan pengajian sahaja.
3. Perpustakaan dibenarkan membuat salinan tesis ini sebagai bahan pertukaran antara institusi pengajian tinggi.
4. ** Sila tandakan (√)

( √ ) SULIT (Mengandungi maklumat yang berdarjah keselamatan atau kepentingan Malaysia seperti yang termaktub di dalam AKTA RAHSIA RASMI 1972)
(   ) TERHAD (Mengandungi maklumat TERHAD yang telah ditentukan oleh organisasi/badan di mana penyelidikan dijalankan)
(   ) TIDAK TERHAD

Disahkan oleh:

__________________________________
(TANDATANGAN PENULIS)

Alamat Tetap:
PT 825, JLN KURNIA JAYA 27,
TMN KURNIA JAYA, PENG. CHEPA,
16100 KOTA BHARU, KELANTAN.

Tarikh: 4th April 2005

___________________________________
(TANDATANGAN PENYELIA)

PM DR. SYED ABD. RAHMAN AL-ATTAS
Nama Penyelia

Tarikh: 4th April 2005

CATATAN:
* Potong yang tidak berkenaan.
** Jika Kertas Projek ini SULIT atau TERHAD, sila lampirkan surat daripada pihak berkuasa/organisasi berkenaan dengan menyatakan sekali sebab dan tempoh kertas projek ini perlu dikelaskan sebagai SULIT atau TERHAD.
♦ Tesis dimaksudkan sebagai tesis bagi Ijazah Doktor Falsafah dan Sarjana secara penyelidikan, atau disertasi bagi pengajian secara kerja kursus dan penyelidikan, atau Laporan Projek Sarjana Muda (PSM).
“I hereby declare that I have read this thesis and in my opinion this thesis is sufficient in terms of scope and quality for the award of the degree of Master of Engineering (Electrical-Electronics & Telecommunication).”

Signature: ___________________________
Name of Supervisor: PM DR. SYED ABD. RAHMAN AL-ATTAS
Date: 4th April 2005
GENDER ESTIMATION BASED ON FACIAL IMAGE
AZLIN BINTI YAJID
A dissertation submitted in partial fulfillment
of the requirements for the award of the degree
of Master of Engineering
(Electrical-Electronics & Telecommunication)
Faculty of Electrical Engineering
Universiti Teknologi Malaysia
APRIL, 2005
“I declare that this thesis entitled GENDER ESTIMATION BASED ON FACIAL IMAGE is the result of my own research except as cited in the references. This thesis has not been accepted for any degree and is not concurrently submitted in candidature of any degree.”

Signature: …………………
Name of Candidate: Azlin binti Yajid
Date: 4th April 2005
Specially dedicated to my family for their support and eternal love.
ACKNOWLEDGEMENTS
Praise be to Allah, the Most Gracious and Most Merciful, Who has created mankind with knowledge, wisdom and power.

First of all, the author would like to express her deepest gratitude to Associate Professor Dr. Syed Abd. Rahman Al-Attas for his continuous support, ideas, supervision and encouragement during the course of this project. The author would not have completed this project successfully without his assistance.

The author is thankful to Mr. Anuar Zaini and his wife, Mr. Mohamad Nansah, Ms. Syakira, Ms. Norasiah and Ms. Ismahani for their advice and helpful cooperation during the period of this research. Appreciation also goes to those who have contributed directly or indirectly to the completion of this project.

The author would also like to extend her appreciation to her family members for their support, patience and endless love.
ABSTRACT
Although gender classification has attracted much attention in the psychological literature, relatively few machine vision methods have been proposed, even though the problem has been studied extensively in the context of surveillance applications and biometrics. This project is mainly concerned with gender classification using purely image processing techniques, by extracting the differences between male and female facial features. Obviously, classification based on a single feature is not adequate, since humans share many facial properties even across gender groups, so multilayer processing is needed. The project works as expected within its specified scope. Although not many varieties of facial images, such as coloured hair, have been considered, the basic techniques should be just the same. The proposed methods can be extended to various purposes, especially speeding up database searching. Refinement of this project, on the other hand, can lead to more accurate and reliable results by considering other facial properties such as the eyes, nose and eyebrows.
ABSTRAK
Bidang pengecaman jantina telah menjadi satu topik yang diberikan perhatian dalam pengajian psikologi. Namun begitu, hanya sedikit pendekatan melalui teknik penglihatan mesin yang telah diperkenalkan. Bidang ini sebenarnya telah dipelajari secara mendalam dalam konteks keselamatan dan biometrik. Projek ini adalah berkisar tentang pengecaman jantina melalui teknik pemprosesan imej semata-mata. Ini dilakukan dengan mengenalpasti perbezaan di antara ciri-ciri muka lelaki dengan perempuan. Adalah terbukti bahawa pengkelasan berdasarkan satu ciri sahaja adalah tidak tepat memandangkan manusia mempunyai ciri-ciri muka yang hampir sama walaupun dari kelas jantina yang berbeza. Oleh kerana itu pengkelasan secara berperingkat diperlukan. Projek ini berjaya sepertimana yang diharapkan, berdasarkan skop yang telah ditetapkan. Walaupun tidak banyak jenis-jenis muka yang diambil kira, seperti warna rambut yang berlainan dari asal, teknik yang digunakan sepatutnya masih lagi sama. Kegunaan projek ini boleh dikembangkan kepada pelbagai tujuan, terutamanya untuk mempercepatkan proses pencarian dalam pangkalan data. Dengan sedikit pengubahsuaian, projek ini semestinya akan menghasilkan satu sistem yang lebih tepat dengan mengambil kira ciri-ciri muka manusia yang lain seperti mata, hidung dan kening.
LIST OF CONTENTS

CHAPTER      CONTENT

             DECLARATION
             DEDICATION
             ACKNOWLEDGEMENTS
             ABSTRACT
             ABSTRAK
             LIST OF CONTENTS
             LIST OF TABLES
             LIST OF FIGURES
             LIST OF NOTATIONS
             LIST OF EQUATIONS
             LIST OF ABBREVIATIONS
             LIST OF APPENDICES

CHAPTER I    INTRODUCTION
             1.1 Introduction to Face Recognition
             1.2 Problem in Face Recognition System
             1.3 Introduction to Gender Estimation
             1.4 Objective
             1.5 Scope of Project
             1.6 Project Outline

CHAPTER II   LITERATURE REVIEW
             2.1 Introduction
             2.2 Gender Estimation
             2.3 Proposed Processing Techniques
             2.4 Physical Differences Between Genders
             2.5 Basics of Image Processing
                 2.5.1 Histogram Equalization
                 2.5.2 Correlation
                 2.5.3 Grayscaling
                 2.5.4 Image Arithmetic Operations

CHAPTER III  METHODOLOGY
             3.1 Introduction
             3.2 Overall System
             3.3 Development Process
             3.4 Project Flow
                 3.4.1 Hair Detection
                 3.4.2 Ear Detection
                 3.4.3 Template Matching Based on Hairline Shape
                 3.4.4 Template Matching Based on Average Image
                 3.4.5 Template Matching Based on Facial Shape
             3.5 GUI Development

CHAPTER IV   RESULTS AND DISCUSSIONS
             4.1 Introduction
             4.2 Testing on the Images
             4.3 False Result
             4.4 Analysis on Overall Result
             4.5 Processing Time
             4.6 Discussion

CHAPTER V    CONCLUSION AND SUMMARY
             5.1 Summary
             5.2 Conclusion
             5.3 Recommendation and Future Works

             REFERENCES
             APPENDICES
LIST OF TABLES

TABLE   TITLE
2.1     Feature differences between male and female face
4.1     False detection on hair analysis
4.2     Overall result of gender estimation system
LIST OF FIGURES

FIGURE   TITLE
2.1      Lower part of face
2.2      Histogram equalization
3.1      Block diagram of project
3.2      Hair detection for male
3.3      Hair detection for female with scarf
3.4      Detection of ear for bald man
3.5      Detection of ear for female with white scarf
3.6      ‘m’ shape male hairline
3.7      ‘m’ shape detection for female
3.8      Average image template
3.9      Steps in skin color segmentation for template selection
3.10     Template for facial shape matching
3.11     Flowchart of GUI
3.12     Design of GUI figure
LIST OF NOTATIONS

ς_i      ith Gaussian basis function
c_i      Center
σ²       Variance
b        Bias term
ω        Weight coefficient
T(x,y)   Template of an image
S(x,y)   Region within the image
W        Width dimension
H        Height dimension
µ_T      Mean value of the template
µ_S      Mean value of the sub-image
M        Mask
LIST OF EQUATIONS

EQUATION   TITLE
2.1        Gaussian Basis Function
2.2        Correlation Coefficient
2.3        Image Addition
2.4        Image Subtraction
2.5        Absolute Difference of Two Images
2.6        Image Multiplication
3.1        Mean
LIST OF ABBREVIATIONS

GUI      Graphical User Interface
LIST OF APPENDICES

APPENDIX   TITLE
A          Matlab Codes
B          Function find_color
C          Function getcolor and make_rgb
CHAPTER I
INTRODUCTION
1.1 Introduction to Face Recognition
The face is one of the most important biometric features of a human. A human can recognize different faces without difficulty, yet it is a challenging task to design a robust computer system for face identification. The inadequacy of automated face recognition systems is especially apparent when compared with our own innate face recognition ability. Humans perform face recognition, an extremely complex visual task, almost instantaneously, and our recognition ability is far more robust than any computer's can hope to be. A human can recognize a familiar individual under very adverse lighting conditions and from varying angles or viewpoints.
While research into this area dates back to the 1960s, it is only very recently that acceptable results have been obtained. However, face recognition is still an area of active research, since a completely successful approach or model has not yet been proposed to solve the face recognition problem. Next-generation surveillance systems are expected to take the human face as an input pattern and extract useful information, such as gender, from it.
1.2 Problem in Face Recognition System
To date, most face recognition systems have handled at most a few hundred faces. This becomes a problem as the size of the database increases: a larger database means longer computational and processing time. Identifying gender can help a face recognition system focus on identity-related features and limit the number of entries to be searched in a large database, improving search speed. In other words, gender estimation is performed on the input image first, and recognition is then carried out only within the estimated gender group. Theoretically this method cuts the processing time almost in half.
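As a sketch of this idea (in Python rather than the project's Matlab, with a hypothetical database layout and a simple distance score standing in for the real matcher), restricting a search to the predicted gender group skips roughly half of the comparisons for a balanced database:

```python
import numpy as np

def search_database(probe, database, predicted_gender):
    """Compare the probe feature vector only against entries of the
    predicted gender; entries of the other gender are never scored."""
    candidates = [(name, feat) for name, gender, feat in database
                  if gender == predicted_gender]
    # Higher score = closer match (negated Euclidean distance).
    scores = [(name, -float(np.linalg.norm(probe - feat)))
              for name, feat in candidates]
    return max(scores, key=lambda s: s[1])  # best-matching identity
```

Only entries of the predicted gender are compared, which is where the near-halving of processing time comes from.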
1.3 Introduction to Gender Estimation
Gender classification based on facial images is difficult, mostly because of the inherent variability of the image formation process in terms of image quality and photometry, geometry, occlusion, change and disguise. The first attempts at gender classification were made in the early 1990s, when various neural network techniques were employed to classify the gender of a (frontal) face.

The interest in gender estimation is twofold. First, one can apply the gender estimation procedure prior to face recognition in order to split the face space into two. Second, because of the nature of the problem, one can apply the same methodology to other class-specific face processing tasks such as race and age estimation. Thus, by arriving at a robust gender estimation scheme, one can hope to propose solutions to similar face tasks as well.
1.4 Objective
One of the most challenging tasks in visual form (‘shape’) analysis and object recognition is understanding how people process and recognize each other's faces, and developing the corresponding computational models. The objective of this project is therefore to write Matlab code that can recognize the gender of a person from a given frontal image. The algorithm is a combination of various proposed methods along with some other features. Hopefully, this project can perform as well as other proposed gender classifiers.
1.5 Scope of Project
Gender classification of a person based only on a frontal view image is something a human can easily accomplish: it can be decided from the person's hair, nose, eyes, mouth and other properties with a relatively high degree of accuracy. However, it becomes a problem when the processing is automated with a computer program, and this project sets out to solve that. The gender estimation algorithm is implemented with the Matlab image processing tools. In this project it is assumed that the background of the facial image is not complex and that there is only a single face in it. Further, each image is assumed to be the same size, the image quality and resolution are assumed to be sufficient, the illumination is uniform and the input images are colour images. Transvestites (males/females who change their appearance to the opposite sex) are not considered in this project. However, no restrictions on clothing, glasses, make-up, hairstyle, beard, etc. are imposed.
1.6 Project Outline
The project is organized into five chapters. The outline is as follows:

Chapter 1 - Introduction
This chapter discusses the objectives and scope of the project and gives a general introduction to facial recognition and gender estimation technology.

Chapter 2 - Literature Review
This chapter covers previous work on facial detection, facial feature extraction and gender estimation. A few techniques are reviewed briefly, the major differences between male and female facial features are described, and some important image processing techniques are discussed.

Chapter 3 - Methodology
Chapter 3 elaborates the techniques and steps taken to complete the task. A few algorithms are proposed to be applied in this project.

Chapter 4 - Results
The final results of this project are shown and discussed in this chapter. Some analysis of the results and of each algorithm applied is also included.

Chapter 5 - Conclusion
This chapter consists of the conclusion of the project. It also describes the problems that arose and gives suggestions for future improvements and work.
CHAPTER II
LITERATURE REVIEW
2.1 Introduction
A number of methods have been proposed for gender estimation. They are not limited to image processing techniques; other types of approaches have also been applied. These are described briefly in this chapter.
2.2 Gender Estimation
Considering the face as an image pattern, it is a challenge to detect faces and segment them into their specific features, since faces can be very different yet still have the same basic structure and content. Gender classification based on facial images is difficult mostly because of the inherent variability of the image formation process, in terms of image quality, photometry and the facial features themselves. Few attempts have been made to perform gender classification. The most common ways of doing this are through these two schemes [2]:
a. Template based approach
This simple recognition technique is based on whole-image gray-level templates. The most direct of the matching procedures is the correlation technique. When attempting recognition, the unclassified image is compared with all the database images, returning a vector of matching scores. The unknown person is then classified as the one giving the highest cumulative score.

b. Feature based approach
Procedures that utilize various properties of a face (such as facial topology, hair, etc.) as the code for the face. The idea is to extract the relative positions and other parameters of distinctive features such as the eyes, mouth, nose and chin [1,2].
2.3 Proposed Processing Techniques

By making use of the above approaches, various processing techniques have been proposed in previous work. Some well-known methods are described briefly below.
a. Support Vector Machine (SVM)
A Support Vector Machine is a learning algorithm for pattern classification, regression and density estimation. The basic training principle behind SVMs is finding the optimal linear hyperplane such that the expected classification error for unseen test samples is minimized, i.e., good generalization performance [3].
b. Radial Basis Function Networks
A Radial Basis Function (RBF) network is also a kernel-based technique for improved generalization, but it is based instead on regularization theory. A typical RBF network with K Gaussian basis functions is given by

    f(x) = Σ_{i=1}^{K} ω_i ς(x; c_i, σ_i²) + b        (2.1)

where ς_i is the ith Gaussian basis function with center c_i and variance σ_i², b is a bias term, and the weight coefficients ω_i combine the basis functions into a single scalar output value [3].
c. Principal Component Analysis (PCA)
The main idea of this method is to obtain the features of the face in a mathematical sense, instead of the physical face features, by using mathematical transforms [15]. In brief, the algorithm finds the principal components of the covariance matrix of a set of face images. These vectors can be thought of as a set of features which together characterize the variation between human face images. Each face image in the training set can be represented exactly as a linear combination of the eigenfaces.
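As an illustration of the RBF network in equation (2.1), the sketch below (Python rather than Matlab; the function name and the exact Gaussian normalization are assumptions, since the source does not spell out the basis function) evaluates f(x) for given centers, variances, weights and bias:

```python
import numpy as np

def rbf_predict(x, centers, variances, weights, bias):
    """Evaluate a Gaussian RBF network as in Eq. (2.1):
    f(x) = sum_i w_i * exp(-||x - c_i||^2 / (2 * var_i)) + b
    (the 1/(2*var) scaling is one common convention, assumed here)."""
    x = np.asarray(x, dtype=float)
    phi = np.array([np.exp(-np.sum((x - c) ** 2) / (2.0 * v))
                    for c, v in zip(centers, variances)])
    return float(weights @ phi + bias)  # weighted sum of basis responses
```

For classification, the sign or threshold of the scalar output would decide the class.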
2.4 Physical Differences Between Genders
Based on medical research and human anatomy, it is known that males and females show some distinctions in their physical appearance, although there are also overlapping features: many handsome actors, on close examination, have some feminine facial characteristics, while many supermodels have some very male characteristics.

Table 2.1 summarizes the basic differences between male and female faces, but of course the degree of masculinity or femininity varies from person to person. It is important to remember that no single feature makes a face male or female; it is the number of masculine or feminine features that counts [4,9].
Table 2.1: Feature differences between male and female face

Hairline: The male hairline is usually higher than the female one and tends to have an “M” shape that recedes at the temples. High hairlines are often seen in beautiful women but, again, this must be balanced against the number and degree of any other masculine features.

Forehead: This is one of the most important gender markers. The ridge of bone that runs right across the forehead just above the eyes (often referred to as “brow-bossing”) is almost always far more pronounced in males. In fact, many genetic women have almost no discernible brow-bossing at all; their foreheads tend to be gently rounded overall with a fairly flat front, and in profile they tend to be vertical rather than backwards-sloping. The bossing above the eyes is called the orbital rim and is made of solid bone, but the bulge towards the middle of the brow ridge covers the hollow area of the frontal sinuses, and the bone here can vary a lot in thickness.

Eyebrows: Male eyebrows tend to be fairly straight and thick and sit right on or just under the orbital rims. Female eyebrows generally sit higher, just above the rims, and usually have a more arched shape.

Nose: The female nose tends to be smaller and shorter, with a narrower bridge and nostrils than the male one. Female noses also often have a more concave profile (like a ski jump) and tend to be blunter at the tip, but noses can vary a great deal according to ethnic background (see “Ethnic variations” below).

Cheeks: Female cheeks tend to be fuller and more rounded than male ones, and the cheekbone itself tends to stand a little higher and further forward. Men often have hollow cheeks; this is partly because of the flatter cheekbones but also because their cheeks carry less fat. The effect can be emphasised by the fact that the male upper jaw often protrudes forward.

Top lip length: The distance between the base of the nose and the top lip is usually longer in males (sometimes a lot longer). When a woman's mouth is relaxed and slightly open she usually shows 2-4 mm of her top teeth. This is considered attractive and also lends a youthful appearance to the face. The whole section of skin between the top lip and the nose often has a more backward slope in females.

Lip shape: Female lips are often fuller than male ones and tend to be bigger in proportion to the rest of the face. Female mouths also tend to be narrower.

Chin: Female chins tend to be rounded, while male chins tend to be wider with a flat base and two corners forming a more square shape. Male chins are also a lot taller and heavier and are more likely to have a vertical cleft in the middle.

Jaw: Figure 2.1 shows that the male jawbone is usually much more heavily built throughout than the female one. Looking at a male face from the front, the bottom third tends to be wider; this is partly because the jawbone itself is wider and partly because the muscles that attach to the corners of the jawbone are much bigger in males. The line of the jaw in females tends to run in a gentle curve from the earlobe to the chin, but in males it tends to drop straight down from the ear and then turn at a sharp angle towards the chin, giving a square appearance.

Adam's apple: The Adam's apple is an important male marker because, while it is rarely visible in females, it is usually visible in males and can often be very prominent.

Overall shape: An attractive female face tends to be roughly heart-shaped, with the two rounded corners of the hairline at the top coming down to the single point of the chin. The lower third of the male face is usually longer because of the long top lip and tall chin. Male faces overall have a more square appearance, with the two corners of the “M”-shaped hairline at the top coming down to the wide, square-cornered jaw at the bottom. In profile the female face tends to be fairly flat, while in men the forehead will often slope backwards and the lower half of the face will often protrude forward.

Ethnic variations: Generally the differences between male and female faces mentioned above apply to all ethnic groups. However, different ethnic backgrounds can give different advantages and disadvantages for the determination of feminisation. For example, people of African descent are likely to already have full lips but may also have a strong brow ridge, while people of East Asian descent will often have only a moderate brow ridge but may also have very prominent jaw angles. As mentioned before, noses can be very variable between different ethnic groups, and this is something that one should particularly watch out for.
Figure 2.1: Lower part of face
2.5 Basics of Image Processing

This part focuses on some of the important image processing techniques applied in this project. Besides these, a few things need to be taken into consideration. One of them is the features of the image itself; size, colour and shape are some commonly used features. Things to be considered are:
• Feature extraction
Which features of the facial image are to be detected, and how reliable they are. Most features can be computed from two-dimensional images, but they are related to the three-dimensional characteristics of objects. Due to the nature of the image formation process, some features can be computed easily and reliably while others are very difficult.

• Feature matching
One of the best methods for gender estimation is to match against a model built from samples. In most object recognition tasks there are many features and numerous objects. An exhaustive matching approach will solve the recognition problem but may be too slow to be useful. The effectiveness of the features and the efficiency of the matching technique must both be considered in developing a matching approach.
2.5.1 Histogram Equalization

Histogram equalization is one of the most widely used techniques in image processing applications. It improves contrast; the goal of histogram equalization is to obtain an approximately uniform histogram. The technique can be used on a whole image or just on part of an image.

Histogram equalization will not “flatten” a histogram; it redistributes the intensity distribution. If the histogram of an image has many peaks and valleys, it will still have peaks and valleys after equalization, but they will be shifted. Because of this, “spreading” is a better term than “flattening” to describe histogram equalization.
Because histogram equalization is a point process, no new intensities are introduced into the image. Existing values are mapped to new values, but the actual number of intensities in the resulting image will be equal to or less than the original number of intensities. Figure 2.2 shows histogram equalization applied to an image.

Figure 2.2: Histogram equalization. (a) Original image; (b) histogram of original image; (c) equalized image; (d) histogram of equalized image.
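The intensity mapping that histogram equalization applies can be sketched in a few lines of NumPy (the thesis itself uses Matlab, where the Image Processing Toolbox's `histeq` serves this purpose; this Python version is illustrative only):

```python
import numpy as np

def equalize_histogram(gray):
    """Histogram-equalize an 8-bit grayscale image (2-D uint8 array).

    The cumulative distribution of intensities is used as a monotonic
    mapping to new intensities: a point process, so pixels are remapped
    individually and no new spatial structure is introduced.
    """
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = np.cumsum(hist)
    cdf_min = cdf[np.nonzero(hist)[0][0]]          # cdf at lowest occupied bin
    scale = 255.0 / (gray.size - cdf_min) if gray.size > cdf_min else 1.0
    mapping = np.clip(np.round((cdf - cdf_min) * scale), 0, 255).astype(np.uint8)
    return mapping[gray]                           # apply the lookup table
```

A low-contrast image (values clustered in a narrow band) is spread across the full 0-255 range, while the relative ordering of intensities is preserved.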
15
2.5.2 Correlation
Correlation can be used to determine the existence of a known shape in an image. Classical correlation takes into account the mean of the template and of the image area under the template, as well as the spread of values in both; for this reason correlation techniques are widely applied to template matching.

Template matching itself is a fundamental approach to face detection. The basic idea of the approach is to compute the statistical similarity between a predefined template of the face and regions within the image. A useful statistical measure is the correlation between the template and a region of the image, determined by computing a correlation coefficient. Let T(x, y) denote the template and S(x, y) denote the region within the image I(x, y) at which the correlation is computed. T(x, y) and S(x, y) have the same dimensions, namely width W and height H. The correlation coefficient R is given by the following equation:

    R = Σ_{x=0}^{W} Σ_{y=0}^{H} (T(x, y) − µ_T)(S(x, y) − µ_S)
        / √[ Σ_{x=0}^{W} Σ_{y=0}^{H} (T(x, y) − µ_T)² · Σ_{x=0}^{W} Σ_{y=0}^{H} (S(x, y) − µ_S)² ]        (2.2)

where µ_T is the mean value of the template and µ_S is the mean value of the sub-image. Regions of the image where a high value of R is obtained are determined to be candidate face regions. Template matching is a simple approach to face detection, but suffers from the following disadvantages:
• A single template cannot model the large range of variations of the human face. This requires the use of multiple templates, which increases the complexity of the approach.

• Template matching is scale-dependent, i.e., it works best when the face region to be detected has a spatial extent very similar to that of the template. If detection at multiple scales is desired, the template needs to be resized, which affects detection performance. If templates at multiple scales are employed to avoid resizing, the computational complexity of the method increases.
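Equation (2.2) can be sketched directly (a Python/NumPy stand-in for the Matlab implementation; the function name is hypothetical):

```python
import numpy as np

def correlation_coefficient(template, region):
    """Normalized correlation coefficient R between a template T and an
    equally-sized image region S (Eq. 2.2): both are mean-centred, and
    their cross-correlation is divided by the product of their spreads.
    Returns a value in [-1, 1]; values near 1 indicate a close match."""
    T = np.asarray(template, dtype=float)
    S = np.asarray(region, dtype=float)
    dT = T - T.mean()              # T(x,y) - mu_T
    dS = S - S.mean()              # S(x,y) - mu_S
    denom = np.sqrt((dT ** 2).sum() * (dS ** 2).sum())
    return float((dT * dS).sum() / denom)
```

Sliding this over every region of the image and keeping the locations with high R yields the candidate face regions described above.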
2.5.3 Grayscaling

Grayscaling removes the colour values of an image by eliminating the hue and saturation information while retaining the luminance; a grayscale image can therefore also be called an intensity image. Most of the image processing done in Matlab requires the image to be in grayscale. Grayscale was also chosen because it drastically increases the speed of the algorithm.
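A minimal sketch of the conversion, assuming the usual luminance weighting (approximately what Matlab's `rgb2gray` applies):

```python
import numpy as np

def rgb_to_gray(rgb):
    """Convert an RGB image (H x W x 3 array) to a grayscale intensity
    image using the standard ITU-R BT.601 luminance weights:
    Y = 0.299 R + 0.587 G + 0.114 B. Returns a float H x W array."""
    rgb = np.asarray(rgb, dtype=float)
    weights = np.array([0.299, 0.587, 0.114])
    return rgb @ weights  # weighted sum over the colour channels
```

The weights reflect the eye's differing sensitivity to the three channels, so the result preserves perceived brightness rather than a plain channel average.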
2.5.4 Image Arithmetic Operations

Image arithmetic is the implementation of standard arithmetic operations, such as addition, subtraction, multiplication and division, on images. Image arithmetic has many uses in image processing, both as a preliminary step in more complex operations and by itself.

Examples of commonly used arithmetic operators applied between two (or more) images include:
• Addition (double exposures)
Double exposures are a commonly used trick in photography. The effect can be simulated by adding two images together:

    C[x, y] = A[x, y] + B[x, y]        (2.3)

There are two common ways of doing this. One is simply to add the two together directly; if two images are added this way, it is not uncommon for many of the values to exceed the image range (e.g., 255), and these values are simply clipped to the maximum value allowed. To avoid this, the other approach is to add some fraction of one image to some fraction of the other, e.g., half of each; this ensures that the result stays within the range of the image and allows control of the relative weighting of the two images.

• Subtraction (image differences)
A common task in image processing is to determine the difference between two images A and B. This can often be done by subtracting the images:

    C[x, y] = A[x, y] − B[x, y]        (2.4)

Of course, this can produce negative numbers, so a more useful form is the absolute difference of the two (subtract and then apply an absolute-value operator to the result):

    C[x, y] = |A[x, y] − B[x, y]|        (2.5)

• Multiplication (masking, transparency)
Another arithmetic operation is to multiply one image by another. If one image consists of 0 and 1 values, it acts as a mask M[x, y] on the other image: pixels with value 0 produce a black result, and those with value 1 transfer the pixel value of the first image to the output. This can be used to “cut out” portions of an image. Masking can be combined with addition by using the mask on one image and the inverse mask on the other:

    C[x, y] = M[x, y]A[x, y] + (1 − M[x, y])B[x, y]        (2.6)

This allows the insertion of part of one image into another. By allowing mask values between 0 and 1 (i.e., a floating-point image), the two images can be blended at the edges of the masked areas to reduce the artifacts of the composition. This approach is called alpha blending, because the mask value is usually written as α.

• Averaging multiple images (noise reduction)
An extension of image addition is to average multiple images together. This is useful when the images are of the same stationary scene and the only changes are due to noise. If the noise is itself zero-mean, averaging multiple frames together reduces its effect. Statistically, this is the same as using the average of multiple samples to provide a better estimate of a population mean.
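The operations above can be sketched on toy arrays (a NumPy stand-in for Matlab's `imadd`, `imabsdiff` and `immultiply`; the 2x2 “images” are arbitrary):

```python
import numpy as np

# Toy 2x2 "images"; any same-shape arrays work.
A = np.array([[200.0, 10.0], [30.0, 40.0]])
B = np.array([[100.0, 50.0], [60.0, 70.0]])
M = np.array([[1.0, 0.0], [0.0, 1.0]])   # binary mask

added = np.clip(A + B, 0, 255)           # Eq. 2.3, clipped to the 8-bit range
half_sum = 0.5 * A + 0.5 * B             # weighted addition, never overflows
abs_diff = np.abs(A - B)                 # Eq. 2.5, absolute difference
composite = M * A + (1 - M) * B          # Eq. 2.6, masked composition
average = (A + B) / 2.0                  # two-frame average for noise reduction
```

Note that `added[0, 0]` saturates at 255 (200 + 100 exceeds the range), while the half-sum of the same pixels stays in range at 150.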
CHAPTER III
METHODOLOGY
3.1 Introduction

This chapter focuses mainly on the algorithms used and their implementation in this project. Also included is a review of the system requirements and the software implementation.
3.2 Overall System

This project deals with software development using Matlab Version 6.5.1 on the Windows XP platform. The reasons for using Matlab are its simplicity and its wide range of image processing implementations. The Image Processing Toolbox is used the most; this toolbox is a collection of functions that extends the capability of the Matlab numeric-computing environment. The other advantage is that Matlab comes with a Graphical User Interface (GUI) toolbox that makes the program easy to use. The other additional software used is Adobe Photoshop Version 6.0. The system requirement for this program to work in acceptable processing time is a Pentium II with 128 MB RAM or above, with at least 4 GB of hard disk space, on a Windows platform (Windows 98 and above). Obviously the processing time will vary depending on the processor speed and available memory.
3.3 Development Process

The first step of the development is sampling, i.e., collecting facial images. The images are restricted to the facial area with an empty background; however, no restrictions on clothing, glasses, make-up, hairstyle, beard, etc. are imposed. The images were resized and the backgrounds removed using Adobe Photoshop. Only then did the code writing begin. The program was first written based on 40 facial images (20 male and 20 female) and then tested with additional images different from the collected ones. Details of the program development are described in the next part of this chapter.
22
3.4  Project Flow

[Flowchart: the colour facial image is converted from RGB to grayscale and histogram-equalized, then passed to hair detection. A large hair area leads to average-image matching and, where needed, hairline-shape matching to separate males from free-hair females; an almost empty hair area triggers ear detection to separate a bald man from a female with a white scarf; intermediate cases go to facial-shape matching to identify a female with a scarf. The template-matching results feed the final classifier.]

Figure 3.1: Block diagram of project
Figure 3.1 shows the block diagram of the project. The shaded boxes show the algorithms implemented for classification. Differentiating a female with a scarf from a male or a free-hair female is quite an easy task; the challenge is differentiating a free-hair female from a male. Multilayer classification is therefore needed, as indicated in the figure. Each algorithm applied is described below.

Template matching is chosen as the primary technique in this gender estimation project. A suitable template is needed, which is in effect a 'general' face that has the features of a face without much detail. The matching process consists of several parts, as indicated in Figure 3.1. The next sections briefly describe each matching technique applied.
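As a rough sketch, the whole cascade can be written as a single decision function. This Python version (the thesis itself uses Matlab) mirrors the classifier in Appendix A; the cut-off values approximate those quoted in this chapter and in the appendix (which use slightly different hair-area thresholds), and the function and argument names are hypothetical:

```python
def classify(hair_area, ear_area, avg_corr, mshape_corr, shape_corr):
    """Multilayer cascade mirroring the classifier in Appendix A.

    hair_area   : 'on'-pixel count from hair detection (upper region)
    ear_area    : skin-pixel count from ear detection (side strip)
    avg_corr    : correlation peak against the male average-image template
    mshape_corr : correlation peak against the 'm'-shape hairline template
    shape_corr  : correlation peak against the oval facial-shape template
    """
    if hair_area > 1300:                  # hair present: male or free-hair female
        if avg_corr < 0.5:
            return "free-hair female"
        return "male" if mshape_corr > 0.6 else "free-hair female"
    elif hair_area < 50:                  # no hair: bald man or white scarf
        return "bald man" if ear_area > 50 else "female with white scarf"
    else:                                 # moderate hair area: scarf check
        return "female with scarf" if shape_corr >= 0.5 else "free-hair female"

print(classify(2000, 0, 0.9, 0.7, 0.0))   # male
```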
3.4.1  Hair Detection

This part detects the presence of hair, focusing on the upper part of the facial image. The RGB image is first converted to grayscale and then histogram-equalized. The processed image is cropped to the upper part of the face, as shown in Figures 3.2 and 3.3. Based on observation of the collected samples, the colour of interest (the hair colour) is selected, returning a binary image in which pixel value '1' indicates a selected pixel and the rest are '0'.

The area of 'on' pixels (value '1') is then calculated: the greater the value, the more likely a hair area has been detected. Observation was again used to set the threshold that separates males and free-hair females from females with a scarf. The selected thresholds are as follows:
Area of 'on' pixels > 1300 → male (or free-hair female)
Area of 'on' pixels < 1300 → female with scarf
Area of 'on' pixels < 50 → bald man or female with white scarf
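A minimal Python/NumPy sketch of this step (not the thesis's MATLAB code), using a synthetic image instead of a real photograph; the grey-level range [0, 60], the region size and the 1300-pixel threshold follow the values used in this section:

```python
import numpy as np

def hair_area(gray, low=0, high=60, roi=(slice(0, 48), slice(0, 115))):
    """Area of 'on' pixels: count pixels in the upper region whose grey
    level lies in the hair-colour range [low, high]."""
    window = gray[roi]                          # region of interest (top of image)
    mask = (window >= low) & (window <= high)   # binary image, 1 = hair colour
    return int(mask.sum())

# Synthetic 156x115 'face': skin-like background with a dark band on top.
img = np.full((156, 115), 180, dtype=np.uint8)
img[:20, :] = 30                                # dark 'hair' rows
area = hair_area(img)
label = "male or free-hair female" if area > 1300 else "female with scarf"
print(area, label)                              # 2300 male or free-hair female
```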
Figure 3.2: Hair detection for male. (a) Male grayscaled image; (b) region of interest; (c) detected hair area.
Figure 3.3: Hair detection for female with scarf. (a) Female grayscaled image; (b) region of interest; (c) detected hair area. Note the fewer 'on' pixels compared with Figure 3.2(c).
3.4.2  Ear Detection

This part differentiates between a female with a white scarf and a bald person: in both cases no hair area is detected, so some other cue is needed to classify them. The easiest way is ear detection, since obviously no ear will be detected for a female with a scarf. This is done simply by selecting a small region on the left or right of the image and detecting skin there, as shown in Figures 3.4 and 3.5.
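The same idea in a hedged Python/NumPy sketch: the reference skin colour (144, 113, 93), the tolerance of 70 and the side-strip geometry are taken from the find_color call in Appendix A, while the synthetic test image is made up for illustration:

```python
import numpy as np

def ear_detected(rgb, ref=(144, 113, 93), tol=70,
                 strip=(slice(25, 127), slice(0, 15))):
    """Count skin-coloured pixels in a narrow side strip; skin present
    suggests 'bald man', none suggests 'female with white scarf'."""
    region = rgb[strip].astype(int)
    mask = np.all(np.abs(region - np.array(ref)) < tol, axis=-1)
    return int(mask.sum()) > 50          # area threshold from the classifier

# Synthetic test: a dark image with a skin-coloured patch at the left edge.
img = np.zeros((156, 115, 3), dtype=np.uint8)
img[40:80, 0:10] = (150, 110, 90)        # hypothetical 'ear' patch
print(ear_detected(img))                 # True
```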
Figure 3.4: Detection of ear for a bald man. (a) Bald man grayscaled image; (b) region of interest for ear detection; (c) detected ear area.
Figure 3.5: Detection of ear for a female with a white scarf. (a) Female with white scarf grayscaled image; (b) region of interest for ear detection; (c) no ear detected.
3.4.3  Template Matching Based on Hairline Shape

The male hairline is also considered in this matching process, since most males have an 'm-shaped' hairline. A region (window) of interest is set at the hairline area, and processing is done only within this area. The hair area is selected and its pixels marked as '1'. Instead of calculating the area, a matching process is performed against a hairline template.

The template is taken from one of the male samples that clearly shows the 'm' shape of the hairline. Figure 3.6 shows some male hairlines; note that they look somewhat like an 'm'. However, this property does not apply to all male faces. Meanwhile, Figure 3.7 shows that the female hairline is indeed different from the male hairline; one of the examples shows a false detection that is classified as male. The result of this matching is the normalized cross-correlation of the template and the facial image, and a threshold is fixed for classification. As the block diagram in Figure 3.1 shows, this algorithm is not applied to females with a scarf.
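Normalized cross-correlation itself can be sketched directly. The following Python/NumPy version is a minimal stand-in for Matlab's normxcorr2 (valid template positions only, brute force rather than FFT based), run on a toy binary 'm'-shape rather than a real hairline:

```python
import numpy as np

def normxcorr(template, image):
    """Peak normalised cross-correlation of a template over an image
    (valid positions only); both inputs are 2-D float arrays."""
    th, tw = template.shape
    t = template - template.mean()
    tnorm = np.sqrt((t ** 2).sum())
    best = -1.0
    for i in range(image.shape[0] - th + 1):
        for j in range(image.shape[1] - tw + 1):
            w = image[i:i + th, j:j + tw]
            wz = w - w.mean()
            denom = tnorm * np.sqrt((wz ** 2).sum())
            if denom > 0:
                best = max(best, float((t * wz).sum() / denom))
    return best

# An exact copy of the template inside the image correlates perfectly.
tmpl = np.array([[1, 0, 1], [1, 0, 1], [1, 1, 1]], dtype=float)  # toy 'm' shape
img = np.zeros((8, 8)); img[2:5, 3:6] = tmpl
score = normxcorr(tmpl, img)
print(round(score, 3))   # 1.0 at the matching position
```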
Figure 3.6: ‘m’ shape male hairline
Figure 3.7: 'm' shape detection for females. (a) and (b) Female hairlines; (c) false detection, classified as male.
3.4.4  Template Matching Based on Average Image

This part classifies between males and free-hair females. For this purpose a few male facial images were used, and an averaging technique was applied to form the average image used as a template. The mean of the samples was calculated using the standard definition of the average value:

    mean = (1/N) * sum_{n=1}^{N} m_n                                (3.1)

where N is the number of samples and m_n is the n-th sample image. An average filter is then used to remove details such as eyes, nose and mouth.
Figure 3.8 shows the 'normal-looking' male images used to form the average image; Figure 3.8(d) is the average of those images. The details have been removed, leaving the short hair shape and some eyebrow curvature. By applying the correlation (matching) process, this method fulfils the task described above.
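A small Python/NumPy sketch of this averaging step (not the thesis's MATLAB code), combining Eq. (3.1) with a mean (box) filter; the image sizes and grey levels are synthetic:

```python
import numpy as np

def average_template(images, k=5):
    """Average several face images (Eq. 3.1) and smooth with a k x k mean
    filter so that fine details (eyes, nose, mouth) wash out."""
    mean_img = np.mean(np.stack(images).astype(float), axis=0)
    h, w = mean_img.shape
    out = np.empty((h - k + 1, w - k + 1))      # 'valid' box filter, no padding
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = mean_img[i:i + k, j:j + k].mean()
    return out

# Three flat synthetic 'faces' with different brightness average to 90.
faces = [np.full((20, 20), v, dtype=np.uint8) for v in (60, 90, 120)]
tmpl = average_template(faces)
print(tmpl.shape, float(tmpl[0, 0]))            # (16, 16) 90.0
```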
Figure 3.8: Average image template. (a)-(c) Male facial images used to form the average image; (d) male average image.
3.4.5  Template Matching Based on Facial Shape

This matching technique refines the classification between males and females. The template shape was chosen purely by observation and trial and error on skin-area images. The chosen shape is a clean oval, which can be found in most images of females with a scarf. However, free-hair females do not have this kind of shape, since the hairline at their temples (upper part of the head) is closer to an 'n' shape than to the 'm' shape found on men.
Figure 3.9 shows the steps taken to perform skin colour segmentation in order to produce the skin-area images; only then is the template chosen, as shown in Figure 3.10.

[Flow: skin segmentation — create a skin colour model (RGB to YCbCr conversion, low-pass noise removal, skin region model), then detect the skin area on the image; the template for matching is selected by observation of the collected images.]

Figure 3.9: Steps in skin color segmentation for template selection
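The skin model behind these steps is a per-pixel Gaussian likelihood exp(-0.5 x' C^{-1} x) on the (Cb, Cr) chroma components, as in the Appendix A code. A self-contained Python/NumPy sketch (not the thesis's MATLAB code), using the standard BT.601 chroma approximation and a synthetic skin patch in place of the three sampled skin images:

```python
import numpy as np

def rgb_to_cbcr(rgb):
    """ITU-R BT.601 chroma components (full-range approximation)."""
    r, g, b = [rgb[..., i].astype(float) for i in range(3)]
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return cb, cr

def skin_likelihood(rgb, mean, cov):
    """Gaussian skin model in the (Cb, Cr) plane:
    likelihood = exp(-0.5 * x' * inv(cov) * x) per pixel."""
    cb, cr = rgb_to_cbcr(rgb)
    x = np.stack([cb - mean[0], cr - mean[1]], axis=-1)
    inv = np.linalg.inv(cov)
    m = np.einsum('...i,ij,...j->...', x, inv, x)   # per-pixel quadratic form
    return np.exp(-0.5 * m)

# Fit the model to a synthetic 'skin patch', then score the same colour.
patch = np.full((4, 4, 3), (150, 110, 90), dtype=np.uint8)
cb, cr = rgb_to_cbcr(patch)
mean = (cb.mean(), cr.mean())
cov = np.cov(cb.ravel(), cr.ravel()) + 1e-6 * np.eye(2)  # regularise flat patch
like = skin_likelihood(patch, mean, cov)
print(round(float(like.max()), 3))   # 1.0
```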
Figure 3.10: Template for facial shape matching
3.5  GUI Development

The core of this project is the ability to classify images by gender through multilayer classification; however, the end user does not need to see the code or the algorithms that run before the result is presented. The best way to present the result is through a graphical user interface (GUI), which was also built with Matlab 6.5.1 using the toolbox known as the Graphical User Interface Development Environment (GUIDE). Figure 3.11 shows the flowchart of the GUI program and Figure 3.12 shows the GUI layout.
[Flowchart: Start → select image → display name of selected image → load image → display selected image → estimate gender → display result → close → End.]

Figure 3.11: Flowchart of GUI program
Figure 3.12: Design of GUI figure
The function of each element is as follows:
1. Image selection from saved images
2. Selected image file will be displayed here
3. Load selected image to image axes (4)
4. Display selected image
5. Invoke gender estimation program
6. Estimated gender will be displayed here
7. Close program
CHAPTER IV
RESULTS AND DISCUSSIONS
4.1  Introduction

To verify that the system works as expected, experiments were run on as many facial pictures as possible within the stated scope; only then can the accuracy of the system be known. This chapter presents the results of testing each proposed algorithm one by one, and includes results for the overall system tested on a fair number of images.
4.2  Testing on the Images

All of the collected facial images were taken either with a digital camera or a scanner. These images were resized to 1.6 inches × 2.7 inches and then fed to the system. Tests were also carried out on images taken from video and television, but these gave unacceptable results, probably because of poor illumination and low image resolution. Therefore only images from a digital camera or a scanner are considered.
4.3  False Results

Unlike methods proposed by previous researchers, this project is based mainly on observation of the collected samples: values are recorded, comparisons are made, and finally a threshold is set for classification. However, it is not as easy as it sounds. Some samples are harder to classify because they mix male and female features. Some of the false classifications for each test are as follows:
i. Hair detection

Table 4.1 shows examples of false classification based on hair detection. In most such facial images, this algorithm cannot distinguish free-hair females from males. The main concept is to classify by the threshold value described in Chapter III.
Table 4.1: False detection on hair analysis

Categories             Test Image   Processed Image   Result
(a) Free hair female   [image]      [image]           Male
(b) Bald man           [image]      [image]           Female
(c) Free hair female   [image]      [image]           Male
ii. Ear detection

The application of this algorithm is very limited: it applies only to bald men and females with a white scarf. Since the ear of a bald man is easily detected, the error here is minimal.

iii. Matching

There are three matching mechanisms in this system: matching based on the average image, matching based on facial shape, and matching based on hairline shape, all applied using correlation. Each of them has problems classifying between free-hair females and males. Figure 3.7(c), for example, shows a false detection on a female hairline: the 'm' shape was found on this female facial image as well.
4.4  Analysis of Overall Result

Table 4.2 shows the overall result of the system, which combines all the proposed algorithms. Note that with this combination a relatively good classifier for gender estimation has been developed: out of 56 facial images, the classifier correctly classified 55, which is 98% of the tested images.
Table 4.2: Overall result of gender estimation system

Gender Group        Total Images   False Estimations   Accuracy
Male                20             1                   95%
Free Hair Female    16             0                   100%
Female With Scarf   20             0                   100%
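The accuracy figures above reduce to simple arithmetic; a quick Python check, with the group counts taken from Table 4.2:

```python
# Per-group and overall accuracy for the counts reported in Table 4.2.
totals = {"male": 20, "free-hair female": 16, "female with scarf": 20}
errors = {"male": 1, "free-hair female": 0, "female with scarf": 0}

for group, n in totals.items():
    acc = 100 * (n - errors[group]) / n
    print(f"{group}: {acc:.0f}%")

overall = 100 * (sum(totals.values()) - sum(errors.values())) / sum(totals.values())
print(f"overall: {overall:.1f}%")               # 55 of 56 -> 98.2%
```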
4.5  Processing Time

The time taken to recognise an image and display the result varies with the algorithms applied to it; the average estimation time is in the range of 17 to 20 seconds per image. Compared with a database search system, this program is therefore not good in terms of processing time; minimising the code might be a good solution. The processing time is also influenced by the machine speed.
4.6  Discussion

A few problems arose during the development of this system. First and foremost is data collection. Most of the samples, taken by digital camera, are UTM students: males, free-hair females and females with a scarf. Because of this, no bald images and no blonde- or white-haired images were considered, which can be seen as a drawback since the sample collection lacks variety. Nevertheless, for Asian subjects the system is adequate.

Capturing images from CD or television did not help much, since the image quality is not sufficient: due to poor illumination, a significant part of the face may not be visible. The Internet, on the other hand, is a good source, but some restrictions still need to be met, such as colour images with a perfect frontal view; moreover, the quality of some downloadable images is not sufficient for this project.
Studies have shown that human facial structure differs between males and females; this finding is known as the golden ratio rules, in that certain ratios between facial features differ between male and female faces. However, implementing this theory here is impractical, since a slightly different measurement can lead to very different results. The best way is therefore to focus on obvious features such as hair, scarf wearing and facial shape. Estimating female faces may seem more difficult than male faces, but the actual problem is distinguishing free-hair females from males; using a special property of male faces, such as the 'm'-shaped hairline, is very helpful here.
CHAPTER V
CONCLUSION AND SUMMARY
5.1  Summary

Gender estimation requires multilayer processing; estimation based on a single feature is not adequate. Various image processing techniques were applied, the most important of which is the matching process.

What has been done in this project is a combination of new ideas and methods proposed by other researchers. Unlike previous work, this project focuses on the hairline area and on the presence of hair or a scarf, which is a great advantage in the sense that these are obvious properties of a gender.

Several issues must still be addressed, such as false detections and failures in feature detection. Solving them needs additional techniques, processing steps and feature analyses. This is not as easy as it sounds, since people of different genders often share similar feature properties. Despite all that, a gender estimation system is very helpful in enhancing the facial recognition systems already in use today.
5.2  Conclusions

In general, the objectives of this project have been achieved; the most important requirement is good image quality. The accuracy of this system reaches 98%.

Although the algorithms proposed in this project were designed for a specific application, their philosophy is quite general and can easily be extended to other applications. There is still room for refinement, particularly in using other facial features for gender estimation.
5.3  Recommendations for Future Work

This system has great potential in the sense that it can be extended to various uses, especially in image recognition and security. Its weakness, however, is that it is very sensitive to image quality and size, since most of the algorithms rely on matching.

Instead of applying this work to static facial images, it would be much more useful to extend it to real-time applications with no restriction on image angle or background. Some modification would then be needed, especially automatic detection of facial features such as the hairline or hair; the same algorithms can still be applied to achieve the desired result. Although this project produces quite a good classifier, the error will increase as the number of test images increases; perhaps the best solution is to consider other facial features for classification.

Finally, to broaden the usage of this work, other characteristics also need to be considered, for example white, coloured or blond hair, and the wearing of caps. Cross-dressing cases could also be seen as a new line of research, analysing various features in order to estimate gender.
APPENDIX A
Matlab Codes
function varargout = gui(varargin)
% GUI M-file for gui.fig
%    GUI, by itself, creates a new GUI or raises the existing singleton*.
%
%    H = GUI returns the handle to a new GUI or the handle to the
%    existing singleton*.
%
%    GUI('CALLBACK',hObject,eventData,handles,...) calls the local
%    function named CALLBACK in GUI.M with the given input arguments.
%
%    GUI('Property','Value',...) creates a new GUI or raises the
%    existing singleton*. Starting from the left, property value pairs are
%    applied to the GUI before gui_OpeningFunction gets called. An
%    unrecognized property name or invalid value makes property application
%    stop. All inputs are passed to gui_OpeningFcn via varargin.
%
%    *See GUI Options on GUIDE's Tools menu. Choose "GUI allows only one
%    instance to run (singleton)".
%
% See also: GUIDE, GUIDATA, GUIHANDLES

% Edit the above text to modify the response to help gui
% Last Modified by GUIDE v2.5 17-Feb-2005 16:01:10

% Begin initialization code - DO NOT EDIT
gui_Singleton = 1;
gui_State = struct('gui_Name',       mfilename, ...
                   'gui_Singleton',  gui_Singleton, ...
                   'gui_OpeningFcn', @gui_OpeningFcn, ...
                   'gui_OutputFcn',  @gui_OutputFcn, ...
                   'gui_LayoutFcn',  [], ...
                   'gui_Callback',   []);
if nargin & isstr(varargin{1})
    gui_State.gui_Callback = str2func(varargin{1});
end

if nargout
    [varargout{1:nargout}] = gui_mainfcn(gui_State, varargin{:});
else
    gui_mainfcn(gui_State, varargin{:});
end
% End initialization code - DO NOT EDIT
% --- Executes just before gui is made visible.
function gui_OpeningFcn(hObject, eventdata, handles, varargin)
set(gcf,'Color',[0.502,0,0.251])
% This function has no output args, see OutputFcn.
% hObject    handle to figure
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
% varargin   command line arguments to gui (see VARARGIN)
image_file = get(handles.inputedit,'String');
if ~isempty(image_file)
    im_original = imread(char(image_file)); % read the image
    set(handles.imageaxes,'HandleVisibility','ON'); % activate the axes for viewing
    axes(handles.imageaxes);  % make the image axes current
    image(im_original);       % show the image
    axis equal;               % show the image in its correct aspect ratio
    axis tight;               % fit the axis limits to the data
    axis off;                 % turn off all axis labelling
    % after showing the image, keep the axes unchanged until set 'ON' again
    set(handles.imageaxes,'HandleVisibility','OFF');
end

% Choose default command line output for gui
handles.output = hObject;

% Update handles structure
guidata(hObject, handles);

% UIWAIT makes gui wait for user response (see UIRESUME)
% uiwait(handles.figure1);
% --- Outputs from this function are returned to the command line.
function varargout = gui_OutputFcn(hObject, eventdata, handles)
% varargout  cell array for returning output args (see VARARGOUT)
% hObject    handle to figure
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Get default command line output from handles structure
varargout{1} = handles.output;
% --- Executes on button press in selectpush.
function selectpush_Callback(hObject, eventdata, handles)
str = uigetfile('*.jpg', 'Pick an image'); %display .jpg files
set (handles.inputedit,'String',str);
% hObject handle to selectpush (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
% --- Executes during object creation, after setting all properties.
function inputedit_CreateFcn(hObject, eventdata, handles)
% hObject handle to inputedit (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles empty - handles not created until after all CreateFcns called
% Hint: edit controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc
set(hObject,'BackgroundColor','white');
else
set(hObject,'BackgroundColor',get(0,'defaultUicontrolBackgroundColor'));
end
function inputedit_Callback(hObject, eventdata, handles)
% hObject handle to inputedit (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
% Hints: get(hObject,'String') returns contents of inputedit as text
%        str2double(get(hObject,'String')) returns contents of inputedit as a double
% --- Executes on button press in loadpush.
function loadpush_Callback(hObject, eventdata, handles)
% hObject handle to loadpush (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
global im_original;
image_file = get(handles.inputedit,'String'); % Get the string inputs in the inputedit
im_original=imread(char(image_file)); %read the image
if ~isempty(image_file)
set(handles.imageaxes,'HandleVisibility','ON');
axes(handles.imageaxes);
image(im_original);
axis equal;
axis tight;
axis off;
set(handles.imageaxes,'HandleVisibility','OFF');
end;
% --- Executes on button press in estimatepush.
function estimatepush_Callback(hObject, eventdata, handles)
% hObject handle to estimatepush (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
global ANSWER;
str=ANSWER;
set (handles.answeredit, 'String',str);
% --- Executes during object creation, after setting all properties.
function answeredit_CreateFcn(hObject, eventdata, handles)
% hObject handle to answeredit (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles empty - handles not created until after all CreateFcns called
% Hint: edit controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc
set(hObject,'BackgroundColor','white');
else
set(hObject,'BackgroundColor',get(0,'defaultUicontrolBackgroundColor'));
end
function answeredit_Callback(hObject, eventdata, handles)
% hObject handle to answeredit (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
% Hints: get(hObject,'String') returns contents of answeredit as text
%        str2double(get(hObject,'String')) returns contents of answeredit as a double
ans= get(handles.estimatepush,'String');
% --- Executes on button press in closepush.
function closepush_Callback(hObject, eventdata, handles)
% hObject handle to closepush (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
close all;
% --- Executes on button press in pushbutton5.
function pushbutton5_Callback(hObject, eventdata, handles)
% hObject handle to pushbutton5 (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
%select image
global im_original %set image as global
input=im_original;
%******************************************************
%1. test for upper part, check presence of hair or scarf
%******************************************************
inpgray=rgb2gray (input); %rgb to grayscale conversion
histinp=histeq(inpgray); %histogram equalization
%define window of interest
rectx =[1 1 115 48 ];
roix =imcrop(histinp,rectx);
%select color of interest from predefine window
%in this case from value 0-60 (to select hair area)
bwroix = roicolor(roix,0,60);
result2=bwarea(bwroix);
result2;
% %setting threshold for grouping
% if result2>1300 male else bald man or female with scarf
% %*************************************************
% %2.Skin segmentation
% %*************************************************
%get 3 skin sample (patch small area from different sample)
% convert each of the 3 skin samples to chromatic components and low-pass
% filter to remove noise.
im = imread('skin3.jpg');
imycc = rgb2ycbcr(im); %convert to ycbcr color space
lpf = 1/9 * ones(3); %average of graylevels of pixels in 3x3 neighbourhood
cb1 = imycc(:,:,2);
cb1 = filter2(lpf, cb1);
cb1 = reshape(cb1, 1, prod(size(cb1)));
cr1 = imycc(:,:,3);
cr1 = filter2(lpf, cr1);
cr1 = reshape(cr1, 1, prod(size(cr1)));
im = imread('skin2.jpg');
imycc = rgb2ycbcr(im);
lpf = 1/9 * ones(3);
cb2 = imycc(:,:,2);
cb2 = filter2(lpf, cb2);
cb2 = reshape(cb2, 1, prod(size(cb2)));
cr2 = imycc(:,:,3);
cr2 = filter2(lpf, cr2);
cr2 = reshape(cr2, 1, prod(size(cr2)));
im = imread('skin.jpg');
imycc = rgb2ycbcr(im);
lpf = 1/9 * ones(3);
cb3 = imycc(:,:,2);
cb3 = filter2(lpf, cb3);
cb3 = reshape(cb3, 1, prod(size(cb3)));
cr3 = imycc(:,:,3);
cr3 = filter2(lpf, cr3);
cr3 = reshape(cr3, 1, prod(size(cr3)));
% Create a matrix of all the skin samples.
cb = [cb1 cb2 cb3 ];
cr = [cr1 cr2 cr3 ];
% Find the mean and convergence of the r and b chromatic components
bmean = mean(cb);
rmean = mean(cr);
brcov = cov(cb,cr);
% Create a Gaussian distribution of the chromatic skin model.
colorchart = zeros(256);
for b = 0:255
for r = 0:255
x = [(b - bmean); (r - rmean)];
colorchart(b+1,r+1) = exp(-0.5* x'*inv(brcov)* x);
end
end
%
% %********************************************
% %3.Test with shape (best one)-observation base
% %********************************************
%
% Find the most likely skin regions in the image.
%select b3.jpg as template-differentiate between male and female with scarf
imtest = imread('b3.jpg');
imycbcr = rgb2ycbcr(imtest); %convert to imycbcr colorspace
dim = size(imtest);
skin1 = zeros(dim(1), dim(2));
inverse = inv(brcov);
for i = 1:dim(1)
for j = 1:dim(2)
cb = double(imycbcr(i,j,2));
cr = double(imycbcr(i,j,3));
x = [(cb-bmean); (cr-rmean)];
skin1(i,j) = exp(-0.5*x'*inverse*x);
end
end
lpf= 1/9*ones(3); %LPF -(Gonzalez pg 120)
skin1 = filter2(lpf,skin1);
skin1 = skin1./max(max(skin1));
% Adaptive Thresholding
previousSkin2 = zeros(i,j);
changelist = [];
for threshold = 0.55:-0.1:0.05
skin2 = zeros(i,j);
skin2(find(skin1>threshold)) = 1;
change = sum(sum(skin2 - previousSkin2));
changelist = [changelist change];
previousSkin2 = skin2;
end
% Finding the optimal threshold value
[C, I] = min(changelist);
optimalThreshold = (7-I)*0.1;
skin2 = zeros(i,j);
skin2(find(skin1>optimalThreshold)) = 1;
% %filling holes
Ifill = imfill(skin2,4,'holes');
%template matching using correlation
result3 = normxcorr2(Ifill,inpgray);
% max(result3(:))
% %setting the threshold base on test done
% if ans>0.5 female else male
%********************************************
% %4.Averaging and template matching
% %********************************************
%do image averaging for few male samples
%then do template matching to other images
%good in classifiying male(free hair woman) with woman with scarf
%set threshold for classification
%not yet done.(classification)
x1=imread('a1.jpg');
y1=rgb2gray(x1);
K1 = filter2(fspecial('average',5),y1)/255;
x2=imread('b1.jpg');
y2=rgb2gray(x2);
K2 = filter2(fspecial('average',5),y2)/255;
x3=imread('c1.jpg');
y3=rgb2gray(x3);
K3 = filter2(fspecial('average',5),y3)/255;
add1= imadd(K1,K2); %image arithmetic operation
add2=imadd (add1,K3);
result1=normxcorr2(add2,inpgray);
% %********************************************
% %6.Test with shape (m-shape)
% %********************************************
m=rgb2gray (input);
rect =[1 1 115 156 ]; %select region of interest
roi =imcrop(m,rect);
BW = roicolor(roi,0,60);
m=imread('mshape.jpg');
BWhairline = im2bw(m);
result4=normxcorr2(BWhairline,BW);
% if max(result4(:))>0.6 male else female
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% 7. Ear detection
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%to differentiate between bald man and woman with white scarf
%select one side only and detect for skin area
%use color selection method
rect =[1 25 15 127 ];
ear =imcrop(input,rect);
[image,ind]=find_color(ear,[],70,[144 113 93]);
bwear=1-(im2bw(image));
result5=bwarea(bwear);
%%%%%%%%%%%%%%%%%%%%%%%%
%% CLASSIFIER
%%%%%%%%%%%%%%%%%%%%%%%%
global ANSWER;
if result2> 2e+003
if max(result1(:))<0.5
%     fprintf('Free hair female');
ANSWER='Female'
else
if max(result4(:))>0.6
%     fprintf('male');
ANSWER='Male'
else
%     fprintf('FHF');
ANSWER='Female'
end;
end;
elseif result2<50
if result5 >50
%     fprintf('bald');
ANSWER='Bald Man'
else
%     fprintf('FWWS');
ANSWER='Female'
end;
else
if result3<0.5
%     fprintf('Free hair female');
%     fprintf('Female');
ANSWER='Female'
else
%     fprintf('Female with scarf');
%     fprintf('Female');
ANSWER='Female'
end;
end;
global ANSWER;
str=ANSWER;
set (handles.answeredit, 'String',str);
APPENDIX B
Function find_color
function [image,index]=find_color(image,map,tr,colors)
% Find a colour in the image; matching pixels are returned via 'index'.
% map    - colormap if the image is indexed; otherwise pass []
% tr     - threshold
% colors - colour components to select; pass [] to pick with the mouse
I=make_rgb(image,map);
if isempty(colors)
    disp('choose any color from image:');
    colors=impixel(I);
    close
end
[i j]=size(colors);
if i>1,
    colors=colors(1,:);
end
[image,index]= getcolor(I,colors,tr);
APPENDIX C
Function getcolor and make_rgb
function [image,index]=getcolor(I,color,tr)
[i j k]=size(I);
R=color(1,1);
G=color(1,2);
B=color(1,3);
I=double(I);
mask=( abs(I(:,:,1)-R) <tr ) & ( abs(I(:,:,2)-G) <tr ) &( abs(I(:,:,3)-B) <tr );
I(:,:,1)=I(:,:,1).*(~mask);
I(:,:,2)=I(:,:,2).*(~mask);
I(:,:,3)=I(:,:,3).*(~mask);
image=uint8(I);
index=find(mask==1);
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function I=make_rgb(image,map)
[i j k]=size(image);
if (~isempty(map)),
    I=ind2rgb(image,map);
    I=im2uint8(I);
elseif (k==1),
    % replicate a single grayscale plane into R, G and B
    I=cat(3,image,image,image);
else
    I=image;
end