
LIVE VIDEO FACE DETECTION AND RECOGNITION
Jalwinder Pingal
B.Tech, Kurukshetra University, 2004
PROJECT
Submitted in partial satisfaction of
the requirements for the degree of
MASTER OF SCIENCE
in
COMPUTER ENGINEERING
at
CALIFORNIA STATE UNIVERSITY, SACRAMENTO
SPRING
2010
LIVE VIDEO FACE DETECTION AND RECOGNITION
A Project
by
Jalwinder Pingal
Approved by:
__________________________________, Committee Chair
Suresh Vadhva, Ph.D.
__________________________________, Second Reader
Preetham B. Kumar, Ph.D.
____________________________
Date
Student: Jalwinder Pingal
I certify that this student has met the requirements for format contained in the University
format manual, and that this project is suitable for shelving in the Library and credit is to
be awarded for the Project.
__________________________, Graduate Coordinator
Suresh Vadhva, Ph.D.
Department of Computer Engineering
________________
Date
Abstract
of
LIVE VIDEO FACE DETECTION AND RECOGNITION
by
Jalwinder Pingal
Face recognition is one of the most important fields of modern biometric
applications. A face recognition system uses two sub-systems: a face detection system
and an image database system. Face recognition can be feature based or image based.
Feature-based methods use features such as skin color, eyes, nose and mouth to detect and
recognize a human face, whereas image-based methods use preprocessed image
sets for detection [1]. This project implements a feature-based face recognition system
which first finds any face or faces in a color image and then matches them against a
database to recognize individuals [2]. Here, skin color pixels are used to separate
the interesting regions of human skin from other, non-interesting regions. Once the skin
regions are located, facial features such as the mouth, eyebrows and nose are extracted to locate
the human face. The detected face is then compared with a database
of training images to find a match. The project also uses video from a web cam and
from stored video files to locate human faces. The live video face detection framework
operates on each frame of the video to implement the detection system. Recognition
of human beings is also performed on the video stream, whether it comes from
a stored video or from live web-cam video.
_______________________, Committee Chair
Suresh Vadhva, Ph.D.
_______________________
Date
ACKNOWLEDGEMENTS
First of all, I want to express my deepest thanks to Dr. Suresh Vadhva, who
allowed me to carry out this project under his supervision. He provided the guidance and
supervision needed for the successful completion of the project. Despite his busy
schedule, he gave his support and help at all times.
I would also like to thank Dr. Preetham Kumar for being the second reader and
providing valuable suggestions on the project report. In addition, I am thankful
to the faculty of Computer Science and Electrical Engineering for helping me achieve
my Master's degree from the university.
Last, but not least, I am very thankful to my family for supporting and encouraging
me throughout my education. Without their motivation and support, it would not have
been possible for me to complete my education.
TABLE OF CONTENTS
Page
Acknowledgements ............................................................................................................. v
List of Figures .................................................................................................................... ix
Chapter
1. INTRODUCTION .......................................................................................................... 1
2. CHALLENGES AND VIDEO/IMAGING CONCEPTS ............................................... 5
2.1 Challenges in Face Recognition................................................................................. 5
2.1.1 Difference in Lighting Conditions ........................................................................ 5
2.1.2 Skin Color Variations ........................................................................................... 6
2.1.3 Variation in Face Angle or Orientation Variation ................................................ 6
2.2 Important Concepts of Video and Imaging Applications .......................................... 6
2.2.1 Image Processing .................................................................................................. 6
2.2.2 Color Space ........................................................................................................... 7
2.2.3 RGB Color Space .................................................................................................. 7
3. SOFTWARE TOOLS USED .......................................................................................... 9
3.1 Java Programming Language ..................................................................................... 9
3.2 Java Advanced Imaging (JAI) ................................................................................. 10
3.3 Java Media Framework (JMF) ................................................................................. 10
3.4 Java Development Kit (JDK) ................................................................................... 11
3.5 NetBeans IDE .......................................................................................................... 11
4. IMAGE FACE DETECTION ....................................................................................... 13
4.1 Reading the Image ................................................................................................... 15
4.2 Implement Feature-Based Algorithm ...................................................................... 17
4.2.1 Identify Pixels of Interest .................................................................................... 17
4.2.2 Edge Detection .................................................................................................... 17
4.2.2.1 Laplacian Edge Detection ............................................................................... 18
4.2.2.2 Gradient Edge Detection ................................................................................. 18
4.2.2.2.1 Sobel Operator ............................................................................................ 19
4.2.3 Region Grouping ................................................................................................. 21
4.2.4 Facial Feature Extraction .................................................................................... 21
4.2.4.1 Eyebrows Detection ........................................................................................ 22
4.2.4.2 Nose Detection ................................................................................................ 23
4.2.4.3 Mouth Detection ............................................................................................. 24
4.3 Marking the Face with Boundary ............................................................................ 25
5. FACE RECOGNITION ON STILL IMAGES ............................................................. 26
6. FACE DETECTION FROM VIDEO ........................................................................... 30
6.1 Face Detection On the Video From Video File ....................................................... 30
6.2 Face Detection On Live Video ................................................................................ 32
7. FACE RECOGNITION ON VIDEO ............................................................................ 34
8. IMPLEMENTATION RESULTS ................................................................................ 40
9. CONCLUSION ............................................................................................................. 41
Bibliography ..................................................................................................................... 42
Attachments ...................................................................................................................... 44
LIST OF FIGURES
Page
Figure 2.2.3: RGB Color Model [9] ................................................................................... 8
Figure 4.1.1: Project Run Output Window ....................................................................... 15
Figure 4.1.2: Loaded Image and The Path Selection ........................................................ 16
Figure 4.2.2.2.1: Value of Convolution Masks [19] ......................................................... 19
Figure 4.2.2.2.2: Results of The Edge Detection ............................................................... 20
Figure 4.2.4.1: Eyebrows Detection .................................................................................. 22
Figure 4.2.4.2: Nose Detection ......................................................................................... 23
Figure 4.2.4.3: Mouth Detection ....................................................................................... 24
Figure 4.3: Marking The Face With Boundary................................................................. 25
Figure 5.1: Selected Image and The Database With The Selected Image ........................ 27
Figure 5.2: Output After Recognition Triggered ............................................................... 28
Figure 5.3: No Matching Output Result ........................................................................... 29
Figure 6.1: Face Detection On The Video From Video File ............................................ 31
Figure 6.2: Face Detection On Web-Cam (Live) Video ................................................... 32
Figure 7.1: Video Face Recognition On Person 1 (Sam) ................................................... 35
Figure 7.2: Video Face Recognition On Person 2 (Amy) .................................................. 36
Figure 7.3: No Matching Results On Video File’s Video ................................................ 37
Figure 7.4: Web-Cam Recognition Results ...................................................................... 38
Figure 7.5: Live Video Face Recognition With No Matching Results ............................. 39
Chapter 1
INTRODUCTION
Face recognition has gained substantial attention over the past decades due to its
increasing demand in security applications such as video surveillance and biometric
surveillance. Modern facilities such as hospitals, airports, banks and many other
organizations are being equipped with security systems that include face recognition
capability. Despite current success, research in this field is ongoing to
make facial recognition systems faster and more accurate [3]. The accuracy of any face
recognition system strongly depends on its face detection system: the stronger the face
detection system, the better the recognition system will be. A face detection system can
detect a human face in a given image containing one or more faces and in live
video involving human presence. The main methods used these days for face detection
are feature based and image based. Feature-based methods separate human features such as
skin color and facial features, whereas image-based methods use face patterns and
processed training images to distinguish between faces and non-faces. The feature-based
method has been chosen here because it is faster than the image-based method and its
implementation is far simpler [1] [4]. Face detection from an image and from live
video is achieved through image processing. Locating faces in images is not a
trivial task, because images contain not just human faces but also non-face objects in
cluttered scenes.
Moreover, there are other issues in face recognition, such as lighting
conditions, face orientations and skin colors. For these reasons, the accuracy of any
face recognition system cannot be 100%. The objective of this project is to implement a
face recognition system which first detects the faces present in either a single image or
multiple video frames, and then identifies the particular person by comparing the detected
face with an image database.
The report is organized into nine chapters. With the introduction given in
chapter 1, section 2.1 of chapter 2 focuses on the potential issues in any face
recognition system: differences in lighting conditions, under which the same
face appears differently, and variations in skin color and pose. Section 2.2 of chapter 2
discusses the important concepts of video and imaging applications. In any
image and video processing application it is important to understand the image
processing concepts and the color space in which the application is implemented. With
a good knowledge of these concepts, the image pixels can be modified in a way that
suits the needs of the application.
Chapter 3 describes the software tools used in the project, mainly related to the Java
programming language. These tools are Java Advanced Imaging (JAI), the Java Media
Framework (JMF), the Java Development Kit (JDK) and the NetBeans IDE. JAI, JMF and JDK
are significant and flexible enough that almost any video and imaging application can be
implemented with ease; it is fair to say that a multimedia application in Java can
hardly be implemented without them.
Chapter 4 explains face detection from an image, covering the steps used in
the project from reading an image to final face detection with a boundary drawn around
the face. Here, the image is read from a source and then the face detection
algorithm is applied to identify the presence of a human being. In the face detection
algorithm, the image goes through a number of steps, such as skin detection, edge detection,
region grouping and feature extraction, before the face is detected and marked with a
rectangular boundary. After the face is located in the image, we identify it.
Chapter 5 explains the implementation of recognizing the face or faces present in the
image by comparing them with the database. Here, the images are already stored in the
database. When a face is detected, we compare it with the images present in the
database. If there is a match, recognition returns true; otherwise, false. With
detection and recognition on a single image implemented, the next step is to perform
the same operations on videos.
Face detection on video is covered in chapter 6. Here the video comes either
from a stored video file or from live web-cam video. The frames in either case are
buffered and fed to the image face detection system. If a face is present in the
current frame, it is marked with a boundary and injected back into the display; if
no face is present, the unmodified frame is sent back to the display.
In this way the video is played at the output with a mix of modified and unmodified
frames. The next step is to recognize the human being present in the video; this part is
described in chapter 7. Recognition on video is similar in concept to
image recognition, but more difficult to implement because frames keep arriving
continuously. Here, whoever is present in the video is recognized if his or her image is
present in the database; otherwise not.
Chapter 8 presents the results of implementing the various parts of the
project. Chapter 9 concludes the project and outlines future work that can be done for
future applications.
Chapter 2
CHALLENGES AND VIDEO/IMAGING CONCEPTS
This chapter focuses on the ongoing challenges in the field of face recognition and
some basic concepts of image and video applications. The challenges and the imaging
concepts are described in detail in sections 2.1 and 2.2 respectively.
2.1 Challenges in Face Recognition
Over the years, face recognition has gained rapid success with the development
of new approaches and techniques. Due to this success, the rate of successful face
recognition has increased to well above 90%. Despite all this success, face
recognition techniques still suffer from common challenges of image visibility:
variations in lighting conditions, skin color and face angle [5].
These challenges are explained below.
2.1.1 Difference in Lighting Conditions
The lighting conditions under which pictures are taken are not always similar,
because of variations in time and place. An example of lighting variation is the
difference between pictures taken inside a room and pictures taken outside. Due to these
variations, the same person with similar facial expressions may appear differently in
different pictures. As a result, if only a single image of the person is stored in the face
recognition database, matching can be difficult when the face is detected under different
lighting conditions [6].
2.1.2 Skin Color Variations
Another challenge for a skin-based face recognition system is the difference in
skin color due to the different races of people. Because of this variation,
true skin pixels are sometimes filtered out along with the noise present in an image.
In addition, false skin background and non-background pixels are not completely
removed during the noise filtering process. So, it is a tough task to choose a filter
that covers the entire range of skin tones of different people while rejecting false
skin noise [5].
2.1.3 Variation in Face Angle or Orientation Variation
The angle of the human face relative to the camera can differ in different situations.
A frontal face detection algorithm cannot work on non-frontal faces present in an image,
because the geometry of facial features in a frontal view is always different from the
geometry of facial features in a non-frontal view. This is why orientation variation
remains a difficult challenge in face detection systems [6].
2.2 Important Concepts of Video and Imaging Applications
2.2.1 Image Processing
Image processing is a method of processing image values, more precisely the
pixels in the case of digital images. The purpose of image processing is to modify the
input image so that the output image changes parametrically, for example in its colors
or representation. Image processing is a basic part of face recognition involving
digital images. The processing can change the image representation from one color space
to another. It can also assign different color values to targeted pixels in order to keep
areas of interest in the output image. Image processing is also used to increase or
decrease image brightness and contrast and to perform other operations such as
morphological processing [7].
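As an illustration of the pixel-level modification described above, a brightness adjustment on a packed 24-bit RGB pixel can be sketched in Java as follows. The class and method names are illustrative only and are not taken from the project code:

```java
// Sketch of a simple image-processing operation: adjusting brightness by
// adding an offset to each 8-bit channel value and clamping to [0, 255].
public class BrightnessDemo {
    // Clamp a channel value into the valid 8-bit range.
    static int clamp(int v) {
        return Math.max(0, Math.min(255, v));
    }

    // Brighten (or darken, with a negative offset) one packed RGB pixel.
    public static int adjust(int rgb, int offset) {
        int r = clamp(((rgb >> 16) & 0xFF) + offset);
        int g = clamp(((rgb >> 8) & 0xFF) + offset);
        int b = clamp((rgb & 0xFF) + offset);
        return (r << 16) | (g << 8) | b;
    }
}
```

Applying such an operation to every pixel of an image changes its overall brightness while the clamp keeps each channel within the displayable range.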
2.2.2 Color Space
A color space is a representation of image colors in two or more color
components. Typical examples are the RGB, YCbCr and HSI color spaces.
In each of these color spaces the color of a pixel at any point in an image is a
combination of three color components. These components vary from 0 to a
maximum value, which depends on the number of bits per pixel. With 8 bits per
component, the different values in the range give colors from black (0) to
white (255) [8].
2.2.3 RGB Color Space
The RGB color space is the combination of red, green and blue color components.
With 24 bits per pixel, each of R, G and B varies from 0 to 255. If R, G and B are
all 0, the resulting color is black; if R, G and B are all 255, the resulting color
is white. The RGB color space is illustrated in figure 2.2.3, where the
x-axis represents the blue color range, the y-axis the green color range and the z-axis
the red color range. As explained above, black is represented at the origin and white
at the opposite corner of the cube, where red, green and blue are each 255. Similarly,
other color values lie at different corners of the cube, corresponding to different
RGB values [9].
Figure 2.2.3: RGB color model [9]
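The mapping between the three RGB components and a packed 24-bit pixel value can be sketched in Java as follows. The class and method names are illustrative, not project code:

```java
// Sketch of how the RGB cube maps to a packed 24-bit pixel: black is
// (0, 0, 0), white is (255, 255, 255), and the remaining corners hold
// pure red, green, blue and their pairwise mixtures.
public class RgbDemo {
    // Pack three 0-255 components into one 24-bit integer pixel.
    public static int pack(int r, int g, int b) {
        return (r << 16) | (g << 8) | b;
    }

    // Extract the individual components back out of a packed pixel.
    public static int red(int rgb)   { return (rgb >> 16) & 0xFF; }
    public static int green(int rgb) { return (rgb >> 8) & 0xFF; }
    public static int blue(int rgb)  { return rgb & 0xFF; }
}
```

With this layout, pack(0, 0, 0) is black at the origin of the cube and pack(255, 255, 255) is white at the opposite corner, as described above.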
Chapter 3
SOFTWARE TOOLS USED
The face detection algorithm is implemented in the Java programming language.
The front end and back end, which consist of reading the image and displaying the
output results, are also implemented in Java. The platform used for the project
is the NetBeans IDE, a framework for developing multimedia applications in Java.
Java and its supporting tools are explained below:
3.1 Java Programming Language
Java is an object-oriented programming language developed by Sun
Microsystems. Java provides the objects and classes needed to develop anything from
simple to complex applications with much more ease than the C and C++ programming
languages, from which it inherits its syntax. Java applications are composed of classes,
each corresponding to a particular task, which can be related to one another through
inheritance. It is a platform-independent language, meaning it can run on any
architecture with the help of the Java Virtual Machine. Java source code is compiled
into Java byte code, which the Java Virtual Machine then executes. The byte code plays
the role of the output of a language-specific compiler, except that it can run on any
Java-supported platform. The drawback of byte-code-compiled programs is that they run
more slowly than programs compiled by a platform-specific compiler. To run Java
applications on a machine, the Java Runtime Environment (JRE) has to be installed
on it. For application development, Java has other associated and supporting tools:
Java Advanced Imaging, the Java Media Framework and the Java Development Kit [10].
These tools are described below:
3.2 Java Advanced Imaging (JAI)
Java Advanced Imaging is a Java application programming interface
(API) that provides advanced image processing libraries. For image
processing tasks the programmer includes the JAI reference to use its built-in library
functions. Since JAI contains all the required image processing routines, the developer
does not need to purchase additional image processing software while working in the Java
environment; moreover, JAI is available free of charge from Sun
Microsystems [11]. The Java Advanced Imaging package also contains encoding and
decoding classes for image manipulation; popular image formats such as JPG,
BMP and PNG are supported by these codec classes [12].
3.3 Java Media Framework (JMF)
The Java Media Framework is an additional tool provided with Java to enable the use
of audio, video and other time-critical multimedia files. JMF handles all the operations
from capturing a media file to outputting it, with or without processing. It
receives media data from sources such as a camcorder and stores it temporarily before
processing. It then performs the operations the application requires, such as
transcoding, streaming and applying visual effects to the media file. The output of JMF
can either be shown on a display device or stored on disk [13].
3.4 Java Development Kit (JDK)
The Java Development Kit is the Java software development kit containing the various
software tools necessary to build Java applications: the Java loader, the Java
compiler, the Java debugger, the applet viewer and many other development tools. The Java
loader is used to load Java programs. The Java compiler converts hand-typed source code
into byte code. The debugger steps through the code to find any malfunction and to
inspect the output or value each line of code may produce. Without installing the JDK,
the programmer cannot install any framework for developing Java applications [14].
3.5 NetBeans IDE
NetBeans is an Integrated Development Environment (IDE), itself written in the Java
programming language, used for creating Java desktop and multimedia
applications. NetBeans fully supports the Java Development Kit features. The
advantage of using the IDE is that it saves the user time in accessing data
and applying settings for any application being developed in Java. The
Graphical User Interface (GUI) of NetBeans makes it easy to view, add to and delete from
the application being run on the platform, and its debugging features can find and
locate errors and their causes in the lines of code. NetBeans has several tools and
features necessary to build applications with ease, including a source code editor,
a graphical user interface builder and support for many databases. The source code
editor supports text styles and background colors, line indentation, text matching,
saved versions, code change tracking and other class-aware features. The GUI builder
is the visual screen in NetBeans that supports forms, buttons and labels with zoom
ability, internet connection support, feature installation and other important
capabilities. NetBeans works with databases such as SQL, Cloudscape, Oracle and
Microsoft SQL Server; it can view and edit the data stored in these databases and
provides for writing and executing database commands.
Just like other language editors, NetBeans opens saved projects or creates new
projects for Java-based and web-based applications. It shows the loaded project and its
classes (files) in the form of a tree with sub-trees. In the code editor, the code we
write is highlighted, with help descriptions for code completion. One
interesting feature of NetBeans is refactoring, a way of rearranging
user-written code so that it is easier to read and modify [15].
Chapter 4
IMAGE FACE DETECTION
Face detection is the process of finding a human face in an image and, if
one is present, returning its location. It is a special case of object detection. Objects
can be anything present in the image, including humans and non-human things such as
trees, buildings, cars and chairs; but, other than the human being itself, such objects
are rarely used in advanced applications. So, finding and locating the human face in an
image is an interesting and important application of modern times. However, locating the
face in an image is not an easy task, since images contain not only faces but other
objects too. Moreover, some scenes are very complex, and filtering out the unwanted
information remains a tough task. Face detection can work on a single image or
on more than one image in the form of video. Once the presence and location of the
face are found, this information is used to implement more sophisticated
applications such as recognition and video surveillance. Hence, the
success of these applications depends heavily on the detection rate of the image face
detection system [16]. Face detection from an image can be done by two methods:
image based and feature based. Image-based methods treat the whole image as a
group of patterns, and each region is classified as face or non-face. In this method
a window is scanned across parts of the image.
On every scan an output value is computed for the window on the current part of the
image and compared with a threshold; if it is above the threshold, that part of
the image is considered a face. The size of this window is fixed and chosen
experimentally with the help of the training image sizes. The advantage of
image-based methods is a higher face detection hit rate; however, they are
slow in computation compared to feature-based methods. Eigenfaces and
neural networks are two examples of the image-based approach. In feature-based
methods, features such as skin color, eyes, nose and mouth are separated first from the
rest of the image regions. With these features extracted, the non-interesting regions
of the image do not need further processing, and therefore the processing time is
significantly reduced. In feature-based methods the skin color pixels are separated
first, because color processing is fast and it leads to the separation of the other
features. The advantage of feature-based methods is fast results, though they are less
accurate than image-based methods; another advantage is the ease of implementation in
real-time applications [1].
In this project, the feature-based method is implemented. The initial step is to take
the image as input; the feature-based algorithm is then applied to detect the face or
faces. Once the face region is determined, the next step is to mark a boundary around
it. The detailed operation of image face detection is explained in the following steps:
4.1 Reading the Image
Reading the image means loading the image into the face detection
framework. For the project, the image is read from the computer disk.
Figure 4.1.1: Project run output window
There is also a collection of video frames captured from the web cam and fed
as input to the live video face detection framework. The frames are stored temporarily
in a buffer before being processed further. We can also load a chunk of video stored on
the hard disk into the video face detection framework. When the project is run, the
generated output window appears, which provides a way to open the location of a
JPEG image file, as shown in figure 4.1.1.
We select the location of the image with the “Browse File” button, and the image is
loaded into the face detection interface from the selected location. The path and the
loaded image are shown in figure 4.1.2. A filter is applied so that only JPEG images
can be chosen; other image types could also be supported with modifications to the
implementation.
Figure 4.1.2: Loaded image and the path selection
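A minimal sketch of this image-loading step is shown below. It uses the standard javax.imageio.ImageIO API rather than the project's actual JAI-based code, and the class name is illustrative:

```java
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

// Sketch of the "read the image" step: load a JPEG file from disk into
// memory so that its pixels can be processed by the detection framework.
public class ImageLoader {
    // Load an image file; callers can then access pixels via getRGB().
    public static BufferedImage load(File file) throws IOException {
        BufferedImage img = ImageIO.read(file);
        if (img == null) {
            // ImageIO.read returns null when no registered reader
            // understands the file, e.g. for a non-image file.
            throw new IOException("Unsupported image format: " + file);
        }
        return img;
    }
}
```

The returned BufferedImage gives width, height and per-pixel RGB access, which is all the later detection steps need.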
4.2 Implement Feature-Based Algorithm
After reading the image, the project performs a number of steps on the image
to extract the facial features necessary to locate the human face. These include the
removal of non-skin-like pixels, the removal of false skin pixels, and the
identification of other features such as the nose, mouth and eyes. These steps are
explained as follows:
4.2.1 Identify Pixels of Interest
For any feature-based face detection approach, the first step is to identify the skin
regions in the image. This is necessary because only skin color is used
for further image processing. As mentioned earlier, an image can contain one or more
objects, so we process the image in such a way that non-human regions are filtered
out from skin-containing regions. The remaining regions of the image are likely skin
pixels, and the skin pixels in turn may contain the facial features. In this process,
we convert the color image to a gray-scale image. After conversion, skin-like regions
can be separated from other regions because skin regions are brighter
than other regions of the image. This process leaves only the likely skin regions,
removing the other non-skin regions present in the image [17].
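The gray-scale conversion and brightness-based filtering described above can be sketched as follows. The luminance weights are the standard ones; the class name and the threshold-based skin test are illustrative simplifications, not the project's exact filter:

```java
// Sketch of the gray-scale conversion used before skin filtering: each
// pixel's R, G, B values are combined into one luminance value.
public class GrayscaleDemo {
    // Convert one packed RGB pixel to a 0-255 luminance value using the
    // standard weights (0.299 R + 0.587 G + 0.114 B).
    public static int toGray(int rgb) {
        int r = (rgb >> 16) & 0xFF;
        int g = (rgb >> 8) & 0xFF;
        int b = rgb & 0xFF;
        return (int) Math.round(0.299 * r + 0.587 * g + 0.114 * b);
    }

    // Keep a pixel as a skin candidate when it is brighter than the
    // threshold -- a simplification of the filtering step in the text.
    public static boolean isSkinCandidate(int rgb, int threshold) {
        return toGray(rgb) > threshold;
    }
}
```

Running isSkinCandidate over every pixel leaves a mask of likely skin regions, which the later edge-detection and grouping steps refine.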
4.2.2 Edge Detection
The skin pixels identified in the last step would be pure skin pixels if the image
contained only skin regions. Since an image may contain other skin-like objects,
including skin-like background colors, there is a strong chance that we end up with
false skin regions. These impure skin pixels need to be removed before further
processing, and they are removed by means of edge detection.
Edge detection is a process of marking the boundaries of the objects present in an
image by finding differences in the intensities of the pixels. Because of differences
in the brightness of the pixels, discontinuities exist along the path from one object
in the image to another. Edge detection helps image processing by removing noise such
as impure skin regions, and thus saves considerable time in the further processing of
the image. Edge detection can be performed by several methods, which fall into two
main categories: Laplacian and gradient [18]. These methods are explained as follows:
4.2.2.1 Laplacian Edge Detection
In Laplacian edge detection, the first and then the second derivative of the signal is
taken. After the second derivative, the signal crosses through zero at an edge, and
these zero crossings are marked to accomplish edge detection [18].
4.2.2.2 Gradient Edge Detection
In this method, the first derivative of the signal is calculated and its maximum value
is compared against a threshold to find the possible edge locations. This category
includes the Sobel, Roberts and Prewitt operators. Sobel is the most widely used
method because it is fast and the edge-detected image has clear, connected edges [18].
4.2.2.2.1 Sobel Operator
The Sobel operator calculates the gradient of the pixel intensity by applying
convolution filters in two directions. These convolution filters are 3x3 matrices used
to calculate the gradient in the X and Y directions. Two convolution masks are used
because the image is two-dimensional: one convolution is applied across rows and the
other across columns. These masks are called Gx and Gy. Their standard values are:

Gx:  -1   0  +1        Gy:  +1  +2  +1
     -2   0  +2              0   0   0
     -1   0  +1             -1  -2  -1

Figure 4.2.2.2.1: Value of convolution masks [18]

The gradient magnitude is given by the following formula:

|G| = sqrt(Gx^2 + Gy^2)

The approximate value of this magnitude is calculated by the formula given below:

|G| = |Gx| + |Gy|
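A minimal sketch of this computation (the class and method names are illustrative, not taken from the project code; the masks are the standard Sobel kernels):

```java
public class Sobel {
    // Sobel convolution masks for the X and Y directions.
    static final int[][] GX = {{-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1}};
    static final int[][] GY = {{1, 2, 1}, {0, 0, 0}, {-1, -2, -1}};

    // Approximate gradient magnitude |G| = |Gx| + |Gy| at an interior
    // pixel (x, y) of a grayscale image stored as img[row][column].
    static int magnitude(int[][] img, int x, int y) {
        int gx = 0, gy = 0;
        for (int i = -1; i <= 1; i++) {
            for (int j = -1; j <= 1; j++) {
                gx += GX[i + 1][j + 1] * img[y + i][x + j];
                gy += GY[i + 1][j + 1] * img[y + i][x + j];
            }
        }
        return Math.abs(gx) + Math.abs(gy);
    }
}
```

A uniform region yields a magnitude of 0, while a sharp intensity step yields a large value, which is what marks the object boundaries.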
The result of the edge detection is displayed in figure 4.2.2.2.2, which shows the
boundaries in the image caused by the differences in the pixel intensities of the
objects. These discontinuities allow the noise to be removed, eliminating false face
detections and reducing the image processing time.
Figure 4.2.2.2.2: Results of the Edge Detection
4.2.3 Region Grouping
Edge detection eliminated the noise in the image, so the image now contains only
pure skin regions. These regions could be a hand, an arm, a leg or a face. In most
scenes these regions can overlap each other, so it is essential to separate them. This
task is called region grouping. Region grouping is performed by the connected-neighborhood,
or connectivity analysis, method. In this method a pixel is compared with its 8
neighboring pixels; if any of these pixels has the same value as the original pixel,
the pixels belong to the same region, otherwise they belong to different regions.
Using this method, all of the skin regions are separated from each other [19].
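The connectivity analysis above can be sketched as a flood fill over the 8-neighborhood of each skin pixel (a simplified illustration; the project's actual implementation of this step is not shown in the attachments):

```java
import java.util.ArrayDeque;

public class RegionGrouping {
    // Count the separate 8-connected regions of 'true' pixels in a binary
    // skin mask, labeling each pixel with its region number as we go.
    static int countRegions(boolean[][] mask) {
        int h = mask.length, w = mask[0].length;
        int[][] label = new int[h][w];
        int regions = 0;
        ArrayDeque<int[]> stack = new ArrayDeque<>();
        for (int sy = 0; sy < h; sy++) {
            for (int sx = 0; sx < w; sx++) {
                if (!mask[sy][sx] || label[sy][sx] != 0) continue;
                regions++;                       // found a new region
                stack.push(new int[]{sy, sx});
                while (!stack.isEmpty()) {
                    int[] p = stack.pop();
                    int y = p[0], x = p[1];
                    if (y < 0 || y >= h || x < 0 || x >= w) continue;
                    if (!mask[y][x] || label[y][x] != 0) continue;
                    label[y][x] = regions;
                    // visit all 8 neighbors of this skin pixel
                    for (int dy = -1; dy <= 1; dy++)
                        for (int dx = -1; dx <= 1; dx++)
                            if (dy != 0 || dx != 0)
                                stack.push(new int[]{y + dy, x + dx});
                }
            }
        }
        return regions;
    }
}
```

Each separated region can then be examined individually for facial features.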
4.2.4 Facial Feature Extraction
When all of the skin regions have been separated, we look for facial features in each
skin region. The possible facial features are the mouth, eyebrows and nose, so each
region is processed to extract these features. Out of the different skin regions, only
one is of interest: the possible face region, which is the only one containing facial
features. Given proper information on the distances between facial features, the
features can be detected. The process used to identify them is explained below:
Figure 4.2.4.1: Eyebrows Detection
4.2.4.1 Eyebrows Detection
In every region we look for two small curves, separated by a certain number of pixels,
in a color different from the skin region. The color of the eyebrows can be either
lighter or darker than the skin color. The detection system looks for these two uniform
regions and, if found, marks the eyebrows with a boundary. The result of the eyebrow
detection is shown in figure 4.2.4.1, where the image is first loaded into the interface
and then displayed again with a small rectangular boundary on each eyebrow [20].
Figure 4.2.4.2: Nose Detection
4.2.4.2 Nose Detection
With the eyebrows already detected, the next step is to locate and mark the nose in
the facial region. The distance between the inner ends of the eyebrows is used to
determine the location of the nose, which lies centered between the eyebrows at a
perpendicular distance below them. The hit rate of nose detection can also be
increased by searching for non-skin-colored pixels at the bottom of the nose: the dark
shadow under the nostrils and their curved shape help achieve better results. The
output of the nose detection interface is shown in figure 4.2.4.2 [20].
Figure 4.2.4.3: Mouth Detection
4.2.4.3 Mouth Detection
Determining the eyebrows and the nose helps determine the location of the mouth.
The distance in pixels from the nose and eyebrows is used to find the likely position
of the mouth. Since the lips also have a curved shape, the region of gradient variation
under the nose is searched to find the mouth. If the person is smiling, the teeth region
appears as black pixels in the skin-detected image and can be eliminated. Average
approximations of the lip pixels are used to find the location of the mouth. Figure
4.2.4.3 visualizes the results of the mouth detection [20].
Figure 4.3: Marking the face with boundary
4.3 Marking the face with boundary
Once all of the features have been detected in the facial region, the other skin regions
are eliminated. With the face region detected, the next step is to mark a boundary
around it by drawing a rectangle around the detected face. As shown in figure 4.3,
the original image is displayed with a white rectangle around the face.
Chapter 5
FACE RECOGNITION ON STILL IMAGES
Recognition is the process of identifying the particular person present in the image.
It is one of the most powerful and interesting applications of image processing and
has seen rapid progress for over 30 years. Basically, the face detection system locates
the face present in the image and, after comparison with the database, the identity of
that person is revealed if a match exists. For this purpose, sample images are stored
in the database in advance.
The location of the database is set in the recognition system so that every image
present in the database is searched. If there is a match, the matching image is taken
from the database and displayed at the output alongside the face-detected image,
together with the name of the matching person. If there is no hit, no image is
displayed and a no-match message appears instead.
The method used for recognition is the image search method. It takes as input an
image in which a face was found and a directory of images. A signature is calculated
for the input image, and then a signature is calculated for each image in the search
directory. The digital signature attempts to assign a unique value to each image based
on its contents. A distance is then calculated between the input image and each image
in the search directory; the distance is the difference between the signature of the
source image and the signature of the image it is being checked against. The image
with the smallest distance is returned as the matching image and displayed at the
output.
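As an illustration of this search, here is a toy signature and distance. The project does not specify its actual signature function, so the mean gray level used below is a placeholder assumption, and the class name is hypothetical:

```java
import java.awt.image.BufferedImage;

public class ImageSearch {
    // Toy signature: the mean gray level of the image (a stand-in for the
    // project's real, unspecified signature function).
    static double signature(BufferedImage img) {
        long sum = 0;
        for (int y = 0; y < img.getHeight(); y++) {
            for (int x = 0; x < img.getWidth(); x++) {
                int c = img.getRGB(x, y);
                int r = (c >> 16) & 0xff, g = (c >> 8) & 0xff, b = c & 0xff;
                sum += (int) (0.3 * r + 0.59 * g + 0.11 * b);
            }
        }
        return (double) sum / ((long) img.getWidth() * img.getHeight());
    }

    // Distance between two images: the difference of their signatures.
    // The search-directory image with the smallest distance is the match.
    static double distance(BufferedImage a, BufferedImage b) {
        return Math.abs(signature(a) - signature(b));
    }
}
```

Scanning the search directory then reduces to computing this distance for every image and keeping the minimum.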
Figure 5.1: Selected Image and the database with the selected Image
The results of face recognition on a still image are displayed in figures 5.1 and 5.2.
As shown in figure 5.1, the image is loaded from the database, and the location of the
database from which the picture is selected is chosen using the “choose search folder”
button.
Figure 5.2: Output after recognition triggered
The next step is to trigger the recognition system to verify that recognition works
correctly. As shown in figure 5.2, we click the detect-and-search button to initiate the
recognition process. The system then searches the database against the selected
picture to determine whether an identical image is present. If one is found, the
matched image is displayed at the output with a rectangular boundary around both
images, and the name of the matching person is also displayed.
Figure 5.3: No matching output result
To show the output of face recognition with no matching results, figure 5.3 uses a
picture selected from one folder while the database is set to a different folder. When
recognition is initiated, the system searches the database against the selected picture.
Since the selected image is not present in the searched folder, no recognition takes
place; instead, the message “Nobody was identified as a match” is displayed at the
output.
Chapter 6
FACE DETECTION FROM VIDEO
Since face detection on still images has been implemented successfully, we move to
the next, more advanced step of locating and marking the face regions in a video
stream. Operating on the frames of a video is much more difficult than operating on a
single image, because frames keep arriving from the video source and the objects
present appear across multiple frames. In this project there are two video sources: a
video file stored on the hard disk of the computer, and live video coming from a
connected web-cam. In both cases the process is to capture the continuous frames and
pass them to the face detection system. Both cases are explained below:
6.1 Face detection on the video from a video file
Here, the path of the video file is selected according to the location of the video
stored on disk. The video file is opened and started using the Java Media Framework.
Using the FrameAccess process, the video is played in the face detection system,
where FrameAccess acts as a processor handling each individual frame. This process
instantiates a pass-thru codec, the PreAccessCodec, and also a PostAccessCodec, to
intercept the data flow. The pass-thru codec is used to acquire each video frame
before it is rendered to the screen: it attaches a callback to every frame of the video
being viewed, and the codec is called every time the frame changes in the video file.
Once the codec operation is done, control returns to the player. To perform face
detection, each frame is intercepted before it is rendered and, if a face is detected, the
altered frame is injected back into the video stream.
Figure 6.1: Face detection on the video from video file
The PreAccessCodec uses two buffers, named Buffer in and Buffer out. Buffer in
holds the incoming frame, and Buffer out holds the next frame to display. The
PreAccessCodec calls the face detection system on each frame of the video. If a face
is found in a frame, the output buffer is given the modified image, with rectangles
drawn around the faces, to be displayed at the output. If no face is found, the input
buffer is not changed and the output buffer is set to this unmodified input buffer.
Control then passes back to the player, which displays the output buffer. The
PostAccessCodec is also included; it implements a callback that accesses each frame
of the video after it is rendered to the screen. In this manner the video file is selected
in the face detection system and detection is triggered, displaying the video at the
output with a boundary around each face present in the video stream.
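The buffer handling described above can be sketched as follows. This is a simplified, JMF-free illustration: the real PreAccessCodec operates on javax.media.Buffer objects inside the codec chain, whereas here plain pixel arrays stand in for the buffers:

```java
public class PassThruSketch {
    // Mimics the PreAccessCodec buffer logic: when a face is found, the
    // output frame is a modified copy with the boundary drawn in;
    // otherwise the input frame passes through unchanged.
    static int[] process(int[] bufferIn, boolean faceFound) {
        if (!faceFound) {
            return bufferIn;              // unmodified pass-through
        }
        int[] bufferOut = bufferIn.clone();
        bufferOut[0] = 0xffffff;          // stand-in for drawing the rectangle
        return bufferOut;
    }
}
```

The key property is that the no-face path hands the player the untouched input buffer, so the video plays normally whenever detection finds nothing.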
The result of face detection on a video file is shown in figure 6.1. The location of the
video file is selected; when the “Video File” button is clicked, the video from the
selected file is played, and any face present in the frames is displayed with a
boundary around it.
6.2 Face detection on live video
Figure 6.2: Face detection on Web-Cam (Live) video
The concept of processing live video frames is the same as that of processing frames
from a stored video file; in this case the video is taken directly from the web-cam.
The web-cam connected to the computer is configured in the face detection system
using JMStudio, which is part of the Java Media Framework. Once the web-cam is
selected, face detection is triggered to display the live video with a detection
boundary around the face.
One thing to keep in mind with video processing is that what we see with the human
eye can be misleading. Not every frame in a video renders an image with a
recognizable face. Some frames are transitional, and it may take several frames for an
image to come into full focus and for the features to be recognized. This is why some
frames that we think should contain a face do not; this is not necessarily a failure of
the recognition, just the nature of video.
Figure 6.2 shows the results of face detection on the video from the web-cam, also
called the live video. As mentioned earlier, the connected web-cam is already
configured in JMStudio. Clicking the webcam button shows the webcam video at the
output. Again, any face present in the video is displayed with a boundary around it;
if no face is present, the live video is displayed as is.
Chapter 7
FACE RECOGNITION ON VIDEO
In this part of the project, we capture video either from a video file or directly from
the web-cam and recognize the particular person present in the video frames. The
concept is similar to the one used for face recognition on still images: sample images
are stored in a database whose location is set in the system. When video from either
source runs in the face detection system, every frame of the video stream is compared
against the images present in the database. If there is a match, the matched name is
displayed. For recognition, a signature is calculated for every frame of the video, and
a signature is calculated for each stored image. The distance between the two images
is calculated just as in still-image face recognition, but here a threshold value is set in
advance; it is taken as 100. If the distance is less than or equal to the threshold, the
result is a hit and the image stored in the database is identified as the match for the
frame being displayed at the output. If the distance is greater than the threshold, there
is no match and no identity is revealed.
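The threshold test can be sketched as follows. Only the threshold of 100 comes from the text; the class name, the map of precomputed signatures, and the example values are illustrative assumptions:

```java
import java.util.Map;

public class FrameMatch {
    static final double THRESHOLD = 100.0;  // threshold value from the text

    // Return the database entry whose signature is closest to the frame's
    // signature, or null when even the closest distance exceeds the threshold.
    static String bestMatch(double frameSignature, Map<String, Double> database) {
        String best = null;
        double bestDistance = Double.MAX_VALUE;
        for (Map.Entry<String, Double> e : database.entrySet()) {
            double d = Math.abs(frameSignature - e.getValue());
            if (d < bestDistance) {
                bestDistance = d;
                best = e.getKey();
            }
        }
        return bestDistance <= THRESHOLD ? best : null;
    }
}
```

Returning null for an over-threshold distance is what produces the no-match behavior shown later in figure 7.3.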
The visual results of recognition on the video from a video file are displayed in
figures 7.1 and 7.2. For recognition, frames of the video are taken and stored in a
folder, and the same folder is then selected as the matching folder.
Figure 7.1: Video Face Recognition on person 1(Sam)
After the locations of the video file and the corresponding database are selected, we
check the “Search for Image” box and then click the “Video File” button. The video
is displayed from the selected file, and the face recognition system matches every
frame of the video against the images stored in the database. The matched results are
displayed with the name of the identified person. As shown in figure 7.1, the person is
Sam, whose picture is stored in the database; after matching, the results are displayed
with the name “Sam” at the output. Similarly, the second person present in the video,
named “Amy”, is recognized as shown in figure 7.2. Similar results can be obtained
on other video files.
Figure 7.2: Video Face Recognition on person 2(Amy)
Apart from this, one more condition is tested: when no picture of the person shown in
the video is present in the database. This is shown in figure 7.3. The recognition
system still searches the database for a match of the person in the video, but with no
picture in the database, no matching results are displayed.
Figure 7.3: No matching results on video file’s video
In the case of live video from a web-cam, background noise must be minimized in
order to recognize a face. To accomplish this, a signature is first calculated for each
image in the database. The signature is calculated by first performing face detection
on the image and then computing the signature from the rectangular area that
comprises the face, which effectively removes any background interference. The
same procedure is applied to each video frame. To determine a match, we calculate
the percentage difference between a video frame and each image in the database. The
frame with the closest percentage match that is also above the threshold is considered
the matching face. The threshold in this case is set to a value between 0 and 100.
Figure 7.4: Web-Cam recognition results
The result on live video from the web-cam is displayed in figure 7.4. In this case,
images of the person to be identified are already stored in the database, and this
database is selected for recognition. Again, we select the “Search for Image”
checkbox and click the “WebCam” button. The live video from the web-cam starts
displaying at the output, and the recognition system searches the database against the
person in the live video. If there is a match, the results are displayed at the output
with the name of the identified person.
On the other hand, if the detected person’s image is not stored in the database, no
matching results are displayed at the output. This scenario is shown in figure 7.5: the
detection system detects the person and draws a boundary around the face, but
recognition displays no matching results because the person is not present in the
database.
Figure 7.5: Live video face recognition with no matching results
Chapter 8
IMPLEMENTATION RESULTS
To verify the simulation results, about 50 images of people of different races were
downloaded from websites. These images are of Asian, American and African
people, and are a mixture of men, women, children, people with glasses, people with
beards and some non-face images. The efficiency of the project is about 81%. Face
detection on a single image takes 0.32 seconds. With a database of 36 images, it takes
1.31 seconds to display the matching results for the image recognition part. Detecting
a face from a video file takes 1.27 seconds, and detecting a face from live web-cam
video takes 2.60 seconds. Recognizing a face present in a video file against a database
of 25 pictures takes 3.35 seconds to show the matching results; using the same
database on live web-cam video, the time to detect and match is 5.04 seconds. These
times decrease with a smaller database and increase with a larger one, because
recognition compares every image in the database before returning the closest
matching image.
Chapter 9
CONCLUSION
The feature-based face detection system is implemented with detection and
recognition on both live video and stored video. Image face detection was
implemented first, and the same system was then used to detect faces from video
sources. The recognition system has also been implemented on images and video
files. The system achieves an accuracy above 80% and performs well on pictures of
people of different races and skin colors. The most interesting and challenging parts
were live video face detection and recognition. The project detects frontal faces
present in images and video files well, but it is not able to detect side-view faces.
Like other systems, it also fails on pictures with very dark background colors.
Overall, it was a worthwhile project through which I gained valuable knowledge of
image processing and of the steps required for successful face detection. As a future
goal, most parts of the project could be automated for surveillance and vision-based
applications.
BIBLIOGRAPHY
1. Erik Hjelmas and Boon Kee Low, "Face Detection: A Survey", Computer Vision and
Image Understanding 83, March 28, 2001, pages 236–274.
2. Adriana Kovashka and Margaret Martonosi, "Feature-Based Face Recognition for
Identification of Criminals vs. Identification for Cashless Purchase", January 12, 2010,
http://userweb.cs.utexas.edu/~adriana/kovashka_dmp_paper.pdf
3. S.A. Pardeshi and S.N. Talbar, "Local Feature Based Automatic Face Recognition
System", January 18, 2010, http://www.jiit.ac.in/jiit/ic3/IC3_2008/IC32008/ALG2_20.pdf
4. Quanni Zhang and Ebroul Izquierdo, "Multi-Feature Based Face Detection", January
18, 2010, http://www.acemedia.org/aceMedia/files/document/vie06-qmul-3.pdf
5. Phuong-Trinh Pham-Ngoc and Quang-Linh Huynh, "Robust Face Detection under
Challenges of Rotation, Pose and Occlusion", January 26, 2010,
http://www.fas.hcmut.edu.vn/webhn10/Baocao/PDF/YVSM_PhuongTrinh.pdf
6. D. Beymer, "Face Recognition under Varying Pose", IEEE Conference on Computer
Vision and Pattern Recognition, 1994, pages 756-761.
7. "Image Processing Book", February 2, 2010, http://www.imageprocessingbook.com/
8. "Color Space", February 4, 2010, http://en.wikipedia.org/wiki/Color_space
9. "The RGB Colorspace", February 4, 2010, http://gimpsavvy.com/BOOK/index.html?node50.html
10. "Java (Programming Language)", February 10, 2010,
http://en.wikipedia.org/wiki/Java_%28programming_language%29
11. "Java Advanced Imaging", February 10, 2010,
http://en.wikipedia.org/wiki/Java_Advanced_Imaging
12. Sun Developer Network (SDN), "Java Advanced Imaging API", February 10, 2010,
http://java.sun.com/products/java-media/jai/iio.html
13. Java Media Framework, "Introduction", February 10, 2010,
http://grack.com/downloads/school/enel619.10/report/java_media_framework.html
14. "Java Development Kit", February 10, 2010,
http://en.wikipedia.org/wiki/java_development_kit
15. Dana Nourie, "Getting Started with the NetBeans IDE Tutorial", February 12, 2010,
http://java.sun.com/developer/onlineTraining/tools/netbeans_part1/
16. Ming-Hsuan Yang, "Face Detection", February 26, 2010,
http://faculty.ucmerced.edu/mhyang/papers/face-detection-chapter.pdf
17. Henry Chang and Ulises Robles, "Skin Segmentation", November 10, 2009,
http://www-cs-students.stanford.edu/~robles/ee368/skinsegment.html
18. Bill Green, "Edge Detection Tutorial", November 18, 2009,
http://www.pages.drexel.edu/~weg22/edge.html
19. A.N. Rajagopalan and K. Sandeep, "Human Face Detection in Cluttered Color
Images using Skin Color and Edge Information", Indian Conference on Computer
Vision, Graphics and Image Processing, December 16-18, 2002.
20. Liya Ding and Aleix M. Martinez, "Precise Detailed Detection of Faces and Facial
Features", January 10, 2010, http://www.ece.osu.edu/~aleix/CVPR08.pdf
ATTACHMENTS
Project Coding Classes:
Edge.java:
import javax.media.jai.JAI;
import javax.media.jai.KernelJAI;
import javax.media.jai.PlanarImage;
import javax.swing.JFrame;
/**
* This class demonstrates edge detection on an image using the convolve
* operator and the Sobel horizontal kernel.
*/
public class Edge
{
// Create a constant array with the Sobel horizontal kernel.
public Edge()
{
}
public void EdgeDetect(String path)
{
float[] kernelMatrix = { -1, -2, -1,
0, 0, 0,
1, 2, 1 };
// Read the image.
PlanarImage input = JAI.create("fileload", path);
// Create the kernel using the array.
KernelJAI kernel = new KernelJAI(3,3,kernelMatrix);
// Run the convolve operator, creating the processed image.
PlanarImage output = JAI.create("convolve", input, kernel);
// Create a JFrame for displaying the results.
JFrame frame = new JFrame("Sobel horizontal border of the image "+path);
// Add to the JFrame's ContentPane an instance of
// DisplayTwoSynchronizedImages, which will contain the original and
// processed image.
frame.getContentPane().add(new DisplayTwoSynchronizedImages(input,output));
// Set the closing operation so the application is finished.
frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
frame.pack(); // Adjust the frame size using preferred dimensions.
frame.setVisible(true); // Show the frame.
}
}
FrameAccess.java:
/*
* To change this template, choose Tools | Templates
* and open the template in the editor.
*/
import java.awt.*;
import javax.media.*;
import javax.media.control.TrackControl;
import javax.media.format.*;
/**
* Program to access individual video frames by using a
* "pass-thru" codec. The codec is inserted into the data flow
* path. As data passes through this codec, a callback is invoked
* for each frame of video data.
*/
public class FrameAccess extends Frame implements ControllerListener {
Processor p;
Object waitSync = new Object();
boolean stateTransitionOK = true;
MediaLocator media = null;
public boolean open(MediaLocator ml, String search_folder) {
try {
p = Manager.createProcessor(ml);
} catch (Exception e) {
System.err.println("Failed to create a processor from the given url: " + e);
return false;
}
// set the search folder
if (search_folder != null)
{
GlobalSearchFolder.search = true;
GlobalSearchFolder.search_folder = search_folder;
GlobalSearchFolder.temp_name = search_folder +"\\temp.jpg";
}
else GlobalSearchFolder.search = false;
media = ml;
p.addControllerListener(this);
// Put the Processor into configured state.
p.configure();
if (!waitForState(p.Configured)) {
System.err.println("Failed to configure the processor.");
return false;
}
// So I can use it as a player.
p.setContentDescriptor(null);
// Obtain the track controls.
TrackControl tc[] = p.getTrackControls();
if (tc == null) {
System.err.println("Failed to obtain track controls from the processor.");
return false;
}
// Search for the track control for the video track.
TrackControl videoTrack = null;
for (int i = 0; i < tc.length; i++) {
if (tc[i].getFormat() instanceof VideoFormat) {
videoTrack = tc[i];
break;
}
}
if (videoTrack == null) {
System.err.println("The input media does not contain a video track.");
return false;
}
System.err.println("Video format: " + videoTrack.getFormat());
// Instantiate and set the frame access codec to the data flow path.
try {
Codec codec[] = { new PreAccessCodec(),
new PostAccessCodec()};
videoTrack.setCodecChain(codec);
} catch (UnsupportedPlugInException e) {
System.err.println("The process does not support effects.");
}
// Realize the processor.
p.prefetch();
if (!waitForState(p.Prefetched)) {
System.err.println("Failed to realize the processor.");
return false;
}
// Display the visual & control component if there's one.
setLayout(new BorderLayout());
Component cc;
Component vc;
if ((vc = p.getVisualComponent()) != null) {
add("Center", vc);
}
if ((cc = p.getControlPanelComponent()) != null) {
add("South", cc);
}
// Start the processor.
p.start();
setVisible(true);
return true;
}
public void addNotify() {
super.addNotify();
pack();
}
/**
* Block until the processor has transitioned to the given state.
* Return false if the transition failed.
*/
boolean waitForState(int state) {
synchronized (waitSync) {
try {
while (p.getState() != state && stateTransitionOK)
waitSync.wait();
} catch (Exception e) {}
}
return stateTransitionOK;
}
public void controllerUpdate(ControllerEvent evt) {
if (evt instanceof ConfigureCompleteEvent ||
evt instanceof RealizeCompleteEvent ||
evt instanceof PrefetchCompleteEvent) {
synchronized (waitSync) {
stateTransitionOK = true;
waitSync.notifyAll();
}
} else if (evt instanceof ResourceUnavailableEvent) {
synchronized (waitSync) {
stateTransitionOK = false;
waitSync.notifyAll();
}
} else if (evt instanceof EndOfMediaEvent) {
p.close();
}
}
}
DisplayTwoSynchronizedImages.java:
import java.awt.GridLayout;
import java.awt.event.AdjustmentEvent;
import java.awt.event.AdjustmentListener;
import java.awt.image.RenderedImage;
import javax.swing.JPanel;
import javax.swing.JScrollPane;
import com.sun.media.jai.widget.DisplayJAI;
/**
* This class represents a JPanel which contains two scrollable instances of
* DisplayJAI. The scrolling bars of both images are synchronized so scrolling one
* image will automatically scroll the other.
*/
public class DisplayTwoSynchronizedImages extends JPanel implements
AdjustmentListener
{
/** The DisplayJAI for the first image. */
protected DisplayJAI dj1;
/** The DisplayJAI for the second image. */
protected DisplayJAI dj2;
/** The JScrollPane which will contain the first of the images */
protected JScrollPane jsp1;
/** The JScrollPane which will contain the second of the images */
protected JScrollPane jsp2;
/**
* Creates an instance of this class, setting the components' layout, creating
* two instances of DisplayJAI for the two images and creating/registering
* event handlers for the scroll bars.
*
* @param im1 the first image (left side)
* @param im2 the second image (right side)
*/
public DisplayTwoSynchronizedImages(RenderedImage im1, RenderedImage im2)
{
super();
setLayout(new GridLayout(1, 2));
dj1 = new DisplayJAI(im1); // Instances of DisplayJAI for the
dj2 = new DisplayJAI(im2); // two images
jsp1 = new JScrollPane(dj1); // JScrollPanes for the both
jsp2 = new JScrollPane(dj2); // instances of DisplayJAI
add(jsp1);
add(jsp2);
// Retrieve the scroll bars of the images and registers adjustment
// listeners to them.
// Horizontal scroll bar of the first image.
jsp1.getHorizontalScrollBar().addAdjustmentListener(this);
// Vertical scroll bar of the first image.
jsp1.getVerticalScrollBar().addAdjustmentListener(this);
// Horizontal scroll bar of the second image.
jsp2.getHorizontalScrollBar().addAdjustmentListener(this);
// Vertical scroll bar of the second image.
jsp2.getVerticalScrollBar().addAdjustmentListener(this);
}
/**
* This method changes the first image to be displayed.
* @param newImage the new first image.
*/
public void setImage1(RenderedImage newimage)
{
dj1.set(newimage);
repaint();
}
/**
* This method changes the second image to be displayed.
* @param newImage the new second image.
*/
public void setImage2(RenderedImage newimage)
{
dj2.set(newimage);
repaint();
}
/**
* This method returns the first image.
* @return the first image.
*/
public RenderedImage getImage1()
{
return dj1.getSource();
}
/**
* This method returns the second image.
* @return the second image.
*/
public RenderedImage getImage2()
{
return dj2.getSource();
}
/**
* This method will be called when any of the scroll bars of the instances of
* DisplayJAI are changed. The method will adjust the scroll bar
* of the other DisplayJAI as needed.
*
* @param e the AdjustmentEvent that occurred (meaning that one of the scroll
*          bars' positions has changed).
*/
public void adjustmentValueChanged(AdjustmentEvent e)
{
// If the horizontal bar of the first image was changed...
if (e.getSource() == jsp1.getHorizontalScrollBar())
{
// We change the position of the horizontal bar of the second image.
jsp2.getHorizontalScrollBar().setValue(e.getValue());
}
// If the vertical bar of the first image was changed...
if (e.getSource() == jsp1.getVerticalScrollBar())
{
// We change the position of the vertical bar of the second image.
jsp2.getVerticalScrollBar().setValue(e.getValue());
}
// If the horizontal bar of the second image was changed...
if (e.getSource() == jsp2.getHorizontalScrollBar())
{
// We change the position of the horizontal bar of the first image.
jsp1.getHorizontalScrollBar().setValue(e.getValue());
}
// If the vertical bar of the second image was changed...
if (e.getSource() == jsp2.getVerticalScrollBar())
{
// We change the position of the vertical bar of the first image.
jsp1.getVerticalScrollBar().setValue(e.getValue());
}
} // end adjustmentValueChanged
}
EyeDetect.java:
import java.net.URL;
import java.awt.Component;
import java.awt.Graphics;
import java.awt.image.*;
import java.awt.Color;
import java.io.File;
import javax.imageio.*;
import javax.swing.*;
public class EyeDetect extends ImageIcon
{
static fdlibjni fdlib;
static int n;
static JFrame jf;
static JPanel panel;
static int w, h, threshold;
static byte[] img;
static StringBuilder sb = new StringBuilder();
public EyeDetect(URL url)
{
super(url);
}
static void receiveImage(String filepath)
{
jf = new JFrame();
panel = new JPanel();
fdlib = new fdlibjni();
threshold = 0;
try
{
System.out.println("call");
BufferedImage bi = ImageIO.read(new File(filepath));
w = bi.getWidth();
h = bi.getHeight();
System.out.println("image is " + w + "x" + h + " pixels");
img = new byte[w*h];
for (int y=0, i=0; y<h; y++)
{
for (int x=0; x<w; x++, i++)
{
//GET RGB VALUE TO FIND PIXEL COLOR
int c = bi.getRGB(x,y);
int r, g, b;
//FIND RED COLOR
r = (c&0x00ff0000)>>16;
//FIND GREEN
g = (c&0x0000ff00)>>8;
//BLUE COLOR
b = (c&0x000000ff);
//CONVERT INTO GRAYSCALE COLOR
img[i] = (byte) (.3*r + .59*g + .11*b);
}
}
System.out.println("detecting with threshold " + threshold);
fdlib.detectfaces(img, w, h, threshold);
n = fdlib.getndetections();
if (n==1)
System.out.println("1 face found.");
else
System.out.println(n + " faces found.");
java.io.File f = new java.io.File(filepath);
EyeDetect pi = new EyeDetect(f.toURL());
JLabel l = new JLabel(pi);
String infoImg ="File Name: "+f.getName()
+"\nFile Path: "+f.getPath()
+"\nTotal Number Of Faces Found: "+n;
System.out.println(infoImg);
HomeFile.imageInformation(infoImg);
panel.add(l);
jf.getContentPane().add(panel);
jf.pack();
jf.setTitle("Face Detection Interface");
jf.setVisible(true);
}
catch(Exception e)
{
System.out.println(e);
}
}
public EyeDetect()
{
}
public void paintIcon(Component c, Graphics g, int x, int y)
{
super.paintIcon(c,g,x,y);
for (int i=0; i<n; i++)
{
System.out.println("x:" + fdlib.getdetectionx(i) + " y:" +
fdlib.getdetectiony(i) + " size:" + fdlib.getdetectionwidth(i));
g.setColor(Color.WHITE);
//DRAW RECTANGLE TO IDENTIFY EYES
g.drawRect(fdlib.getdetectionx(i)-(fdlib.getdetectionwidth(i)/8)-20,
    fdlib.getdetectiony(i)-(fdlib.getdetectionwidth(i)/8)-20,
    fdlib.getdetectionwidth(i)/8, fdlib.getdetectionwidth(i)/8);
g.drawRect(fdlib.getdetectionx(i)+(fdlib.getdetectionwidth(i)/8)+10,
    fdlib.getdetectiony(i)-(fdlib.getdetectionwidth(i)/8)-20,
    fdlib.getdetectionwidth(i)/8, fdlib.getdetectionwidth(i)/8);
}
System.out.println("paintIcon");
}
}
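For reference, the per-pixel grayscale conversion repeated in EyeDetect (and in the other detection classes) can be isolated as a small self-contained sketch. The class name and test pixels below are illustrative only, not part of the project:

```java
// Illustrative sketch: a packed RGB int is split with masks and shifts,
// then reduced to a single luminance byte with the 0.3 / 0.59 / 0.11
// weights used throughout these listings.
public class GraySketch {
    static byte toGray(int c) {
        int r = (c & 0x00ff0000) >> 16; // red channel
        int g = (c & 0x0000ff00) >> 8;  // green channel
        int b = (c & 0x000000ff);       // blue channel
        return (byte) (.3 * r + .59 * g + .11 * b);
    }

    public static void main(String[] args) {
        System.out.println(toGray(0x00ff0000) & 0xff); // pure red -> 76
        System.out.println(toGray(0x00000000) & 0xff); // black -> 0
    }
}
```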
GlobalSearchFolder.java:
import javax.media.*;
public class GlobalSearchFolder {
private GlobalSearchFolder() {}
public static String search_folder, temp_name;
public static boolean search;
public static double threshold;
public static Processor p;
public static int x, y, width, height;
public static boolean webcam;
}
HomeFile.java:
import java.awt.*;
import java.awt.event.*;
import java.io.*;
import javax.swing.*;
import javax.swing.border.*;
public class HomeFile implements ActionListener
{
JFrame jf;
JPanel mainPanel, imageChoose, videoChoose, dispInfo, dispImage, imageInfo, imageDispInfo;
JLabel banner,videoBanner,imageView;
JCheckBox search_image;
ImageIcon icon;
JTextField filePath, folderPath;
static JTextArea aboutImage,aboutImageInfo;
JButton selectFile_but, chooseFile_But, videoProcess, faceDetectBut, imageComapre, skinDetect, eyeDetect, noseDetect, mouthDetect, edgeDetect;
JButton webcamProcess, chooseFolder, detectAndSearch;
JFileChooser openDialog1, folderDialog;
private JScrollPane scrollPane,scrollPaneInfo;
public static final String BUTTON_TEXT = "Select Image/Video";
Container content;
SearchImages search_images;
public HomeFile()
{
//CREATE JFRAME WITH TITLE
jf = new JFrame("Face Detection");
Dimension screenSize = Toolkit.getDefaultToolkit().getScreenSize();
try
{
UIManager.setLookAndFeel(UIManager.getSystemLookAndFeelClassName());
} catch (ClassNotFoundException ex) {
ex.printStackTrace();
} catch (IllegalAccessException ex) {
ex.printStackTrace();
} catch (UnsupportedLookAndFeelException ex) {
ex.printStackTrace();
} catch (InstantiationException ex) {
ex.printStackTrace();
}
// Create Main Panel
mainPanel = new JPanel();
mainPanel.setLayout(null);
mainPanel.setSize(500,500);
mainPanel.setBackground(new Color(139, 125, 107 ));
String str = "<html><hr color=\"#FFFFFF\"><p align=\"center\" color=\"#292421\" style=\"padding:10px 100px 10px 10px;text-align:\"center\" bgcolor=\"#e2e2c6\" >"
+"&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;"
+"&nbsp;&nbsp;"
+"Image Face Detection"
+"</p><hr></html>";
//CREATE LABEL
banner = new JLabel(str);
banner.setBounds(120,10,screenSize.width,100);
banner.setFont(new Font("tahoma, arial, sans-serif", Font.BOLD, 30));
banner.setForeground(new Color(162,184,73));
mainPanel.add(banner);
//CREATE IMAGE PANEL
imageChoose = new JPanel();
TitledBorder titleCom = BorderFactory.createTitledBorder("Choose JPEG Image Files");
titleCom.setTitleColor(new Color(41, 36, 33 ));
titleCom.setTitleFont(new Font("tahoma, arial, sans-serif", Font.BOLD, 12));
imageChoose.setBorder(titleCom);
imageChoose.setLayout(null);
imageChoose.setBounds(20,125,700,100);
imageChoose.setBackground(new Color(205, 186, 150 ));
//CREATE TEXT FIELD FOR PATH
filePath = new JTextField();
filePath.setBounds(80,25,350,20);
filePath.setBorder(BorderFactory.createBevelBorder(BevelBorder.LOWERED));
imageChoose.add(filePath);
//CREATE FILE CHOOSER
chooseFile_But = new JButton("Browse File");
chooseFile_But.setBounds(450,25,100,20);
chooseFile_But.addActionListener(this);
imageChoose.add(chooseFile_But);
// create text box image folder
folderPath = new JTextField();
folderPath.setBounds(80, 60, 350, 20);
folderPath.setBorder(BorderFactory.createBevelBorder(BevelBorder.LOWERED));
imageChoose.add(folderPath);
// create folder chooser
chooseFolder = new JButton("Choose Search Folder");
chooseFolder.setBounds(450, 60, 150, 20);
chooseFolder.addActionListener(this);
imageChoose.add(chooseFolder);
mainPanel.add(imageChoose);
//CREATE VIDEO CHOOSE PANEL
videoChoose= new JPanel();
TitledBorder videoCom = BorderFactory.createTitledBorder("Video Face Detection");
videoCom.setTitleColor(new Color(41, 36, 33 ));
videoCom.setTitleFont(new Font("tahoma, arial, sans-serif", Font.BOLD, 12));
videoChoose.setBorder(videoCom);
videoChoose.setLayout(null);
videoChoose.setBounds(730,125,250,100);
videoChoose.setBackground(new Color(205, 186, 150 ));
String strVideo = "<html><p>"
+"*&nbsp;From<br>";
videoBanner = new JLabel(strVideo);
videoBanner.setBounds(10,5,screenSize.width,100);
videoBanner.setFont(new Font("tahoma, arial, sans-serif", Font.BOLD, 10));
videoBanner.setForeground(Color.DARK_GRAY);
videoProcess = new JButton("Video File");
videoProcess.setBounds(120,70,120,20);
videoProcess.addActionListener(this);
webcamProcess = new JButton("WebCam");
webcamProcess.setBounds(120,45,120,20);
webcamProcess.addActionListener(this);
search_image = new JCheckBox("Search for Image", false);
search_image.setBounds(120, 20, 120, 20);
search_image.addActionListener(this);
//ADD COMPONENTS
videoChoose.add(videoProcess);
videoChoose.add(webcamProcess);
videoChoose.add(videoBanner);
videoChoose.add(search_image);
mainPanel.add(videoChoose);
//CREATE IMAGE PANEL
dispImage= new JPanel();
TitledBorder dispImageBorder = BorderFactory.createTitledBorder("Image Before Process");
dispImageBorder.setTitleColor(new Color(41, 36, 33 ));
dispImage.setBorder(dispImageBorder);
dispImage.setLayout(null);
dispImage.setBounds(20,230,700,340);
dispImage.setBackground(new Color(205, 186, 150 ));
faceDetectBut = new JButton("Detect Face");
faceDetectBut.setBounds(550,12,140,20);
faceDetectBut.setVisible(false);
faceDetectBut.addActionListener(this);
dispImage.add(faceDetectBut);
detectAndSearch = new JButton("Detect and Search");
detectAndSearch.setBounds(550, 40, 140, 20);
detectAndSearch.setVisible(false);
detectAndSearch.addActionListener(this);
dispImage.add(detectAndSearch);
skinDetect = new JButton("Skin Color Detection");
skinDetect.setBounds(550,42,140,20);
skinDetect.setVisible(false);
skinDetect.addActionListener(this);
dispImage.add(skinDetect);
eyeDetect = new JButton("Eye Detection");
eyeDetect.setBounds(550,72,140,20);
eyeDetect.setVisible(false);
eyeDetect.addActionListener(this);
dispImage.add(eyeDetect);
noseDetect = new JButton("Nose Detection");
noseDetect.setBounds(550,102,140,20);
noseDetect.setVisible(false);
noseDetect.addActionListener(this);
dispImage.add(noseDetect);
mouthDetect = new JButton("Mouth Detection");
mouthDetect.setBounds(550,132,140,20);
mouthDetect.setVisible(false);
mouthDetect.addActionListener(this);
dispImage.add(mouthDetect);
edgeDetect = new JButton("Edge Detection");
edgeDetect.setBounds(550,162,140,20);
edgeDetect.setVisible(false);
edgeDetect.addActionListener(this);
dispImage.add(edgeDetect);
imageView = new JLabel();
imageView.setBounds(10,5,690,330);
dispImage.add(imageView);
mainPanel.add(dispImage);
imageInfo= new JPanel();
TitledBorder dispImgInfoBorder = BorderFactory.createTitledBorder("Image Info");
dispImgInfoBorder.setTitleColor(new Color(41, 36, 33 ));
imageInfo.setBorder(dispImgInfoBorder);
imageInfo.setLayout(null);
imageInfo.setBounds(730,230,250,340);
imageInfo.setBackground(new Color(205, 186, 150 ));
aboutImage = new JTextArea();
aboutImage.setEditable(false);
aboutImage.setBackground(new Color(192,192,192));
scrollPane = new JScrollPane(aboutImage);
scrollPane.setBounds(5,15,240,320);
imageInfo.add(scrollPane);
imageDispInfo= new JPanel();
TitledBorder dispImageInfo = BorderFactory.createTitledBorder("Details Of the Image");
dispImageInfo.setTitleColor(new Color(41, 36, 33));
imageDispInfo.setBorder(dispImageInfo);
imageDispInfo.setLayout(null);
imageDispInfo.setBounds(20,575,960,130);
imageDispInfo.setBackground(new Color(205, 186, 150 ));
aboutImageInfo = new JTextArea();
aboutImageInfo.setEditable(false);
aboutImageInfo.setBackground(new Color(192,192,192));
scrollPaneInfo = new JScrollPane(aboutImageInfo);
scrollPaneInfo.setBounds(5,15,950,110);
imageDispInfo.add(scrollPaneInfo);
mainPanel.add(imageDispInfo);
mainPanel.add(imageInfo);
//SET SIZE ,VISIBILITY AND WINDOW CLOSE OPTION
jf.add(mainPanel);
jf.setSize(screenSize.width,screenSize.height);
jf.setVisible(true);
jf.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
}
//ACTION PERFORMED FOR CLICK EVENT
public void actionPerformed(ActionEvent ae)
{
//OPEN DIALOG WINDOW
if(ae.getSource()==chooseFile_But)
{
openDialog1 = new JFileChooser();
openDialog1.setFileSelectionMode(JFileChooser.FILES_ONLY);
openDialog1.setDialogTitle("Choose JPEG Format File");
openDialog1.showDialog(content, BUTTON_TEXT);
File selectedFiles1 = openDialog1.getSelectedFile();
filePath.setText(selectedFiles1.toString());
filePath.setEditable(false);
icon = new ImageIcon(selectedFiles1.toString());
//VISIBLE BUTTONS
imageView.setIcon(icon);
faceDetectBut.setVisible(true);
skinDetect.setVisible(false);
eyeDetect.setVisible(false);
noseDetect.setVisible(false);
mouthDetect.setVisible(false);
edgeDetect.setVisible(false);
}
else if (ae.getSource() == chooseFolder)
{
folderDialog = new JFileChooser();
folderDialog.setFileSelectionMode(JFileChooser.DIRECTORIES_ONLY);
folderDialog.setDialogTitle("Choose the Image Folder");
folderDialog.showDialog(content, "Choose Folder");
File selectedFolder = folderDialog.getSelectedFile();
folderPath.setText(selectedFolder.toString());
folderPath.setEditable(false);
detectAndSearch.setVisible(true);
}
else if(ae.getSource()==videoProcess)
{
// detect faces in a video
if (search_image.isSelected() == true)
{
// detect and search
fdVideoTest.receiveVideo(filePath.getText(), folderPath.getText());
}
else
{
// just detect
fdVideoTest.receiveVideo(filePath.getText(), null);
}
}
else if(ae.getSource()==webcamProcess)
{
// detect faces in a webcam
if (search_image.isSelected() == true)
{
// detect and search
fdVideoTest.receiveWebCam(folderPath.getText());
}
else
{
// just detect
fdVideoTest.receiveWebCam(null);
}
}
else if(ae.getSource()==faceDetectBut)
{
//CALL FDTEST CLASS FOR DETECT FACE
fdtest.receiveImage(filePath.getText(), null);
aboutImageInfo.setText(ShowImageInfo.printInfo(filePath.getText()));
}
else if (ae.getSource() == detectAndSearch)
{
// detect a face and search
fdtest.receiveImage(filePath.getText(), folderPath.getText());
aboutImageInfo.setText(ShowImageInfo.printInfo(filePath.getText()));
}
else if(ae.getSource()==skinDetect)
{
aboutImageInfo.setText(fdtest.sb.toString());
}
else if(ae.getSource()==eyeDetect)
{
EyeDetect.receiveImage(filePath.getText());
}
else if(ae.getSource()==noseDetect)
{
NoseDetection.receiveImage(filePath.getText());
}
else if(ae.getSource()==mouthDetect)
{
MouthDetection.receiveImage(filePath.getText());
}
else if(ae.getSource()==edgeDetect)
{
new Edge().EdgeDetect(filePath.getText());
}
}
public static void main(String args[])
{
new HomeFile();
}
static void imageInformation(String info)
{
//SET IMAGE INFORMATION
System.out.println("call imageinfo");
aboutImage.append(info);
}
}
JPEGImageFileFilter.java:
import java.io.File;
import javax.swing.filechooser.FileFilter;
/*
* This class implements a generic file name filter that allows the listing/selection
* of JPEG files.
*/
public class JPEGImageFileFilter extends FileFilter implements java.io.FileFilter
{
public boolean accept(File f)
{
if (f.getName().toLowerCase().endsWith(".jpeg")) return true;
if (f.getName().toLowerCase().endsWith(".jpg")) return true;
return false;
}
public String getDescription()
{
return "JPEG files";
}
}
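The accept logic of JPEGImageFileFilter can be exercised without a file chooser. The sketch below reproduces the same case-insensitive suffix test so it is self-contained; the class name and file names are made up:

```java
import java.io.File;

// Illustrative sketch of the JPEGImageFileFilter.accept test:
// a file is accepted only if its name ends in ".jpg" or ".jpeg",
// regardless of case. SearchImages relies on this when listing a folder.
public class FilterSketch {
    static boolean accept(File f) {
        String name = f.getName().toLowerCase();
        return name.endsWith(".jpeg") || name.endsWith(".jpg");
    }

    public static void main(String[] args) {
        System.out.println(accept(new File("face01.JPG"))); // true
        System.out.println(accept(new File("face01.png"))); // false
    }
}
```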
MouthDetection.java:
import java.net.URL;
import java.awt.Component;
import java.awt.Graphics;
import java.awt.image.*;
import java.awt.Color;
import java.io.File;
import javax.imageio.*;
import javax.swing.*;
public class MouthDetection extends ImageIcon
{
static fdlibjni fdlib;
static int n;
static JFrame jf;
static JPanel panel;
static int w, h, threshold;
static byte[] img;
static StringBuilder sb = new StringBuilder();
public MouthDetection(URL url)
{
super(url);
}
static void receiveImage(String filepath)
{
jf = new JFrame();
panel = new JPanel();
fdlib = new fdlibjni();
threshold = 0;
try
{
System.out.println("call");
BufferedImage bi = ImageIO.read(new File(filepath));
w = bi.getWidth();
h = bi.getHeight();
System.out.println("image is " + w + "x" + h + " pixels");
img = new byte[w*h];
for (int y=0, i=0; y<h; y++)
{
for (int x=0; x<w; x++, i++)
{
int c = bi.getRGB(x,y);
int r, g, b;
r = (c&0x00ff0000)>>16;
g = (c&0x0000ff00)>>8;
b = (c&0x000000ff);
img[i] = (byte) (.3*r + .59*g + .11*b);
}
}
System.out.println("detecting with threshold " + threshold);
fdlib.detectfaces(img, w, h, threshold);
n = fdlib.getndetections();
if (n==1)
System.out.println("1 face found.");
else
System.out.println(n + " faces found.");
java.io.File f = new java.io.File(filepath);
MouthDetection pi = new MouthDetection(f.toURL());
JLabel l = new JLabel(pi);
String infoImg ="File Name: "+f.getName()
+"\nFile Path: "+f.getPath()
+"\nTotal Number Of Faces Found: "+n;
System.out.println(infoImg);
HomeFile.imageInformation(infoImg);
panel.add(l);
jf.getContentPane().add(panel);
jf.pack();
jf.setTitle("Face Detection Interface");
jf.setVisible(true);
}
catch(Exception e)
{
System.out.println(e);
}
}
public MouthDetection()
{
}
public void paintIcon(Component c, Graphics g, int x, int y)
{
super.paintIcon(c,g,x,y);
for (int i=0; i<n; i++)
{
System.out.println("x:" + fdlib.getdetectionx(i) + " y:" +
fdlib.getdetectiony(i) + " size:" + fdlib.getdetectionwidth(i));
g.setColor(Color.WHITE);
//DRAW RECTANGLE TO IDENTIFY MOUTH
g.drawRect(fdlib.getdetectionx(i)-(fdlib.getdetectionwidth(i)/8),
    fdlib.getdetectiony(i)+(fdlib.getdetectionwidth(i)/4),
    fdlib.getdetectionwidth(i)/4, fdlib.getdetectionwidth(i)/8);
}
System.out.println("paintIcon");
}
}
NoseDetection.java:
import java.net.URL;
import java.awt.Component;
import java.awt.Graphics;
import java.awt.image.*;
import java.awt.Color;
import java.io.File;
import javax.imageio.*;
import javax.swing.*;
public class NoseDetection extends ImageIcon
{
static fdlibjni fdlib;
static int n;
static JFrame jf;
static JPanel panel;
static int w, h, threshold;
static byte[] img;
static StringBuilder sb = new StringBuilder();
public NoseDetection(URL url)
{
super(url);
}
static void receiveImage(String filepath)
{
jf = new JFrame();
panel = new JPanel();
fdlib = new fdlibjni();
threshold = 0;
try
{
System.out.println("call");
BufferedImage bi = ImageIO.read(new File(filepath));
w = bi.getWidth();
h = bi.getHeight();
System.out.println("image is " + w + "x" + h + " pixels");
img = new byte[w*h];
for (int y=0, i=0; y<h; y++)
{
for (int x=0; x<w; x++, i++)
{
int c = bi.getRGB(x,y);
int r, g, b;
r = (c&0x00ff0000)>>16;
g = (c&0x0000ff00)>>8;
b = (c&0x000000ff);
img[i] = (byte) (.3*r + .59*g + .11*b);
}
}
System.out.println("detecting with threshold " + threshold);
fdlib.detectfaces(img, w, h, threshold);
n = fdlib.getndetections();
if (n==1)
System.out.println("1 face found.");
else
System.out.println(n + " faces found.");
java.io.File f = new java.io.File(filepath);
NoseDetection pi = new NoseDetection(f.toURL());
JLabel l = new JLabel(pi);
String infoImg ="File Name: "+f.getName()
+"\nFile Path: "+f.getPath()
+"\nTotal Number Of Faces Found: "+n;
System.out.println(infoImg);
HomeFile.imageInformation(infoImg);
panel.add(l);
jf.getContentPane().add(panel);
jf.pack();
jf.setTitle("Face Detection Interface");
jf.setVisible(true);
}
catch(Exception e)
{
System.out.println(e);
}
}
public NoseDetection()
{
}
public void paintIcon(Component c, Graphics g, int x, int y)
{
super.paintIcon(c,g,x,y);
for (int i=0; i<n; i++)
{
System.out.println("x:" + fdlib.getdetectionx(i) + " y:" +
fdlib.getdetectiony(i) + " size:" + fdlib.getdetectionwidth(i));
g.setColor(Color.WHITE);
//DRAW RECTANGLE TO IDENTIFY NOSE
g.drawRect(fdlib.getdetectionx(i)-(fdlib.getdetectionwidth(i)/8),
    fdlib.getdetectiony(i),
    fdlib.getdetectionwidth(i)/4, fdlib.getdetectionwidth(i)/8);
}
System.out.println("paintIcon");
}
}
PostAccessCodec.java:
import javax.media.*;
import javax.media.Format;
import javax.media.format.*;
/**
*
* @author Jalwinder
*/
public class PostAccessCodec extends PreAccessCodec {
// We'll advertise as supporting all video formats.
public PostAccessCodec() {
supportedIns = new Format [] {
new RGBFormat()
};
}
/**
* Callback to access individual video frames.
*/
void accessFrame(Buffer frame) {
// do something
}
public String getName() {
return "Post-Access Codec";
}
}
PreAccessCodec.java:
import java.awt.*;
import javax.media.*;
import javax.media.Format;
import javax.media.format.*;
import javax.media.util.BufferToImage;
import java.awt.image.BufferedImage;
import javax.media.util.ImageToBuffer;
import java.io.File;
import javax.imageio.ImageIO;
/*********************************************************
 * A pass-through codec to access individual video frames.
 *********************************************************/
public class PreAccessCodec implements Codec {
static int frame_number = 0;
SearchImages search_images = null;
static boolean skip_frame = false;
/**
* The code for a pass through codec.
*/
// We'll advertise as supporting all video formats.
protected Format supportedIns[] = new Format [] {
new VideoFormat(null)
};
// We'll advertise as supporting all video formats.
protected Format supportedOuts[] = new Format [] {
new VideoFormat(null)
};
Format input = null, output = null;
public String getName() {
return "Pre-Access Codec";
}
// No op.
public void open() {
}
// No op.
public void close() {
}
// No op.
public void reset() {
}
public Format [] getSupportedInputFormats() {
return supportedIns;
}
public Format [] getSupportedOutputFormats(Format in) {
if (in == null)
return supportedOuts;
else {
// If an input format is given, we use that input format
// as the output since we are not modifying the bit stream
// at all.
Format outs[] = new Format[1];
outs[0] = in;
return outs;
}
}
public Format setInputFormat(Format format) {
input = format;
return input;
}
public Format setOutputFormat(Format format) {
output = format;
return output;
}
public int process(Buffer in, Buffer out) {
// This is the "Callback" to access individual frames.
// in is the incoming frame, out is the outgoing frame.
BufferedImage bi = null;
int w = 0, h = 0, n;
byte[] image_buffer = null;
fdlibjni fdlib;
File temp_file = null;
String found_image = null;
String infoImg;
int this_x, this_y, this_width, this_height;
if (GlobalSearchFolder.search == true)
temp_file = new File(GlobalSearchFolder.temp_name);
fdlib = new fdlibjni();
// convert the input buffer to an Image
BufferToImage stopBuffer = new BufferToImage((VideoFormat) in.getFormat());
Image img = stopBuffer.createImage(in);
if (img != null && skip_frame == false)
{
frame_number++;
// convert the img to a BufferedImage
bi = new BufferedImage(img.getWidth(null), img.getHeight(null),
BufferedImage.TYPE_INT_RGB);
// to access the raster
Graphics2D g2 = bi.createGraphics();
g2.drawImage(img, 0, 0, null);
img.flush();
w = bi.getWidth();
h = bi.getHeight();
System.out.println("image is " + w + "x" + h + " pixels");
image_buffer = new byte[w * h];
for (int y = 0, i = 0; y < h; y++)
{
for (int x = 0; x < w; x++, i++)
{
//GET RGB VALUE TO FIND PIXEL COLOR
int c = bi.getRGB(x,y);
int r, g, b;
//FIND RED COLOR
r = (c&0x00ff0000)>>16;
//FIND GREEN
g = (c&0x0000ff00)>>8;
//BLUE COLOR
b = (c&0x000000ff);
//CONVERT INTO GRAYSCALE COLOR
image_buffer[i] = (byte) (.3*r + .59*g + .11*b);
}
}
// detect
fdlib.detectfaces(image_buffer, w, h, 0);
//GET THE COLOR VALUE AND DETECT IMAGE
n = fdlib.getndetections();
infoImg = "\nFrame Number = " + frame_number
+ "\nNumber of Faces Found = " +n ;
if (n > 0)
{
// search for a similar image
if (GlobalSearchFolder.search == true)
{
if (search_images == null)
{
search_images = new SearchImages();
if (GlobalSearchFolder.webcam == false)
search_images.buildImageList(GlobalSearchFolder.search_folder);
else
search_images.BuildBufferedImageList(GlobalSearchFolder.search_folder);
}
// save the file
try
{
ImageIO.write(bi, "jpg", temp_file);
this_x = fdlib.getdetectionx(0);
this_y = fdlib.getdetectiony(0);
this_width = fdlib.getdetectionwidth(0);
if (GlobalSearchFolder.webcam == true)
found_image = search_images.GetMatch(bi, this_x, this_y,
this_width, this_width);
else found_image =
search_images.findImage(GlobalSearchFolder.temp_name);
temp_file.delete();
}
catch (Exception e)
{
found_image = null;
}
}
}
for (int i = 0; i < n; i++)
{
g2.setColor(Color.WHITE);
//DRAW RECTANGLE TO IDENTIFY FACE
g2.drawRect(fdlib.getdetectionx(i)-fdlib.getdetectionwidth(i),
fdlib.getdetectiony(i)-fdlib.getdetectionwidth(i),
fdlib.getdetectionwidth(i)*2,fdlib.getdetectionwidth(i)*2);
g2.drawString(String.valueOf(i+1), fdlib.getdetectionx(i)-(fdlib.getdetectionwidth(i)-10), fdlib.getdetectiony(i)-(fdlib.getdetectionwidth(i)-10));
}
// replace the input buffer with the altered image
if (n > 0)
{
in = ImageToBuffer.createBuffer(bi, 20);
}
g2.dispose();
if (found_image != null)
{
int index_1, index_2;
index_1 = found_image.lastIndexOf('\\');
index_2 = found_image.lastIndexOf('.');
String this_name = found_image.substring(index_1 + 1, index_2);
infoImg = "\nFrame Number = " + frame_number
+ "\nNumber of Faces Found = " + n + "\n" + this_name + " was identified as a match.";
}
HomeFile.imageInformation(infoImg);
skip_frame = true;
}
else skip_frame = false;
// Swap the data between the input & output.
Object data = in.getData();
in.setData(out.getData());
out.setData(data);
// Copy the input attributes to the output
out.setFormat(in.getFormat());
out.setLength(in.getLength());
out.setOffset(in.getOffset());
return BUFFER_PROCESSED_OK;
}
public Object[] getControls() {
return new Object[0];
}
public Object getControl(String type) {
return null;
}
}
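SearchImages.GetMatch (listed below) ranks candidate images by comparing their average-pixel signatures as a percentage and keeping the best candidate at or above the configured threshold. A minimal sketch of that criterion, with made-up signature values:

```java
// Illustrative sketch (all values made up): the matching criterion from
// SearchImages.GetMatch. Two signatures are compared as a percentage
// (smaller divided by larger), and the candidate with the highest
// percentage at or above the threshold wins; -1 marks an unreadable image.
public class MatchSketch {
    static double percent(double a, double b) {
        return (a <= b) ? (a / b) * 100.0 : (b / a) * 100.0;
    }

    static int bestMatch(double matchSig, double[] sigs, double threshold) {
        int index = -1;
        double best = 0.0;
        for (int i = 0; i < sigs.length; i++) {
            if (sigs[i] == -1) continue; // unreadable image, skipped
            double p = percent(matchSig, sigs[i]);
            if (p >= threshold && p >= best) {
                best = p;
                index = i;
            }
        }
        return index;
    }

    public static void main(String[] args) {
        double[] sigs = { 100.0, 180.0, -1, 210.0 };
        // 200 vs 210 gives about 95.2%, the closest candidate above 85%.
        System.out.println(bestMatch(200.0, sigs, 85.0)); // prints 3
    }
}
```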
SearchImages.java:
// class to find an image within an image directory that has a
// close match to the input image
import java.io.File;
import java.awt.Color;
import java.awt.image.RenderedImage;
import java.awt.image.renderable.ParameterBlock;
import javax.media.jai.InterpolationNearest;
import javax.media.jai.JAI;
import javax.imageio.ImageIO;
import javax.media.jai.iterator.RandomIter;
import javax.media.jai.iterator.RandomIterFactory;
import java.awt.Window;
import java.awt.image.*;
import java.io.FileOutputStream;
import javax.media.jai.Interpolation;
import java.awt.*;
import java.io.FileInputStream;
public class SearchImages {
File[] image_files; // list of image files in the folder
RenderedImage[] rothers;
BufferedImage[] bothers;
double [] bsignatures;
static private Color[][] signature;
private static final int baseSize = 300;
static fdlibjni fdlib;
BufferedImage bi;
int this_x, this_y, this_width, this_height;
public int getFaces(File filepath)
{
int w, h, n;
byte[] img;
try
{
bi = ImageIO.read(filepath);
}
catch (Exception e)
{
return(0);
}
w = bi.getWidth();
h = bi.getHeight();
fdlib = new fdlibjni();
System.out.println("image is " + w + "x" + h + " pixels");
img = new byte[w*h];
for (int y=0, i=0; y<h; y++)
{
for (int x=0; x<w; x++, i++)
{
//GET RGB VALUE TO FIND PIXEL COLOR
int c = bi.getRGB(x,y);
int r, g, b;
//FIND RED COLOR
r = (c&0x00ff0000)>>16;
//FIND GREEN
g = (c&0x0000ff00)>>8;
//BLUE COLOR
b = (c&0x000000ff);
//CONVERT INTO GRAYSCALE COLOR
img[i] = (byte) (.3*r + .59*g + .11*b);
}
}
fdlib.detectfaces(img, w, h, 0);
//GET THE COLOR VALUE AND DETECT IMAGE
n = fdlib.getndetections();
if (n > 0)
{
this_x = fdlib.getdetectionx(0);
this_y = fdlib.getdetectiony(0);
this_width = fdlib.getdetectionwidth(0);
this_height = fdlib.getdetectionwidth(0);
}
return(n);
}
public int getFaces(BufferedImage b_image)
{
int w, h, n;
byte[] img;
w = b_image.getWidth();
h = b_image.getHeight();
fdlib = new fdlibjni();
img = new byte[w*h];
for (int y=0, i=0; y<h; y++)
{
for (int x=0; x<w; x++, i++)
{
//GET RGB VALUE TO FIND PIXEL COLOR
int c = b_image.getRGB(x,y);
int r, g, b;
//FIND RED COLOR
r = (c&0x00ff0000)>>16;
//FIND GREEN
g = (c&0x0000ff00)>>8;
//BLUE COLOR
b = (c&0x000000ff);
//CONVERT INTO GRAYSCALE COLOR
img[i] = (byte) (.3*r + .59*g + .11*b);
}
}
fdlib.detectfaces(img, w, h, 0);
//GET THE COLOR VALUE AND DETECT IMAGE
n = fdlib.getndetections();
if (n > 0)
{
this_x = fdlib.getdetectionx(0);
this_y = fdlib.getdetectiony(0);
this_width = fdlib.getdetectionwidth(0);
this_height = fdlib.getdetectionwidth(0);
}
return(n);
}
public void BuildBufferedImageList(String path)
{
int i;
BufferedImage this_image;
Image temp_image, image;
int c_x, c_y, c_height;
File dir = new File(path);
image_files = dir.listFiles(new JPEGImageFileFilter());
bothers = new BufferedImage[image_files.length];
bsignatures = new double[image_files.length];
for (i = 0; i < image_files.length; i++)
{
try
{
this_image = ImageIO.read(image_files[i]);
// get the face
if ((getFaces(this_image)) == 1)
{
c_x = this_x - (this_width / 2);
c_y = this_y - (this_width / 2);
c_height = this_y - this_width;
bothers[i] = this_image.getSubimage(c_x, c_y, this_width, c_height);
bsignatures[i] = GetSignature(bothers[i]);
}
else
{
bothers[i] = this_image;
bsignatures[i] = GetSignature(bothers[i]);
}
}
catch (Exception e)
{
bothers[i] = null;
bsignatures[i] = -1;
}
}
}
public String GetMatch(BufferedImage match_image, int this_x, int this_y, int
this_width, int this_height)
{
double match_sig, this_dist, small_dist, percent;
BufferedImage crop_image;
int index, i, c_x, c_y, c_height;
index = -1;
small_dist = 0;
c_x = this_x - (this_width / 2);
c_y = this_y - (this_width / 2);
c_height = this_y - this_width;
crop_image = match_image.getSubimage(c_x, c_y, this_width, c_height);
FileOutputStream os;
try
{
os = new FileOutputStream("c:\\crap.png");
ImageIO.write(crop_image, "PNG", os);
}
catch (Exception e)
{
}
match_sig = GetSignature(crop_image);
for (i = 0; i < image_files.length; i++)
{
if (bsignatures[i] != -1)
{
if (match_sig <= bsignatures[i]) percent = (match_sig / bsignatures[i]) * 100.0;
else percent = (bsignatures[i] / match_sig) * 100.0;
if (percent >= GlobalSearchFolder.threshold && percent >= small_dist)
{
small_dist = percent;
index = i;
}
}
}
if (index == -1) return(null);
return(image_files[index].getAbsolutePath());
}
public double GetSignature(BufferedImage this_image)
{
double signature;
int x, y, pixel_count;
pixel_count = 0;
signature = 0;
for (x = 0; x < this_image.getWidth(); x++)
{
for (y = 0; y < this_image.getHeight(); y++)
{
signature = signature + this_image.getRGB(x, y);
pixel_count++;
}
}
if (pixel_count == 0) return(0);
if (signature < 0) signature = -signature;
return(signature / pixel_count);
}
public void NewbuildImageList(String path)
{
// builds a list of images in the given directory.
int i, n;
File dir = new File(path);
image_files = dir.listFiles(new JPEGImageFileFilter());
rothers = new RenderedImage[image_files.length];
for (i = 0; i < image_files.length; i++)
{
try
{
n = getFaces(image_files[i]);
if (n > 0)
rothers[i] = Newrescale(ImageIO.read(image_files[i]), this_x, this_y,
this_width, this_height);
// rothers[i] = rescale(ImageIO.read(image_files[i]));
else rothers[i] = rescale(ImageIO.read(image_files[i]));
}
catch (Exception e)
{
}
}
return;
}
public void buildImageList(String path)
{
// builds a list of images in the given directory.
int i;
File dir = new File(path);
image_files = dir.listFiles(new JPEGImageFileFilter());
rothers = new RenderedImage[image_files.length];
for (i = 0; i < image_files.length; i++)
{
try
{
rothers[i] = rescale(ImageIO.read(image_files[i]));
}
catch (Exception e)
{
}
}
return;
}
public String NewfindImage(String source_image, int x, int y, int width, int height)
{
// finds the image with the closest match
// return string with the image name.
// build the image list
File f_image = new File(source_image);
double small_dist = 20000.0;
int i, index = -1;
String check_name;
// find a close image
try
{
// calculate the signature of the source image
RenderedImage ref = Newrescale(ImageIO.read(f_image), x, y, width, height);
signature = calcSignature(ref);
// calculate the distance to the other images
double[] distances = new double[image_files.length];
for (i = 0; i < image_files.length; i++)
{
distances[i] = calcDistance(rothers[i]);
}
// now find the nearest image.
for (i = 0; i < image_files.length; i++)
{
check_name = image_files[i].toString();
if (check_name.equalsIgnoreCase(GlobalSearchFolder.temp_name) != true)
{
if (distances[i] <= small_dist && distances[i] <=
GlobalSearchFolder.threshold)
{
index = i;
small_dist = distances[i];
}
}
}
}
catch (Exception e)
{
return(null);
}
// if a close image was found return it
if (index != -1) return(image_files[index].toString());
return(null);
}
public String findImage(String source_image)
{
// finds the image with the closest match
// return string with the image name.
// build the image list
File f_image = new File(source_image);
double small_dist = 20000.0;
int i, index = -1;
String check_name;
// find a close image
try
{
// calculate the signature of the source image
RenderedImage ref = rescale(ImageIO.read(f_image));
signature = calcSignature(ref);
// calculate the distance to the other images
double[] distances = new double[image_files.length];
for (i = 0; i < image_files.length; i++)
{
distances[i] = calcDistance(rothers[i]);
}
// now find the nearest image.
for (i = 0; i < image_files.length; i++)
{
check_name = image_files[i].toString();
if (check_name.equalsIgnoreCase(GlobalSearchFolder.temp_name) != true)
{
if (distances[i] <= small_dist && distances[i] <=
GlobalSearchFolder.threshold)
{
index = i;
small_dist = distances[i];
}
}
}
}
catch (Exception e)
{
return(null);
}
// if a close image was found return it
if (index != -1) return(image_files[index].toString());
return(null);
}
private RenderedImage Newrescale(RenderedImage i, int x, int y, int width, int height)
{
RenderedImage crop_image, scaled_image;
FileOutputStream os = null;
// crop the image
ParameterBlock crop_pb = new ParameterBlock();
crop_pb.addSource(i);
crop_pb.add((float) this_x - (this_width / 2));
crop_pb.add((float) this_y - (this_width / 2));
crop_pb.add((float) this_width );
crop_pb.add((float) this_width); // crop height equals the width for a square region
int xx, yy;
xx = i.getWidth();
yy = i.getHeight();
crop_image = null;
try
{
crop_image = JAI.create("crop", crop_pb, null);
os = new FileOutputStream("c:\\crap.png");
JAI.create("encode", crop_image, os, "PNG", null);
}
catch (Exception e)
{
// ignore cropping/encoding failures; crop_image stays null
}
// scale the image.
Image this_image = null;
BufferedImage bi = null;
Image new_scaled_image = null;
FileInputStream is;
try
{
this_image = ImageIO.read(new File("c:\\crap.png"));
new_scaled_image = this_image.getScaledInstance(300, 300,
Image.SCALE_REPLICATE);
bi = new BufferedImage(new_scaled_image.getWidth(null),
new_scaled_image.getHeight(null),
BufferedImage.TYPE_INT_RGB);
Graphics2D g2 = bi.createGraphics();
g2.drawImage(new_scaled_image, 0, 0, null);
ImageIO.write(bi, "PNG", new File("c:\\crap.png"));
}
catch (Exception e)
{
}
float scaleW = ((float) baseSize) / crop_image.getWidth();
float scaleH = ((float) baseSize) / crop_image.getHeight();
// Scales the original image
ParameterBlock pb = new ParameterBlock();
pb.addSource(crop_image);
pb.add(scaleW);
pb.add(scaleH);
pb.add(0.0F);
pb.add(0.0F);
pb.add(Interpolation.getInstance(Interpolation.INTERP_BICUBIC));
// Creates a new, scaled image and uses it on the DisplayJAI component
scaled_image = JAI.create("scale", pb);
return (bi);
}
private RenderedImage rescale(RenderedImage i)
{
// scale the image.
float scaleW = ((float) baseSize) / i.getWidth();
float scaleH = ((float) baseSize) / i.getHeight();
// Scales the original image
ParameterBlock pb = new ParameterBlock();
pb.addSource(i);
pb.add(scaleW);
pb.add(scaleH);
pb.add(0.0F);
pb.add(0.0F);
pb.add(new InterpolationNearest());
// Creates a new, scaled image and uses it on the DisplayJAI component
return JAI.create("scale", pb);
}
private Color[][] calcSignature(RenderedImage i)
{
// calculate the signature for a given image
// Get memory for the signature.
Color[][] sig = new Color[5][5];
// For each of the 25 signature values average the pixels around it.
// Note that the coordinate of the central pixel is in proportions.
float[] prop = new float[]
{1f / 10f, 3f / 10f, 5f / 10f, 7f / 10f, 9f / 10f};
for (int x = 0; x < 5; x++)
for (int y = 0; y < 5; y++)
sig[x][y] = averageAround(i, prop[x], prop[y]);
return sig;
}
private Color averageAround(RenderedImage i, double px, double py)
{
// Get an iterator for the image.
RandomIter iterator = RandomIterFactory.create(i, null);
// Get memory for a pixel and for the accumulator.
double[] pixel = new double[3];
double[] accum = new double[3];
// The size of the sampling area.
int sampleSize = 15;
int numPixels = 0;
// Sample the pixels.
for (double x = px * baseSize - sampleSize; x < px * baseSize + sampleSize; x++)
{
for (double y = py * baseSize - sampleSize; y < py * baseSize + sampleSize; y++)
{
iterator.getPixel((int) x, (int) y, pixel);
accum[0] += pixel[0];
accum[1] += pixel[1];
accum[2] += pixel[2];
numPixels++;
}
}
// Average the accumulated values.
accum[0] /= numPixels;
accum[1] /= numPixels;
accum[2] /= numPixels;
return new Color((int) accum[0], (int) accum[1], (int) accum[2]);
}
private double calcDistance(RenderedImage other)
{
// Calculate the distance for that image.
Color[][] sigOther = calcSignature(other);
// There are several ways to calculate distances between two vectors,
// we will calculate the sum of the distances between the RGB values of
// pixels in the same positions.
double dist = 0;
for (int x = 0; x < 5; x++)
for (int y = 0; y < 5; y++)
{
int r1 = signature[x][y].getRed();
int g1 = signature[x][y].getGreen();
int b1 = signature[x][y].getBlue();
int r2 = sigOther[x][y].getRed();
int g2 = sigOther[x][y].getGreen();
int b2 = sigOther[x][y].getBlue();
double tempDist = Math.sqrt((r1 - r2) * (r1 - r2) + (g1 - g2)
* (g1 - g2) + (b1 - b2) * (b1 - b2));
dist += tempDist;
}
return dist;
}
}
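The signature matching in calcSignature and calcDistance above can be exercised in isolation. The following is a minimal, self-contained sketch (the class name ColorSignatureDemo and its helpers are hypothetical, and no JAI dependency is needed) of the same idea: a 5x5 grid of averaged colors compared by summing per-cell Euclidean distances in RGB space.

```java
import java.awt.Color;

public class ColorSignatureDemo {
    // Sum of per-cell Euclidean RGB distances between two 5x5 color
    // signatures, mirroring calcDistance in SearchImages.
    static double distance(Color[][] a, Color[][] b) {
        double dist = 0;
        for (int x = 0; x < 5; x++) {
            for (int y = 0; y < 5; y++) {
                int dr = a[x][y].getRed() - b[x][y].getRed();
                int dg = a[x][y].getGreen() - b[x][y].getGreen();
                int db = a[x][y].getBlue() - b[x][y].getBlue();
                dist += Math.sqrt(dr * dr + dg * dg + db * db);
            }
        }
        return dist;
    }

    // Builds a 5x5 signature filled with one color, standing in for the
    // averaged samples calcSignature would produce from a real image.
    static Color[][] uniform(Color c) {
        Color[][] sig = new Color[5][5];
        for (int x = 0; x < 5; x++)
            for (int y = 0; y < 5; y++)
                sig[x][y] = c;
        return sig;
    }

    public static void main(String[] args) {
        Color[][] black = uniform(Color.BLACK);
        Color[][] white = uniform(Color.WHITE);
        System.out.println("identical signatures: " + distance(black, black));
        System.out.println("opposite signatures:  " + distance(black, white));
    }
}
```

A zero distance means a perfect signature match; findImage accepts the closest candidate only when its distance also falls under GlobalSearchFolder.threshold.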
ShowImageInfo.java:
import java.awt.image.ColorModel;
import java.awt.image.DataBuffer;
import java.awt.image.SampleModel;
import java.io.File;
import javax.media.jai.JAI;
import javax.media.jai.PlanarImage;
/**
* This class displays basic information about images.
*/
public class ShowImageInfo
{
static StringBuilder sb = new StringBuilder();
static String printInfo(String s)
{
System.out.println("===================================================");
// Display image data. First, the image size (non-JAI related).
File image = new File(s);
System.out.printf("File: %s Size in bytes: %d\n",s,image.length());
sb.append("File:"+s+"\nSize in bytes:"+image.length()+"\n");
// Open the image (using the name passed as a command line parameter)
PlanarImage pi = JAI.create("fileload",s);
// Now let's display the image dimensions and coordinates.
System.out.print("Dimensions: ");
System.out.print(pi.getWidth()+"x"+pi.getHeight()+" pixels");
sb.append("Dimensions: "+pi.getWidth()+"x"+pi.getHeight()+" pixels\n");
// Remember getMaxX and getMaxY return the coordinate of the next point!
System.out.println(" (from "+pi.getMinX()+","+pi.getMinY()+
" to " +(pi.getMaxX()-1)+","+(pi.getMaxY()-1)+")");
sb.append(" (from "+pi.getMinX()+","+pi.getMinY()+
" to " +(pi.getMaxX()-1)+","+(pi.getMaxY()-1)+")\n");
System.out.print(pi.getTileWidth()+"x"+pi.getTileHeight()+" pixels");
// Display info about the SampleModel of the image.
SampleModel sm = pi.getSampleModel();
System.out.print("Data type: ");
sb.append("Data type: ");
switch(sm.getDataType())
{
case DataBuffer.TYPE_BYTE: {
System.out.println("byte"); sb.append("byte\n"); break; }
case DataBuffer.TYPE_SHORT: {
System.out.println("short"); sb.append("short\n"); break; }
case DataBuffer.TYPE_USHORT: {
System.out.println("ushort"); sb.append("ushort\n"); break; }
case DataBuffer.TYPE_INT: {
System.out.println("int"); sb.append("int\n"); break; }
case DataBuffer.TYPE_FLOAT: {
System.out.println("float"); sb.append("float\n"); break; }
case DataBuffer.TYPE_DOUBLE: {
System.out.println("double"); sb.append("double\n"); break; }
case DataBuffer.TYPE_UNDEFINED: {
System.out.println("undefined"); sb.append("undefined\n"); break; }
}
// Display info about the ColorModel of the image.
ColorModel cm = pi.getColorModel();
if (cm != null)
{
System.out.println("Number of color components: "+cm.getNumComponents());
sb.append("Number of color components: "+cm.getNumComponents()+"\n");
System.out.print("Colorspace components' names: ");
sb.append("Colorspace components' names: ");
for(int i=0;i<cm.getNumComponents();i++)
{
System.out.print(cm.getColorSpace().getName(i)+" ");
sb.append(cm.getColorSpace().getName(i)+" ");
}
sb.append("\n");
System.out.println();
System.out.println("Bits per pixel: "+cm.getPixelSize());
sb.append("Bits per pixel: "+cm.getPixelSize()+"\n");
}
else
{
System.out.println("No color model.");
sb.append("No color model.");
}
return sb.toString();
}
}
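The same kind of metadata ShowImageInfo reports through JAI (dimensions, sample data type, color components, bits per pixel) can also be read from a plain BufferedImage. The sketch below (the class name ImageInfoDemo is hypothetical) uses only the standard java.awt.image API, so it runs without a JAI installation; it inspects an in-memory image rather than loading a file.

```java
import java.awt.image.BufferedImage;
import java.awt.image.ColorModel;
import java.awt.image.DataBuffer;

public class ImageInfoDemo {
    // Builds a report similar to ShowImageInfo.printInfo, but from a
    // BufferedImage instead of a JAI PlanarImage.
    static String describe(BufferedImage img) {
        StringBuilder sb = new StringBuilder();
        sb.append("Dimensions: ").append(img.getWidth()).append("x")
          .append(img.getHeight()).append(" pixels\n");
        sb.append("Data type: ")
          .append(img.getSampleModel().getDataType() == DataBuffer.TYPE_BYTE
                  ? "byte" : "other").append("\n");
        ColorModel cm = img.getColorModel();
        sb.append("Number of color components: ")
          .append(cm.getNumComponents()).append("\n");
        sb.append("Bits per pixel: ").append(cm.getPixelSize());
        return sb.toString();
    }

    public static void main(String[] args) {
        // A 320x240 image with three byte-packed BGR components.
        BufferedImage img = new BufferedImage(320, 240,
                BufferedImage.TYPE_3BYTE_BGR);
        System.out.println(describe(img));
    }
}
```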
fdVideoTest.java:
import javax.media.*;
import java.io.File;
public class fdVideoTest {
// class to open and start processing a video
static void receiveWebCam(String SearchFolder)
{
// this is for a webcam.
CaptureDeviceInfo device;
MediaLocator ml = null;
FrameAccess fa = new FrameAccess();
//initiates the camera
device = CaptureDeviceManager.getDevice("vfw:Microsoft WDM Image Capture (Win32):0");
//gets the location of the device, is needed for the player
ml = device.getLocator();
// set the threshold for image matches for searching
GlobalSearchFolder.threshold = 75.0;
GlobalSearchFolder.webcam = true;
// open the webcam
fa.open(ml, SearchFolder);
}
static void receiveVideo(String FilePath, String SearchFolder)
{
MediaLocator ml = null;
// this is for a file
File f = new File(FilePath);
FrameAccess fa = new FrameAccess();
try {
ml = new MediaLocator(f.toURL());
}
catch (Exception e) {
}
// open the video
GlobalSearchFolder.threshold = 100.0;
GlobalSearchFolder.webcam = false;
fa.open(ml, SearchFolder);
}
}
fdlibjni.java:
public class fdlibjni
{
static {
System.loadLibrary("fdlibjni");
}
native void detectfaces(byte[] imagedata, int imagewidth, int imageheight, int
threshold);
native int getndetections();
native int getdetectionx(int nr);
native int getdetectiony(int nr);
native int getdetectionwidth(int nr);
}
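The fdlibjni wrapper loads its native detector in a static initializer, so a missing fdlibjni.dll throws UnsatisfiedLinkError the moment the class is touched. A more forgiving pattern, sketched below (the class name NativeLoaderDemo is hypothetical, not part of the project), probes for the library first so the application can report the problem instead of crashing.

```java
public class NativeLoaderDemo {
    // Attempts to load a native library by name and reports whether it is
    // available, instead of letting a static initializer propagate
    // UnsatisfiedLinkError.
    static boolean tryLoad(String name) {
        try {
            System.loadLibrary(name);
            return true;
        } catch (UnsatisfiedLinkError e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println("fdlibjni available: " + tryLoad("fdlibjni"));
    }
}
```

Only after tryLoad succeeds would the caller construct the fdlibjni wrapper and invoke its native detectfaces/getndetections methods.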
fdtest.java:
import java.net.URL;
import java.awt.Component;
import java.awt.Graphics;
import java.awt.image.*;
import java.awt.Color;
import java.io.File;
import javax.imageio.*;
import javax.swing.*;
public class fdtest extends ImageIcon
{
static fdlibjni fdlib;
static int n;
static JFrame jf, jf1;
static JPanel panel;
static int w, h, threshold;
static byte[] img;
static StringBuilder sb = new StringBuilder();
public fdtest(URL url)
{
super(url);
}
static void receiveImage(String filepath, String searchfolder)
{
//RECEIVE JPEG IMAGE FILE PATH
jf = new JFrame();
jf1 = new JFrame();
panel = new JPanel();
fdlib = new fdlibjni();
threshold = 0;
SearchImages search_images;
String close_image;
GlobalSearchFolder.threshold = 100.0;
try
{
System.out.println("call");
//READ THE IMAGE
BufferedImage bi = ImageIO.read(new File(filepath));
w = bi.getWidth();
h = bi.getHeight();
System.out.println("image is " + w + "x" + h + " pixels");
img = new byte[w*h];
for (int y=0, i=0; y<h; y++)
{
for (int x=0; x<w; x++, i++)
{
//GET RGB VALUE TO FIND PIXEL COLOR
int c = bi.getRGB(x,y);
int r, g, b;
//FIND RED COLOR
r = (c&0x00ff0000)>>16;
//FIND GREEN
g = (c&0x0000ff00)>>8;
//BLUE COLOR
b = (c&0x000000ff);
//CONVERT INTO GRAYSCALE COLOR
img[i] = (byte) (.3*r + .59*g + .11*b);
}
}
System.out.println("detecting with threshold " + threshold);
fdlib.detectfaces(img, w, h, threshold);
//GET THE COLOR VALUE AND DETECT IMAGE
n = fdlib.getndetections();
if (n==1)
System.out.println("1 face found.");
else
System.out.println(n + " faces found.");
java.io.File f = new java.io.File(filepath);
//SET DETECTED IMAGE IN LABEL
fdtest pi = new fdtest(f.toURL());
JLabel l = new JLabel(pi);
String infoImg ="File Name: "+f.getName()
+"\nFile Path: "+f.getPath()
+"\nTotal Number Of Faces Found: "+n;
System.out.println(infoImg);
HomeFile.imageInformation(infoImg);
panel.add(l);
jf.getContentPane().add(panel);
jf.pack();
jf.setTitle("Face Detection Interface");
jf.setVisible(true);
if (n > 0 && searchfolder != null)
{
// find a similar image in the search folder
search_images = new SearchImages();
search_images.buildImageList(searchfolder);
close_image = search_images.findImage(filepath);
if (close_image != null)
{
java.io.File fx = new java.io.File(close_image);
fdtest pix = new fdtest(fx.toURL());
JLabel lx = new JLabel(pix);
panel.add(lx);
jf1.getContentPane().add(panel);
jf1.pack();
jf1.setTitle("Face Detection Interface");
jf1.setVisible(true);
int index_1, index_2;
index_1 = close_image.lastIndexOf('\\');
index_2 = close_image.lastIndexOf('.');
String this_name = close_image.substring(index_1 + 1, index_2);
String search_info = "\n";
search_info = search_info + this_name + " was identified as a match.";
HomeFile.imageInformation(search_info);
}
else
{
HomeFile.imageInformation("Nobody was identified as a match.");
}
}
}
catch(Exception e)
{
System.out.println(e);
}
}
public fdtest()
{
}
public void paintIcon(Component c, Graphics g, int x, int y)
{
super.paintIcon(c,g,x,y);
for (int i=0; i<n; i++)
{
//GET X Y AXIS
System.out.println("x:" + fdlib.getdetectionx(i) + " y:" +
fdlib.getdetectiony(i) + " size:" + fdlib.getdetectionwidth(i));
//SET RECT COLOR AS WHITE
g.setColor(Color.WHITE);
//DRAW RECTANGLE TO IDENTIFY FACE
g.drawRect(fdlib.getdetectionx(i) - fdlib.getdetectionwidth(i),
fdlib.getdetectiony(i) - fdlib.getdetectionwidth(i),
fdlib.getdetectionwidth(i) * 2, fdlib.getdetectionwidth(i) * 2);
//SET STRING TO COUNT FACE
g.drawString(String.valueOf(i+1), fdlib.getdetectionx(i)-(fdlib.getdetectionwidth(i)-10),
fdlib.getdetectiony(i)-(fdlib.getdetectionwidth(i)-10));
Color s=g.getColor();
if(192<=s.getGreen() && s.getGreen()<=255 && 192<=s.getBlue() &&
s.getBlue()<=255 && 192<=s.getRed() && s.getRed()<=255)
{
System.out.println("Person "+(i+1)+" : White");
sb.append("Person "+(i+1)+" : White\n");
}
else if(128<=s.getGreen() && s.getGreen()<=192 && 128<=s.getBlue() &&
s.getBlue()<=192 && 128<=s.getRed() && s.getRed()<=192)
{
System.out.println("Person "+(i+1)+" : Light White");
sb.append("Person "+(i+1)+" : Light White\n");
}
else if(64<=s.getGreen() && s.getGreen()<=128 && 64<=s.getBlue() &&
s.getBlue()<=128 && 64<=s.getRed() && s.getRed()<=128)
{
System.out.println("Person "+(i+1)+" : Black");
sb.append("Person "+(i+1)+" : Black\n");
}
}
System.out.println("paintIcon");
}
}
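Before handing pixels to the native detector, fdtest.receiveImage unpacks each ARGB value into its red, green, and blue channels and converts it to grayscale with the weighted luminance formula .3r + .59g + .11b. That conversion can be sketched on its own as follows (the class name GrayscaleDemo is hypothetical):

```java
public class GrayscaleDemo {
    // Unpacks a packed ARGB int and applies the same luminance weights
    // fdtest uses: gray = .3*R + .59*G + .11*B, truncated to an integer.
    static int toGray(int argb) {
        int r = (argb >> 16) & 0xff;  // bits 16-23: red
        int g = (argb >> 8) & 0xff;   // bits 8-15:  green
        int b = argb & 0xff;          // bits 0-7:   blue
        return (int) (.3 * r + .59 * g + .11 * b);
    }

    public static void main(String[] args) {
        System.out.println("black -> " + toGray(0xFF000000));
        System.out.println("red   -> " + toGray(0xFFFF0000));
    }
}
```

The green channel carries the largest weight because human vision is most sensitive to green; in the project each grayscale value is then narrowed to a byte for the fdlib detector's input buffer.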