
Real time Access Control Using Face Recognition

SYNOPSIS FOR M.E. DISSERTATION
M.E. Electronics - Part II (SEMESTER-III)
1. NAME OF THE COLLEGE : - KARMAVEER BHAURAO PATIL COLLEGE OF ENGINEERING, SATARA.
2. NAME OF THE COURSE : - M.E. Electronics
3. NAME OF THE STUDENT (With PRN No.) : - MR. MULLA VASIM BASHIR (PRN No. 1412703201)
4. DATE OF REGISTRATION : - August 2016
5. NAME OF THE GUIDE (With PG Recognition No. and Date) : - Dr. Patil Vikram S., Department of Electronics, KBPCOE, Satara. (SU/PG/BUTR/RECOG/2254/15/06/2012)
6. PROPOSED TITLE : - Real time Access Control Using Face Recognition With Multiple Visitor Detection.
7. INTRODUCTION
Face recognition is a form of biometric identification involving recognition of
individuals based on the salient characteristics of their face images [1]. Be it for government
use such as law enforcement, voter identification, surveillance and immigration, or for
commercial use such as gaming industry, face tagging on internet, e-commerce, healthcare
and banking, a large number of real world applications utilize face recognition. As a result,
there has been enormous interest in this area of research.
A face recognition system is a complex image-processing problem in real-world applications, with complex effects of illumination, occlusion, and imaging conditions on the live images. It is a combination of face detection and recognition techniques in image analysis. The detection stage is used to find the positions of the faces in a given image. The recognition algorithm is used to classify given images with known structured properties, as is common in most computer vision applications. These images have some known properties, such as the same resolution, the same facial feature components, and similar eye alignment. These images will be referred to as "standard images" in the following sections. Recognition applications use standard images, while detection algorithms detect the faces and extract face images that include the eyes, eyebrows, nose, and mouth. This makes the combined algorithm more complicated than a single detection or recognition algorithm. The first step of a face recognition system is to acquire an image from a camera. The second step is face detection in the acquired image. The third step is face recognition, which takes the face images from the output of the detection part. The final step is the person's identity, produced as the result of the recognition part.
Fig.1. Steps of Face Recognition System Applications
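For illustration only, a minimal MATLAB sketch of this four-step flow is given below. It uses the Computer Vision Toolbox's built-in Viola-Jones detector (vision.CascadeObjectDetector) purely to show the acquire-detect-recognise-identify sequence; this is not the skin-colour based detection method proposed later in this synopsis, and the file name and the recogniseFace call are hypothetical placeholders.

% Illustrative four-step flow: acquire image -> detect faces -> recognise -> identity.
% Uses the toolbox's Viola-Jones detector only for illustration; not the proposed method.
I = imread('frame_from_camera.jpg');        % step 1: image acquired from the camera (placeholder file)
detector = vision.CascadeObjectDetector();  % step 2: face detection
bboxes = step(detector, I);                 % bounding boxes of detected faces
for k = 1:size(bboxes, 1)
    face = imcrop(I, bboxes(k, :));         % step 3: face image passed to recognition
    % identity = recogniseFace(face, database);  % step 4: hypothetical recognition call
end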
Types of biometrics
Biometrics can measure both physiological and behavioural characteristics.
Fig 2. Classification of biometrics.
Why we choose face recognition over other biometrics?
There are a number of reasons to choose face recognition. These include the following:
1. It requires no physical interaction on behalf of the user.
2. It is accurate and allows for high enrolment and verification rates.
3. It does not require an expert to interpret the comparison result.
4. It can use your existing hardware infrastructure; existing cameras and image capture devices will work with no problems.
5. It is the only biometric that allows you to perform passive identification in a one-to-many environment (e.g. identifying a terrorist in a busy airport terminal).
8. RELEVANCE
Face recognition has been one of the most interesting and important research fields in the past two decades. The reasons come from the need for automatic recognition and surveillance systems, the interest in how the human visual system performs face recognition, the design of human-computer interfaces, etc. This research involves knowledge and researchers from disciplines such as neuroscience, psychology, computer vision, pattern recognition, image processing, and machine learning.
The initial stage in a face recognition system is to detect the face in an input image; the main objective of this stage is to find all the faces that appear in the image irrespective of pose, aging, expression, illumination and disguise.
In face recognition, where the purpose is to localize and extract the face region from the background, factors like pose, illumination, occlusion and the size of the image make it difficult to detect or recognize the face correctly. Yang, Kriegman and Ahuja presented a classification that is well accepted [2]. The methods are categorized into four types: knowledge-based methods, feature-invariant approaches, template matching, and appearance-based methods. Each of these methods has its own limitations.
Methods and their drawbacks:

Knowledge-Based Methods
– Difficult to translate human knowledge into rules precisely.
– Difficult to extend this approach to detect faces in different poses.

Feature-Invariant Methods
– Difficult to locate facial features due to several corruptions (illumination, noise, and occlusion).
– Difficult to detect features in a complex background.

Template-Based Methods
– Templates need to be initialized near the face images.
– Difficult to enumerate templates for various types of poses.

Appearance-Based Methods
– The major problem with these methods is that they require a very long computation time in the training phase.
Face detection by skin colour thresholding enhances the input image, segments the skin regions in the RGB and YCbCr colour spaces, and combines the edge image with the skin colour image to separate the skin regions from the background. The advantage of this method is the detection of faces under different illumination conditions, with different sizes, different poses, and different expressions.
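A minimal MATLAB sketch of this idea is given below, assuming an input image file and typical Cb/Cr skin ranges; the file name and the threshold values are illustrative assumptions and not taken from this synopsis.

% Skin segmentation in YCbCr combined with an edge image (illustrative values).
I   = imread('visitors.jpg');                              % hypothetical input image
ycc = rgb2ycbcr(I);
Cb  = ycc(:, :, 2);
Cr  = ycc(:, :, 3);
skin  = (Cb >= 77 & Cb <= 127) & (Cr >= 133 & Cr <= 173);  % commonly used CbCr skin range (assumption)
edges = edge(rgb2gray(I), 'sobel');                        % edge image of the same frame
skinRegions = skin & ~edges;                               % edges help separate skin regions from background
skinRegions = imfill(skinRegions, 'holes');
imshow(skinRegions);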
9. LITERATURE REVIEW
Face detection is the first step of a face recognition system. The output of detection can be the location of the face region as a whole, or the location of the face region together with its facial features (i.e. eyes, mouth, eyebrows, nose etc.). Detection methods in the literature are difficult to classify strictly, because most algorithms combine several methods to increase detection accuracy. Facial features are important information for human faces, and standard images can be generated using this information. In the literature, many detection algorithms based on facial features are available. Ramya Srinivasan et al. [1] detect faces and facial features by extracting skin-like regions in the YCbCr colour space, and edges are then detected within the skin-like regions. Next, the eyes are found with Principal Component Analysis (PCA) [3] on the edged region. Finally, the mouth is found based on geometrical information. Another approach extracts skin-like regions in the normalized RGB colour space, and the face is verified by template matching; to find the eyes, eyebrows and mouth, colour snakes are applied to the verified face image. In Bouzas and Arvanitopoulos [4], faces are verified with a linear Support Vector Machine (SVM). For final verification of the face, the eyes and mouth are found using the difference between Cb and Cr: for the eye region the Cb value is greater than the Cr value, and for the mouth region the Cr value is greater than the Cb value. Another application segments skin-like regions with a statistical model.
Dahmane and Meunier built a prototype to model facial expressions, considering various approaches to facial characteristics [5]. Image quality also plays an important role in face recognition systems: the higher the clarity of the face in an image, the higher the accuracy, and vice versa. This parameter is well explained by Jiansheng Chen et al. [6]. To a large extent, the success of any face recognition system depends on the quality of the image provided. Face recognition is considered one of the most important biometric methods, displaying some advantages over other biometric approaches in being natural and passive and not requiring the cooperation of individuals, unlike techniques such as iris recognition. This characteristic of facial recognition makes it ideal for applications in the field of security, where it is necessary to perform recognition through security cameras without the cooperation of the individual. Hence, the various tasks of face recognition have to be studied [7]. Wang and Ji suggested a method which increases the performance of the system; their experimental results show that, using performance modeling, the accuracy of the system can be improved [8]. The segmented parts are taken as candidates, and verification is done by calculating the entropy of the candidate image and using thresholding to verify the face candidate [9]. Qiang-rong and Hua-lan [10] applied white balance correction before detecting faces: the colour value is important for segmentation, and the acquired image may reflect false colours, so white balance correction should be done as a first step. Skin colour can also be modeled as an elliptical region in the Cb (blue-difference chroma) and Cr (red-difference chroma) channels of the YCbCr colour space; a skin-like region is segmented if the colour value lies inside the elliptical region, and candidate regions are verified using template matching [11]. Peer et al. [12] detect faces using only skin segmentation in the YCbCr colour space, and the researchers also generate the skin colour conditions in the RGB colour space. N. Sudha et al. [13] proposed an algorithm based on principal component analysis (PCA), which corresponds to the eigenvectors of the data covariance matrix arranged in descending order of eigenvalues.
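As a rough sketch of the PCA step described above, the fragment below computes the eigenvectors of the data covariance matrix and sorts them in descending order of eigenvalues; the random matrix stands in for a set of vectorized training face images, and the number of retained eigenvectors is an arbitrary choice.

% PCA (eigenface-style) sketch on placeholder data.
X  = rand(1024, 20);                  % 20 vectorized 32x32 training faces (placeholder data)
mu = mean(X, 2);                      % mean face
Xc = bsxfun(@minus, X, mu);           % centre the data
C  = (Xc * Xc') / size(Xc, 2);        % data covariance matrix
[V, D]   = eig(C);                    % eigenvectors and eigenvalues
[~, idx] = sort(diag(D), 'descend');  % arrange in descending order of eigenvalues
V = V(:, idx(1:10));                  % keep the 10 leading eigenvectors (assumption)
features = V' * Xc;                   % project the faces onto the principal subspace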
10. PROPOSED WORK:
A. Problem Statement:
A lot of work has been performed in the field of face recognition, but this work is somewhat limited, being mainly focused on single-face recognition. In an image, however, there can be more than one person, so there is a need for a system that will identify multiple faces in an image. Current systems also have difficulties locating facial features due to illumination, noise and occlusion. Hence we are trying to build a system which will overcome the difficulties mentioned and will identify an authorised person from a group of persons for door access.
B. Scope:
The proposed system mainly focuses on detecting and recognizing multiple persons in an image. Work has been performed on single-face detection, but our challenge lies in recognizing an authorized person in a crowded environment. After successfully identifying the person, the further application is to provide the user with access into the house, which is a generalized application. If an unauthorized person is detected, an alarm is raised.
C. Objective of Work:
Our project mainly deals with a face recognition system. The proposed objectives of this project are:
i) To successfully detect multiple faces in an image.
ii) To remove the difficulties in face detection and recognition due to illumination, noise and occlusion.
iii) To provide access to the owner if the recognition is successfully authenticated.
iv) To raise an alarm if an intruder tries to enter the house.
D. Methodology
In a face recognition system, the most important point to be considered is the stage-by-stage degradation of the image through various filtering and conversion techniques, which helps in recognizing the person against the images stored in the database. The design flow for this image degradation is shown in Fig. 3. The first step is Load/Capture image: the image is captured and fed to the system.
The next step is obtaining a blurred image. In image terms, blurring means that each pixel in the source image gets spread over and mixed into the surrounding pixels. Steps to blur the image (a minimal sketch follows the list):
– Traverse through the entire input image array.
– Read each individual pixel colour value (24-bit).
– Split the colour value into individual 8-bit R, G and B values.
– Calculate the RGB average of the surrounding pixels and assign this average value to the pixel.
– Repeat the above step for each pixel.
– Store the new value at the same location in the output image.
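A minimal MATLAB sketch of this averaging blur is shown below; it uses a 3x3 mean filter applied to each colour channel, which corresponds to the per-pixel averaging described in the steps, and the file name is a placeholder.

% 3x3 averaging blur applied to each R, G and B channel.
I = im2double(imread('visitors.jpg'));   % placeholder input image
h = fspecial('average', [3 3]);          % 3x3 averaging kernel
blurred = imfilter(I, h, 'replicate');   % each output pixel is the mean of its neighbourhood
imshow(blurred);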
The image is then represented in the RGB colour model. The RGB colour model approximates the way human vision encodes images by using three primary colour channels: red, green, and blue. The RGB colour model is additive, which means the red, green, and blue channels combine to create all the colours available in the system.
Then skin colour thresholding is done. Thresholding is the simplest method of image segmentation. It is usually used for feature extraction, where the required features of the image are converted to white and everything else to black (or vice versa).
Fig 3. Design flow: Load/Store image → Blur → RGB to LAB/HSV colour model conversion → Skin colour thresholding → Blob detection → Face localization → Cropping → Grayscale image → Edge detection → Feature extraction → Feature comparison & output (against registered faces in the database) → Post processing.
Steps for thresholding (a minimal sketch follows the list):
– Traverse through the entire input image array.
– Read each individual pixel colour value (24-bit) and convert it into grayscale.
– Calculate the binary output pixel value (black or white) based on the current threshold.
– Store the new value at the same location in the output image.
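A minimal MATLAB sketch of these steps is given below; the global threshold is chosen with Otsu's method (graythresh) as an assumption, since the synopsis does not fix a threshold value, and the file name is a placeholder.

% Grayscale conversion followed by binary thresholding.
I    = imread('visitors.jpg');     % placeholder input image
gray = rgb2gray(I);                % 24-bit colour pixel values converted to grayscale
t    = graythresh(gray);           % global threshold in [0,1] (Otsu's method, an assumption)
bw   = im2bw(gray, t);             % binary (black/white) output image
imshow(bw);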
In the next step, blob detection is done. In the field of computer vision, blob detection refers to mathematical methods aimed at detecting regions in a digital image that differ in properties, such as brightness or colour, from the areas surrounding those regions.
The next step is face localization. Here the various faces within the image are localized, i.e. targeted for further identification. Then, with cropping, the faces within the processed image are cut out so as to discard the unnecessary parts of the image.
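The fragment below is a rough MATLAB sketch combining blob detection, face localization and cropping; the binary mask is recomputed with a simple global threshold as a stand-in for the skin mask, and the minimum blob area used to discard small non-face regions is an illustrative assumption.

% Blob detection, localization and cropping of candidate face regions.
I      = imread('visitors.jpg');                 % placeholder input image
gray   = rgb2gray(I);
bw     = im2bw(gray, graythresh(gray));          % binary mask (stand-in for the skin mask)
labels = bwlabel(bw);                            % label connected components (blobs)
stats  = regionprops(labels, 'BoundingBox', 'Area');
faces  = {};
for k = 1:numel(stats)
    if stats(k).Area > 500                       % keep only sufficiently large blobs (assumption)
        faces{end+1} = imcrop(I, stats(k).BoundingBox);  %#ok<AGROW> crop the candidate face
    end
end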
After face localization, a grayscale digital image is obtained. A grayscale image is an image in which the value of each pixel is a single sample, that is, it carries only intensity information.
Now Edge detection is the next stage. It is a fundamental tool in image processing,
machine vision and computer vision, particularly in the areas of feature detection and feature
extraction, which aim at identifying points in a digital image at which the image brightness
changes sharply or, more formally, has discontinuities. The result of applying an edge
detector to an image may lead to a set of connected curves that indicate the boundaries of
objects, the boundaries of surface markings as well as curves that correspond to
discontinuities in surface orientation.
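A short MATLAB sketch of this stage is given below; the Canny detector is one of several available and its use here is an assumption, as the synopsis does not name a specific edge operator, and the input file is a placeholder.

% Edge detection on a cropped grayscale face.
faceGray  = rgb2gray(imread('cropped_face.jpg'));  % placeholder cropped face image
faceEdges = edge(faceGray, 'canny');               % binary map of sharp brightness changes
imshow(faceEdges);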
The next step, feature extraction, involves obtaining relevant facial features from the data. These features could be certain face regions, variations, angles or measures, which may or may not be humanly meaningful (e.g. eye spacing). This phase has other applications, such as facial feature tracking or emotion recognition.
Then, in the feature comparison and output stage, the extracted features are matched against those saved in the database. If they match, recognition is successful; otherwise it fails. In post-processing, if recognition is successful the microcontroller opens the door; otherwise the alarm is turned ON, as shown in Fig. 4.
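The fragment below is a hedged MATLAB sketch of this matching and post-processing logic; the database of registered feature vectors, the query vector and the match threshold are placeholders, the distance measure is a simple Euclidean distance (an assumption), and the actual I/O to the microcontroller, DC motor and alarm is not shown.

% Feature comparison against registered faces, followed by post-processing.
db    = rand(10, 5);               % 5 registered persons, 10-D feature vectors (placeholder)
query = rand(10, 1);               % feature vector of the current visitor (placeholder)
matchThreshold = 0.5;              % illustrative threshold, not taken from the synopsis
d = sqrt(sum(bsxfun(@minus, db, query).^2, 1));   % Euclidean distance to each registered face
[dmin, who] = min(d);
if dmin < matchThreshold
    fprintf('Access granted: registered person %d recognised\n', who);   % door opened via DC motor
else
    fprintf('Unauthorised visitor detected: alarm ON\n');                % alarm raised
end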
Fig 4. System block diagram: image capturing camera → personal computer → alarm / DC motor → door access.
11. FACILITIES AVAILABLE AND REQUIREMENTS:
Hardware & Software:
1. PC system with accessories.
2. MATLAB software with appropriate toolbox.
3. Camera.
4. Cable to transmit a signal.
12. PROJECT SCHEDULE:

Month/Year            Description
Aug 2016 - Nov 2016   Literature survey & submission of synopsis.
Dec 2016 - Jan 2017   Study of different face recognition and feature extraction algorithms.
Feb - Mar 2017        Implementation of face recognition and feature extraction algorithm.
April - May 2017      Implementation of embedded hardware for access control.
May - June 2017       Comparing simulation & results.
June - July 2017      Dissertation phase completion.
13. Expected Date of Completion: - 30th July 2017.
14. Approximate Expenditure: - 20,000 INR.
REFERENCES
[1] Ramya Srinivasan, Abhishek Nagar, Anshuman Tewari, Donato Mitrani and Amit Roy-Chowdhury, 2014, "Face recognition based on sigma sets of image features", Proc. 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Samsung Research America, Dallas.
[2] M.-H. Yang, D. Kriegman and N. Ahuja, 2002, "Detecting Faces in Images: A Survey", IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(1):34-58, January 2002.
[3] K. Seo, W. Kim, C. Oh and J. Lee, 2002, "Face Detection And Facial Feature Extraction Using Colour Snake", Proc. ISIE 2002 - 2002 IEEE International Symposium on Industrial Electronics, pp. 457-462, L'Aquila, Italy.
[4] Dimitrios Bouzas, Nikolaos Arvanitopoulos and Anastasios Tefas, 2014, "Graph Embedded Nonparametric Mutual Information For Supervised Dimensionality Reduction", IEEE Transactions on Neural Networks and Learning Systems.
[5] Mohamed Dahmane and Jean Meunier, 2014, "Prototype-Based Modeling for Facial Expression Analysis", IEEE Transactions on Multimedia, vol. 16, no. 6, October 2014.
[6] Jiansheng Chen, Yu Deng, Gaocheng Bai and Guangda Su, 2015, "Face Image Quality Assessment Based on Learning to Rank", IEEE Signal Processing Letters, vol. 22, no. 1, January 2015.
[7] Luis Fernando Martins Carlos Junior and Joao Luis Garcia Rosa, 2014, "Face Recognition through a Chaotic Neural Network Model", Proc. International Joint Conference on Neural Networks (IJCNN), July 2014, Beijing, China.
[8] Peng Wang and Qiang Ji, 2006, "Performance Modeling and Prediction of Face Recognition Systems", Proc. IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06).
[9] D. Huang, T. Lin, C. Ho and W. Hu, 2010, "Face Detection Based On Feature Analysis And Edge Detection Against Skin Colour-like Backgrounds", Proc. 2010 Fourth International Conference on Genetic and Evolutionary Computing, pp. 687-690, Shenzhen, China.
[10] J. Qiang-rong and L. Hua-lan, 2010, "Robust Human Face Detection in Complicated Colour Images", Proc. 2010 2nd IEEE International Conference on Information Management and Engineering (ICIME), pp. 218-221, Chengdu, China.
[11] C. Aiping, P. Lian, T. Yaobin and N. Ning, 2010, "Face Detection Technology Based On Skin Colour Segmentation And Template Matching", Proc. 2010 Second International Workshop on Education Technology and Computer Science, pp. 708-711, Wuhan, China.
[12] P. Peer, J. Kovac and F. Solina, 2003, "Robust Human Face Detection in Complicated Colour Images", Proc. 2010 The 2nd IEEE International Conference on Information Management and Engineering (ICIME), pp. 218-221, Chengdu, China.
[13] N. Sudha, A. R. Mohan and Pramod K. Meher, 2011, "A Self-Configurable Systolic Architecture for Face Recognition System Based on Principal Component Neural Network", IEEE Transactions on Circuits and Systems for Video Technology, vol. 21, no. 8, August 2011.
Mr. Mulla Vasim Bashir
(Student)

Dr. Patil Vikram S.
(Project Guide)

Head,
Electronics Engineering Dept.,
Karmaveer Bhaurao Patil College of Engineering, Satara.

Principal,
Karmaveer Bhaurao Patil College of Engineering, Satara.