Smile detection in face images captured in unconstrained real-world scenarios is an interesting problem with many potential applications. This paper presents an efficient approach to smile detection in which the intensity differences between pixels in grayscale face images are used as features. We adopt AdaBoost to choose and combine weak classifiers based on these intensity differences into a strong classifier. Experiments show that our approach achieves accuracy similar to the state-of-the-art method while being significantly faster: it provides 85% accuracy by examining 20 pairs of pixels and 88% accuracy with 100 pairs of pixels, and it matches the accuracy of the Gabor-feature-based support vector machine using as few as 350 pairs of pixels.
The machine analysis of facial expressions has been an active research topic for the last two decades. Most existing work has focused on analyzing a set of prototypic emotional facial expressions, using data collected by asking subjects to pose these expressions deliberately. Recognizing expressions in unconstrained settings is a challenging problem: it is very difficult to capture the complex decision boundary among spontaneous expressions, and only very limited data were used in earlier studies.
In this paper, we focus on smile detection in face images captured in real-world scenarios.
We present an efficient approach to smile detection, in which the intensity differences between pixels in the grayscale face images are used as simple features.
AdaBoost is then adopted to choose and combine weak classifiers based on pixel differences into a strong classifier for smile detection.
Experimental results show that our approach achieves similar accuracy to the state-of-the-art method but is significantly faster.
Our approach provides 85% accuracy by examining 20 pairs of pixels and 88% accuracy with 100 pairs of pixels.
Smile detection has many applications in practice, such as interactive systems (e.g., gaming), product rating, distance learning systems, video conferencing, and patient monitoring. For example, statistics on audience smiles can indicate how much the audience enjoys the multimedia content.
Smile detection has also received much interest for commercial applications. For example, some digital cameras provide a “smile shutter” that shoots automatically when a smiling face is detected.
The proposed system consists of the following modules:
• Extracting effective features
• Feature-point detection
• Geometric features exploitation
• Boosting pixel intensity differences
• Smile detection
Extracting effective features
In this module, the user first provides an input image. The image is then checked for facial features: if it does not contain human facial features, nothing is detected; if it does, the features are detected and passed on to the next module.
Feature-point detection
In this module, the feature points are detected automatically. For face detection, we first convert the RGB image to a binary image: for each pixel we compute the average of its R, G, and B values; if the average is below 110, we replace the pixel with black, otherwise with white.
By this method, we obtain a binary image from the RGB image. We then look for the forehead in the binary image. We start scanning from the middle of the image, looking for a run of continuous white pixels after a run of continuous black pixels, and we track the maximum width of the white run by searching vertically on both the left and the right side. If the current width falls below half of the previous maximum width, we stop the scan, because this situation arises when the scan reaches the eyebrows. We then cut the face starting from the top of the forehead, with a height of 1.5 times its width; in the figure, X is equal to the maximum width of the forehead. The resulting image contains only the eyes, nose, and lips, and we crop the RGB image according to the binary image.
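The thresholding step above can be sketched in a few lines of C#. This is a minimal illustration only: the class and method names are our own, the fixed threshold of 110 follows the description above, and the per-pixel GetPixel/SetPixel calls from System.Drawing are chosen for clarity rather than speed.

    using System.Drawing;

    static class Binarizer
    {
        // Convert an RGB image to a binary (black-and-white) image by
        // thresholding the average of the R, G, and B channels at 110,
        // as described above.
        public static Bitmap ToBinary(Bitmap rgb)
        {
            const int threshold = 110;
            Bitmap binary = new Bitmap(rgb.Width, rgb.Height);
            for (int y = 0; y < rgb.Height; y++)
            {
                for (int x = 0; x < rgb.Width; x++)
                {
                    Color c = rgb.GetPixel(x, y);
                    int average = (c.R + c.G + c.B) / 3;
                    // Dark pixels become black, all others white.
                    binary.SetPixel(x, y, average < threshold ? Color.Black : Color.White);
                }
            }
            return binary;
        }
    }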
Geometric features exploitation module
For lip detection, we determine a lip box and assume that the lips must lie inside it. First, we determine the distance between the forehead and the eyes.
We then add this distance to the lower edge of the eyes to obtain the upper edge of the box that will contain the lips. The left edge of the box is at the 1/4 position of the left-eye box and the right edge at the 3/4 position of the right-eye box, and the lower edge of the box is the lower end of the face image. This box therefore contains only the lips and possibly some part of the nose. We then crop the RGB image according to this box; a sketch of the rectangle arithmetic follows.
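The lip-box geometry can be written directly as rectangle arithmetic. The following C# sketch assumes the eye boxes and the forehead-to-eye distance have already been found; the names LipBoxFinder and GetLipBox are illustrative and not part of the original system.

    using System;
    using System.Drawing;

    static class LipBoxFinder
    {
        // Compute the lip box from the detected eye boxes, following the
        // geometric rules described above. All coordinates are in the
        // cropped face image; foreheadToEyes is the forehead-to-eye distance.
        public static Rectangle GetLipBox(Rectangle leftEye, Rectangle rightEye,
                                          int foreheadToEyes, int faceHeight)
        {
            // Upper edge: lower edge of the eyes plus the forehead-to-eye distance.
            int top = Math.Max(leftEye.Bottom, rightEye.Bottom) + foreheadToEyes;

            // Left edge at the 1/4 position of the left-eye box,
            // right edge at the 3/4 position of the right-eye box.
            int left = leftEye.Left + leftEye.Width / 4;
            int right = rightEye.Left + (3 * rightEye.Width) / 4;

            // Lower edge: bottom of the face image.
            return new Rectangle(left, top, right - left, faceHeight - top);
        }
    }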
Boosting Pixel Intensity Differences
After extracting intensity-difference features from face images preprocessed by histogram equalization (HE), we run AdaBoost to choose the discriminative features and combine the selected weak classifiers into a strong classifier. With the top 500 selected features, the trained AdaBoost classifier achieves an accuracy of 89.7%.
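To make the decision rule concrete, the C# sketch below shows one plausible form of a pixel-difference weak classifier and the boosted strong classifier built from it. The class and field names (PixelDiffWeak, Alpha, and so on) are our own illustrative choices, and the training step that selects the pixel pairs, thresholds, and weights is omitted.

    // A weak classifier votes on a single pair of pixel locations: it
    // compares the intensity difference I(p1) - I(p2) against a learned
    // threshold.
    class PixelDiffWeak
    {
        public int X1, Y1, X2, Y2;   // the pixel pair
        public int Threshold;        // learned decision threshold
        public int Polarity;         // +1 or -1, learned direction
        public double Alpha;         // weight assigned by AdaBoost

        public int Predict(byte[,] face)
        {
            int diff = face[Y1, X1] - face[Y2, X2];
            return Polarity * diff >= Polarity * Threshold ? 1 : -1;
        }
    }

    static class StrongClassifier
    {
        // The strong classifier is the sign of the alpha-weighted sum of
        // the selected weak classifiers' votes: smile if non-negative.
        public static bool IsSmile(byte[,] face, PixelDiffWeak[] selected)
        {
            double score = 0.0;
            foreach (PixelDiffWeak h in selected)
                score += h.Alpha * h.Predict(face);
            return score >= 0.0;
        }
    }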
Smile Detection
That is, the weight of each pixel is accumulated, and the grayscale intensity in the pictures in Fig. 4 is proportional to the number of times that pixel is used. It is evident that the involved pixels are distributed mainly in the region around the mouth, with a few coming from the eye areas. This is reasonable, considering that the major difference between smile and non-smile faces lies in the mouth and lips. To validate this further, we derive the “mean faces” of smile and non-smile by averaging all smile faces and all non-smile faces in the data set, shown in Fig. 5. Visually, the main difference in the mean faces lies in the mouth region and the eyes: smile faces have an open mouth, whereas non-smile faces have the mouth closed.
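The mean faces are obtained by simple pixel-wise averaging. A minimal C# sketch, assuming all faces are equally sized grayscale arrays (FaceStatistics and MeanFace are illustrative names), could look as follows:

    using System.Collections.Generic;

    static class FaceStatistics
    {
        // Average a list of equally sized grayscale faces pixel by pixel,
        // yielding a "mean face" like those in Fig. 5.
        public static byte[,] MeanFace(List<byte[,]> faces)
        {
            int height = faces[0].GetLength(0);
            int width = faces[0].GetLength(1);
            long[,] sum = new long[height, width];

            foreach (byte[,] face in faces)
                for (int y = 0; y < height; y++)
                    for (int x = 0; x < width; x++)
                        sum[y, x] += face[y, x];

            byte[,] mean = new byte[height, width];
            for (int y = 0; y < height; y++)
                for (int x = 0; x < width; x++)
                    mean[y, x] = (byte)(sum[y, x] / faces.Count);
            return mean;
        }
    }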
• SYSTEM : Pentium IV 2.4 GHz
• HARD DISK : 40 GB
• FLOPPY DRIVE : 1.44 MB
• MONITOR : 15" VGA colour
• MOUSE : Logitech
• RAM : 256 MB
• KEYBOARD : 110 keys enhanced
• OPERATING SYSTEM : Windows XP Professional
• FRONT END : Microsoft Visual Studio .NET 2008
• CODING LANGUAGE : C# .NET
Caifeng Shan, "Smile Detection by Boosting Pixel Differences," IEEE Transactions on Image Processing, vol. 21, no. 1, January 2012.