A formal definition of our problem and initial conditions

The problem

Our project examines a set of four images from each subject. The program is designed to detect four emotions: happy, angry, sad, and surprised. It should be able to examine the four images, distinguish between them, and correctly classify each image with its corresponding emotion to a satisfactory degree. Although initial implementations rely on user-defined cropping of the relevant facial features, we will also need to design a way to automate this process accurately.

Initial concerns

Naturally, we want two images of the same person to differ only in emotion, not in lighting, position, intensity, and so on. Thus, all pictures were taken in the same environment with the same digital camera, each framing only the subject's face as he or she looked straight ahead. The images were then converted to grayscale and reduced to 250 by 333 pixels, leaving every image in a uniform format and taking care of normalization.
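
The normalization step above can be illustrated with a short Python sketch. Pillow is used here only as a convenient stand-in, and the file names are placeholders; the project itself does not specify a particular tool.

    # Sketch of the normalization step: grayscale conversion and resizing.
    # Pillow and the file names are illustrative assumptions.
    from PIL import Image

    def normalize(path, size=(250, 333)):
        """Convert an image to 8-bit grayscale and resize it to 250 by 333 pixels."""
        img = Image.open(path).convert("L")   # "L" mode = single-channel grayscale
        return img.resize(size)               # size = (width, height)

    for name in ["happy.jpg", "angry.jpg", "sad.jpg", "surprised.jpg"]:
        normalize(name).save("norm_" + name)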

A second, more important issue was which portions of the face to examine. The brain tends to look at several regions: the eyes, the mouth, the cheeks, and the forehead. However, the differences between emotions are very subtle in all of these regions except the mouth, which tends to be the most expressive. Thus, we decided to focus exclusively on the mouth for our project and attempt to obtain accurate results using only that portion of the face.
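
Since early implementations rely on user-defined cropping, the sketch below shows how a fixed mouth region might be cut from a normalized image. The crop-box coordinates are assumed values chosen only to illustrate the idea; the coordinates actually selected by the user (or by an automated method) would differ.

    # Sketch of extracting the mouth region from a normalized 250 x 333 image.
    # The crop-box coordinates are assumptions for illustration only.
    from PIL import Image

    MOUTH_BOX = (60, 230, 190, 310)   # (left, upper, right, lower) in pixels

    def crop_mouth(path, box=MOUTH_BOX):
        """Return the mouth portion of a normalized grayscale face image."""
        return Image.open(path).crop(box)

    crop_mouth("norm_happy.jpg").save("mouth_happy.jpg")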

An example of the four images to be processed (happy, surprised, angry, sad)

Source:  OpenStax, Ece 301 projects fall 2003. OpenStax CNX. Jan 22, 2004 Download for free at http://cnx.org/content/col10223/1.5