Emotions are also a form of non-verbal communication that we use to reflect our physiological and mental state. Both of these states are something we express continuously, so even when we do not deliberately express our emotions they are present and can be measured. Being able to read these states from a person can help to predict their emotions and, in turn, their behaviour [1].

In order to obtain these states we use a model based on the one proposed in [1], where it is possible to infer a person's emotions from images of their facial expressions and head movements. "The resulting model is a multi-level probabilistic graphical model that represents the facial events in a raw video stream at different levels of spatial and temporal abstraction. Dynamic Bayesian Networks model observable head and facial displays, and corresponding hidden mental states over time. The automated mind-reading system implements the model by combining top-down predictions of mental state models with bottom-up vision-based processing of the face." [1] With this system it is possible to infer six emotions beyond the basic ones: agreeing, concentrating, disagreeing, interested, thinking and unsure.
Hardware

In terms of hardware, the only thing needed to run this system is a webcam able to capture at least 30 frames per second. This rate ensures video images of sufficient quality to feed the system and obtain better inferences about the emotions.
Software

Based on the previous work in [1] and the work developed by the MIT Media Lab [2], our team developed a console application for Windows, written in Java, which captures video images of the user's face and, depending on the runtime parameters, reports either the probability of each of the six emotions during a session, or only the emotion with the highest probability (according to a threshold) during the session.

This application was designed to be as light as possible in terms of resources (CPU and RAM).
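As an illustrative sketch (not the application's actual code), the highest-probability reporting mode described above can be expressed as a simple arg-max over the six emotion probabilities, gated by a threshold; the threshold value used here is an assumption for illustration.

```java
public class HighestEmotion {
    // The six emotions the system infers, in the order used below.
    static final String[] EMOTIONS = {"agreeing", "concentrating",
            "disagreeing", "interested", "thinking", "unsure"};
    // Hypothetical threshold; the real application's value is not documented here.
    static final double THRESHOLD = 0.5;

    // Return the emotion with the highest probability for one frame,
    // or "none" if the best probability does not clear the threshold.
    static String highest(double[] probs) {
        int best = 0;
        for (int i = 1; i < probs.length; i++) {
            if (probs[i] > probs[best]) best = i;
        }
        return probs[best] >= THRESHOLD ? EMOTIONS[best] : "none";
    }

    public static void main(String[] args) {
        double[] frame = {0.12, 0.71, 0.02, 0.05, 0.06, 0.04};
        System.out.println(highest(frame)); // concentrating
    }
}
```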
Installation
  1. Download the application.
  2. Unzip the file, and copy all the files into a new folder called "faceReader".
  3. And you are done!
Using the system
  1. In order to use the application you must have, as mentioned in the hardware section above, a webcam connected to or embedded in your computer. It is important to note that the camera cannot be used by any other application while this application is running.
  2. Open a Command window.
  3. Change to the "faceReader" folder.
  4. Type MindWrapper <output_file_name> [all | higher].
    Where output_file_name is the name of the file where all data will be stored. This file will be saved in the "faceReader" folder. If the second parameter is not given, the default value is "all", which means that the probabilities of all six emotions will be stored for the whole session. If the "higher" option is chosen, the output file will only contain the emotion with the highest probability at each time point.
  5. To stop recording data just press Ctrl + C.
  6. The output file will be in the same folder as the application.
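The launch step above could also be scripted from Java with ProcessBuilder, for example to start a recording session from another program. This is only a sketch: the executable name and folder come from the instructions above, while the session file name is made up.

```java
import java.io.File;
import java.util.List;

public class RunFaceReader {
    // Build the command from step 4: MindWrapper <output_file_name> [all | higher].
    static List<String> buildCommand(String outputFile, String mode) {
        return List.of("MindWrapper", outputFile, mode);
    }

    public static void main(String[] args) {
        // "session1.csv" is a made-up output file name; "higher" selects
        // highest-probability-only reporting.
        List<String> cmd = buildCommand("session1.csv", "higher");
        // Run from the "faceReader" folder created during installation.
        ProcessBuilder pb = new ProcessBuilder(cmd)
                .directory(new File("faceReader"))
                .inheritIO();
        System.out.println("Launching: " + String.join(" ", cmd));
        // pb.start() would launch the recorder; stop it with Ctrl + C.
    }
}
```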
Description of the output file

The output file can be opened as a CSV file, so it can be read as a plain text file or with a spreadsheet application such as Excel. If you run the program just with the SCS, your file will have the following fields:

Timestamp - The timestamp (date and time) of the computer running the system, which can be used to synchronize the data with other inputs. Its value combines the date and time on the computer in the format "yymmddhhmmssSSS" (y - year, m - month, d - day, h - hour, m - minutes, s - seconds, S - milliseconds).

Agreement, Concentrating, Disagreement, Interested, Thinking, Unsure - Each of these fields gives the probability, between 0 and 1, of the corresponding emotion being present on the user at a particular time (frame). A value of -1 means it was not possible to infer an emotion, which happens when the user's face is out of the camera focus.
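A minimal sketch of reading one line of the output file, under the assumptions above: the timestamp comes first in the "yymmddhhmmssSSS" format, followed by the six probability columns, with -1 marking frames where no inference was possible. The sample line is made-up illustrative data, and the column order is assumed to match the field list.

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public class OutputParser {
    // "yymmddhhmmssSSS" from the field table, written as a java.time pattern.
    static final DateTimeFormatter TS =
            DateTimeFormatter.ofPattern("yyMMddHHmmssSSS");

    public static void main(String[] args) {
        // Made-up sample line: timestamp, then the six emotion probabilities.
        String line = "101116143005123,0.82,0.10,-1,0.05,0.31,0.12";
        String[] f = line.split(",");

        LocalDateTime when = LocalDateTime.parse(f[0], TS);
        System.out.println("Frame recorded at: " + when);

        String[] names = {"Agreement", "Concentrating", "Disagreement",
                          "Interested", "Thinking", "Unsure"};
        for (int i = 0; i < names.length; i++) {
            double p = Double.parseDouble(f[i + 1]);
            if (p < 0) {
                // -1: the user's face was out of the camera focus for this frame.
                System.out.println(names[i] + ": no inference");
            } else {
                System.out.println(names[i] + ": " + p);
            }
        }
    }
}
```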
References
[1] el Kaliouby, R. A. Mind-reading machines: automated inference of complex mental states. 2005. Retrieved November 16, 2010, from University of Cambridge Computer Laboratory Technical Reports: http://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-636.html