My DREU project is in the area of socially interactive robotics and human communication. More specifically, I am working on facial tracking using active appearance models (AAMs).
The purpose of this research is to create robot faces with human-like facial expressions. This would be very useful in medical mannequins, as it would portray a more accurate representation of working with a human patient. Current medical mannequins lack facial expressions, yet in many procedures where the patient is conscious, their face is a valuable source of information for gauging their level of pain or comfort. By providing the next generation of mannequins with human-like facial expressions, we can provide a more realistic training environment and thus better trained practitioners.
To provide that more realistic representation, we need to understand how humans express themselves through their faces. For that purpose we are using AAM facial tracking, which provides a 3-D model of the human face and allows for a more accurate portrayal than geometric facial tracking. We then use the tracking output to map the necessary facial expressions onto the robot's face, which requires translating the 3-D AAM onto the robot. The problem is that the AAM is a 3-D representation, while the robot's face is based on 2-D movements with fewer degrees of freedom (DOF) than a human face.
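To make the dimensionality problem concrete, here is a minimal sketch of reducing a 3-D AAM landmark set to the 2-D, lower-DOF points a robot face could actually drive. The function names, landmark values, and controllable indices are all hypothetical, not part of the actual project code.

```python
def project_to_2d(landmarks_3d):
    """Orthographic projection: drop the depth (z) coordinate of each landmark."""
    return [(x, y) for (x, y, z) in landmarks_3d]

def reduce_dof(points_2d, controllable_indices):
    """Keep only the landmarks the robot has actuators for (lower DOF)."""
    return [points_2d[i] for i in controllable_indices]

# Example: three tracked AAM landmarks; the robot can only drive two of them.
landmarks = [(0.1, 0.2, 0.05), (0.4, 0.5, 0.02), (0.7, 0.8, 0.09)]
points = project_to_2d(landmarks)
robot_points = reduce_dof(points, [0, 2])
```

A real mapping would need to handle head pose and scale before projecting, but the sketch shows where information is lost in each step: depth first, then the landmarks the hardware cannot reproduce.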
At the moment we know of two possible solutions to this problem. The first is to translate the AAM representation into a geometric one and map those points directly to the servos and motors in the robot's face. The second is to first simplify the AAM into a set of action units (AUs) and represent each AU with pre-defined movements in the robot's face. For details on how the project turned out, read the Final Project Report.
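The second approach can be sketched as a simple lookup from detected AUs to pre-defined servo movements. The AU codes below follow FACS numbering, but the servo names and angles are invented for illustration only.

```python
# Hypothetical table: each action unit maps to a pre-defined set of servo positions.
AU_TO_SERVOS = {
    "AU1": {"inner_brow_servo": 30},    # inner brow raiser
    "AU4": {"inner_brow_servo": -20},   # brow lowerer
    "AU12": {"lip_corner_left": 45, "lip_corner_right": 45},  # lip corner puller (smile)
}

def expression_to_servo_commands(active_aus):
    """Merge the pre-defined movements for every active AU into one command set."""
    commands = {}
    for au in active_aus:
        commands.update(AU_TO_SERVOS.get(au, {}))
    return commands

# A smile with raised inner brows:
cmds = expression_to_servo_commands(["AU1", "AU12"])
```

The appeal of this route is that the hard 3-D-to-2-D translation happens once, offline, when each AU's movement is designed, rather than on every tracked frame.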