Main Objective


This summer, my project will be to build a gestural interface with a Microsoft Kinect, or similar inexpensive capture device, to control an on-screen avatar. This system will have three major components:

(1) Hand tracking

(2) Gesture recognition

(3) Real-time playback of motion clips

The system will recognize five gestures: running, jumping, marching, kicking, and dribbling. Users perform a gesture with their hands in front of the sensor; when the system recognizes the gesture, an on-screen avatar performs the corresponding motion.
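The recognize-then-play-back loop described above can be sketched as a simple dispatch from recognized gesture to motion clip. The function names (`recognize_gesture`, `play_clip`) and clip filenames below are hypothetical placeholders, not the project's actual API:

```python
# Map each of the five target gestures to a motion clip to play back.
# Clip filenames are illustrative placeholders.
GESTURE_CLIPS = {
    "running": "run_cycle.bvh",
    "jumping": "jump.bvh",
    "marching": "march.bvh",
    "kicking": "kick.bvh",
    "dribbling": "dribble.bvh",
}

def on_frame(hand_positions, recognize_gesture, play_clip):
    """Per captured frame: try to recognize a gesture from the tracked
    hand positions and, on a match, trigger the corresponding clip.
    `recognize_gesture` and `play_clip` are caller-supplied callbacks."""
    gesture = recognize_gesture(hand_positions)
    if gesture in GESTURE_CLIPS:
        play_clip(GESTURE_CLIPS[gesture])
    return gesture
```

With stub callbacks, a recognized "kicking" frame would trigger playback of the kick clip, while an unrecognized frame triggers nothing.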



Final Paper


Abstract: This project is a gestural interface for searching a motion capture database. The system is designed to handle gestures from a wide range of users, using recognition algorithms trained on gestures collected in a user study. Users act out gestures in front of a natural interaction device, triggering different behaviors of an on-screen character. The system captures movement inexpensively with a Microsoft Kinect or similar sensor. Blob tracking is used in conjunction with hidden Markov models (HMMs) to recognize each motion, which is then retrieved from the motion capture database and performed by an on-screen avatar.
