This week, I worked on building a classifier for human gestures. I experimented with both SVM and HMM models on gesture data collected in a previous experiment. The idea is that a person performs a gesture (recorded with a Wii remote or another device that provides accelerometer data), and the classifier identifies it as one of several trained motions; the robot can then respond accordingly. Another possible way to control robot motion is to map a human's motion directly onto the robot. These two methods will most likely be compared in a user study later in the summer to see which is more effective. Hopefully my other team members will finish building a good HRI device soon so that we can collect our own data for me to tinker with.
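To give a feel for the SVM side of this, here is a minimal sketch of how accelerometer windows might be turned into feature vectors and classified with scikit-learn. The feature choice (per-axis mean and standard deviation) and the synthetic "gesture" generator are placeholder assumptions for illustration, not the actual pipeline or data from the experiment.

```python
# Hypothetical sketch: SVM classification of accelerometer gesture windows.
# Features and synthetic data are assumptions, not the real experiment's setup.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def extract_features(window):
    # window: (n_samples, 3) array of (x, y, z) accelerometer readings.
    # Summarize each axis with its mean and standard deviation.
    return np.concatenate([window.mean(axis=0), window.std(axis=0)])

def synth_gesture(offset, n=50):
    # Stand-in for a recorded gesture: noisy readings around an offset.
    return rng.normal(loc=offset, scale=0.1, size=(n, 3))

# Build a small training set of two fake gesture classes (0 and 1).
offsets = [0.0, 1.0] * 20
X = np.array([extract_features(synth_gesture(o)) for o in offsets])
y = np.array([0, 1] * 20)

# Train an RBF-kernel SVM and classify a new window.
clf = SVC(kernel="rbf").fit(X, y)
pred = clf.predict([extract_features(synth_gesture(1.0))])
```

In practice the features would likely be richer (e.g., frequency-domain or whole-trajectory descriptors), and the HMM approach would instead model each gesture as a sequence rather than a single summary vector.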