Progress has been made! My partner and I used the Wii remote to build a dataset of our proposed gestures — we each recorded ourselves performing our set of motions about ten times each. I then fed the dataset into my HMM script and generated confusion matrices to see how well the gestures and classifier performed. The results revealed some potential issues: some gestures may be too similar to one another, the dataset may be too small, or the classifier may not be accurate enough. Nonetheless, this is good feedback to build on and will keep me busy in my last few weeks!
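For readers unfamiliar with confusion matrices: each row corresponds to the true gesture and each column to the classifier's prediction, so a strong diagonal means accurate classification, while large off-diagonal entries point to gesture pairs that get confused with each other. A minimal sketch of building one (the gesture labels here are hypothetical, not our actual dataset):

```python
import numpy as np

def confusion_matrix(true_labels, pred_labels, n_classes):
    """Rows = true gesture class, columns = predicted gesture class."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(true_labels, pred_labels):
        cm[t, p] += 1
    return cm

# Hypothetical results for 3 gesture classes (e.g. swing, flick, circle)
true = [0, 0, 1, 1, 2, 2]
pred = [0, 1, 1, 1, 2, 0]
cm = confusion_matrix(true, pred, 3)
print(cm)
# A high count at cm[0, 1] would suggest gesture 0 is often
# mistaken for gesture 1 — i.e. the two motions may be too similar.
```

In practice a library routine (e.g. scikit-learn's `confusion_matrix`) does the same thing; the off-diagonal hot spots are exactly what flagged the "gestures too similar" issue mentioned above.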
My partner and the other REU students working in my lab for the summer all leave this weekend, so I’ll be continuing the project solo for my last few weeks.