DREU

Clemson University, Summer 2017

Erin Walter

About Erin

Erin Walter is currently an MA candidate in Interdisciplinary Computer Science at Mills College in Oakland, CA. She is a San Jose, CA native and has traveled to Guatemala and Europe. She has a background in video production and advertising, and is interested in machine learning, human-computer interaction, full stack web development, and computer graphics. Her expected graduation is December 2017. Erin is looking forward to enhancing her skills in Python and meeting new people. Pronouns: She/Her/Hers. Email: ewalter at mills.edu

Erin's Mentor

Mentor

Sophie Jörg

Since 2012, Sophie has been an Assistant Professor at Clemson University. Her primary research interests are in computer graphics, especially animation and perception.

Research

Areas of Interest

    Character animation techniques and algorithms. "I am particularly interested in developing new animation techniques using motion capture, statistical properties and learning through databases."
    Perception of lifelike virtual humans. "Humans are capable of successfully distinguishing between human and computer-generated motions, even if the differences are marginal. This skill makes it a challenge to produce convincing animations, especially for very realistic human-like virtual characters. I aim to determine which components of human motion are crucial to lifelike appearance and which errors diminish this realism."
    Hand and finger motions. "Hand and finger motions are omnipresent in daily life. Nevertheless, virtual characters often lack convincing hand and finger motions. Capturing, analyzing, understanding, and automatically generating these subtle movements are topics I address in my research."

"I am also interested in machine learning, game design, neuroscience, and human-computer interaction. I received my PhD in 2011 from the Graphics, Vision and Visualization Group at Trinity College Dublin, Ireland, advised by Carol O'Sullivan. I then spent a year as a postdoctoral researcher at Carnegie Mellon's Graphics Lab working with Alla Safonova and Jessica Hodgins. During my PhD, I also conducted research as a visiting student in the Graphics Lab at Carnegie Mellon University and as an intern at Disney Research, Pittsburgh."

Research Project

We will be working to evaluate the impact of errors in hand and finger motions. Hand and finger motions are extremely important in daily interactions, but they are difficult to capture, so in computer animation they are typically created manually. At Clemson, we are working on developing algorithms to automatically create finger motions. One part of that project is to find out the impact of different types of errors on our perception of hand and finger motions.

Progress

Week 1: June 5 - 9

I'm not in Cali anymore! This week was filled with acclimating to all that is South Carolina. The heat and humidity, of course, are the most notable differences. I arrived and my dorm was very sparse, but luckily everyone has been incredibly helpful and friendly. I am also very thankful for Publix Market, which has all the necessities and kitchen items I needed to get settled. The first week I met with my mentor and fellow researchers. I spent the majority of the week reading literature, mainly related work on distance metrics and on human perception experimentation and evaluation. I did not anticipate that the related work would involve so much machine learning, but I am excited to learn more on the topic and learn how to code and compute distance metric algorithms using Python. I also learned about the motion capture post-processing step, which includes the tedious task of manual marker labeling. My research aims to improve the motion capture post-processing step by adding data-driven synthesis for hand gestures, adding to the efficiency of animations, and improving human perception of finger motions.

As a group we went to an escape room in nearby Greenville to do research for future VR projects, but also because escape rooms are really fun. It was my first time participating in an escape room, and I hope it will not be my last.

Here is a photo of us celebrating our victory after escaping the evil kidnapper:

Week 2: June 12 - 16

This week consisted of getting to know the Vicon Blade system in depth. Vicon Blade is a motion capture software used to both capture data and post process it. I am getting more comfortable with the software, while realizing how important it is to maintain a consistent workflow and understand how the program's algorithms work.

I am excited to continue working on my project as we get closer to a motion capture setup and debugging. I will also work more on solidifying the distance metric features that will be incorporated into our experiment design. Next week the other REU student is coming. She is from California and I am very excited to meet her!

I also spent the weekend planning my upcoming 4th of July trip to Asheville, NC, which is a two hour drive. I am excited to explore the surrounding mountain regions.

Week 3: June 19 - 23

The third week, my goal was to continue debugging the motion capture system and discover problems within the current setup. We are trying to achieve a working setup so that we can conduct a live motion capture session in the coming weeks that will aid our research. I tested each of the camera cables individually in a systematic fashion, which allowed for 4 cameras to work simultaneously. Previously, we had only 1 camera working at a time, then 3, and now we have 4. I am hopeful that a mixture of old and new cameras will allow us to have approximately 10 cameras or more for a motion capture setup.

I also continued working on post processing and labeling the motion capture data. It is taking me much longer than anticipated, because many of the label placements need to be manually switched. Then, gaps where specific joints are occluded or markers disappear need to be filled in, a process called gap filling. The final steps involve a last cleanup pass and exporting an .fbx file from Blade to be imported into Maya for animation.

Dr. Jörg and I met at the end of the week and discussed specifics for how we will calculate our distance metric. Given pairs of similar or less similar motions, the learning algorithm will learn a distance metric function that can predict whether new motions are similar. We plan to collect human-based data that will allow us to assess which finger motions are similar or less similar. It is important to base this on human perception, because we ultimately want our results to be validated by human perception. For our distance metric, we will focus on joint orientation distance, finger spreading distances, as well as joint velocity. We may add more features later once I have calculated the distances of these features. This requires extracting data from our model in Maya and calculating distances. I am excited to work more on this aspect of the project, and to write Python scripts to extract the data.
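To give an idea of how the per-feature distances might combine into one score, here is a minimal sketch. The feature names ('orientation', 'spread', 'velocity') and the equal default weights are my own placeholder assumptions, not the final metric:

```python
import numpy as np

def motion_distance(a, b, weights=(1.0, 1.0, 1.0)):
    """Combine per-feature distances between two motions into one score.

    `a` and `b` are dicts with hypothetical keys 'orientation', 'spread',
    and 'velocity', each holding a NumPy array of per-frame feature values.
    The weights are placeholders; a learned metric would tune them.
    """
    w_o, w_s, w_v = weights
    d_o = np.linalg.norm(a['orientation'] - b['orientation'])
    d_s = np.linalg.norm(a['spread'] - b['spread'])
    d_v = np.linalg.norm(a['velocity'] - b['velocity'])
    return w_o * d_o + w_s * d_s + w_v * d_v
```

A metric-learning step would then fit the weights so that the scores agree with the human similarity judgments.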

Isabela, who came to Clemson this week, is working on clustering our data in order to visualize and classify gestures. She has more experience in machine learning than me, so I'm sure I will be asking her lots of questions. I was so surprised to learn that we went to the same high school! It is great working with another person from California, let alone someone who is from my hometown. Unfortunately the weather has been overcast and a bit rainy this weekend, but once we both get our bicycles and the weather improves, we plan on venturing around Clemson and taking in all the beauty.

Week 4: June 26 - 30

This week I discovered that I needed to segment my clips, so that I am only using the active phases of hand gestures. The active phases will be determined using wrist velocity: with a velocity threshold, we keep segments within a specific velocity range, so we are not arbitrarily selecting which frames count as the active phase of a gesture.
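The thresholding idea can be sketched in a few lines of Python. The simple above/below rule here is an assumption on my part; the actual segmentation algorithm has more merge and split conditions:

```python
import numpy as np

def active_segments(speeds, threshold):
    """Return (start, end) frame-index pairs where speed stays above threshold.

    `speeds` is a per-frame wrist speed sequence; frames above `threshold`
    are grouped into contiguous runs, each treated as one active phase.
    """
    above = np.asarray(speeds) > threshold
    segments = []
    start = None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i                       # run begins
        elif not flag and start is not None:
            segments.append((start, i - 1))  # run ends on previous frame
            start = None
    if start is not None:                    # run extends to the last frame
        segments.append((start, len(above) - 1))
    return segments
```

For example, `active_segments([0, 1, 5, 6, 0, 7, 8, 9, 0], 2)` yields two runs, frames 2-3 and 5-7.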

Sophie gave me Matlab code for velocity, which I will need to apply to my wrist rotation data. This can be complex, because the data needs to be properly converted into a matrix that will work nicely with the velocity algorithm. Isabela has introduced me to numpy, which contains many methods for concatenating and reshaping arrays. I am still getting comfortable with reading and parsing the data from text files and converting it to numpy arrays. Once the data looks good, I will need to test the velocity algorithm and make sure it works well with the segmentation code.
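The parsing step might look like the sketch below; it assumes a simple whitespace-separated text format with one X Y Z rotation triple per frame, which may not match the actual export format:

```python
import numpy as np

def load_rotations(path):
    """Read whitespace-separated per-frame X Y Z rotations into an (n, 3) array."""
    # np.loadtxt parses whitespace-separated numbers; reshape guarantees (n, 3)
    return np.loadtxt(path).reshape(-1, 3)

def stack_clips(paths):
    """Concatenate several clips along the frame axis into one array."""
    return np.concatenate([load_rotations(p) for p in paths], axis=0)
```

Once the data is an `(n, 3)` array, the reshaping and concatenating that numpy provides make it straightforward to feed into the velocity code.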

The code for segmentation was previously written for Unity in C++, so it will be a challenge to 'translate' it from C++ to Python. I am familiar with Java, but not C++, so hopefully I can interpret the code. Sophie and Moshe (a PhD student) both explained the segmentation algorithm to me in detail; it has many rules and conditions for properly splitting segments. Because we are segmenting a rotation curve, there are many points where we may need to merge and split data. Previously, this method has been used in Matlab and Unity, but not Maya. My goal is to see if I can properly segment clips in Maya, so the segmentation can be applied to many clips. Once the data is segmented, I will extract features and begin calculating distance metrics.

This weekend I am taking a trip to Asheville, North Carolina for 4th of July. I am really excited because my boyfriend is coming to visit. We plan on hiking in the mountains, exploring the city of Asheville, and traveling to the Biltmore estate.

Week 5: July 3 - 7

After having a wonderful 4th of July in Asheville, I spent the rest of the week completing the velocity algorithm, testing it, and gathering data. I first made sure that my wrist rotation data was correct. I was actually getting the rotations in local space instead of world (or 'global') space, so I had to recollect the rotation data in order to calculate the velocity. This took some time, but luckily my velocity code was already written and worked. The velocity is calculated from the change in wrist rotation between frames divided by the elapsed time; in the context of animation, time can be measured in frames. Since it was a short week, I continued to work on the segmentation code, but did not finish it. I am hopeful that next week I can finish the segmenting, which will help me determine active phases automatically and systematically, rather than having to find the active phase of each clip manually.
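A simplified version of that frame-to-frame calculation could be sketched as follows; it is a plain finite difference on Euler angles and ignores angle wraparound, so treat it as an illustration rather than the actual algorithm:

```python
import numpy as np

def wrist_speed(rotations, fps=30.0):
    """Per-frame angular speed from (n, 3) world-space Euler rotations (degrees).

    Takes the finite difference between consecutive frames (change per frame)
    and scales by the frame rate, giving degrees per second. The fps default
    is an assumption; Euler wraparound at +/-180 degrees is not handled.
    """
    rotations = np.asarray(rotations, dtype=float)
    diffs = np.diff(rotations, axis=0)          # change between frames
    return np.linalg.norm(diffs, axis=1) * fps  # magnitude * frames per second
```

A wrist rotating steadily at 1 degree per frame at 30 fps would come out at 30 degrees per second, and a motionless wrist at zero.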

This week I also learned how to use matplotlib, which I really enjoyed! Being able to visualize your data is so helpful. Since I only need to gather 5-8 sample gestures, the velocity visualization helped me determine which gestures I want to use, by showing which clips might have the longest active phase. I want to try to use segments that have at least 20-50 usable frames, because the average clip is less than 1 second long. Since I want the phase to be visible and easy to distinguish from a human perspective, the more frames the better. Having more frames will also give me more allowance to add offsets such as joint rotation, bending of fingers, changing distances between fingers, and increasing and decreasing frame rates (which speeds up and slows down the animation respectively).

Here is an example of the wrist velocity data, for the "attention" gesture:

I hope to spend some time this weekend in nature, as the weather will be warming up and it looks like perfect beach weather. Here in Clemson the 'beach' is actually on a lake, unlike in California where I automatically think of the ocean.

Week 6: July 10 - 14

This past weekend was so much fun! About 10 of the REU students and I went to the beach, which was a beautiful cove not too far from campus. We did a short 1.8 mile hike to the beach and back, and spent most of the day swimming in Lake Hartwell. On Sunday, I went for an impromptu bike ride to Y beach and back, and by the end of the weekend my body was exhausted from all the activity. I attribute this to the 90 degree weather with added humidity. I wouldn't say I'm a big fan of the heat, but I do prefer it over cold weather, and I'm glad to be experiencing a true summer for once. Back in San Francisco where I'm from, 4th of July was nothing but fog and coldness. I feel pretty lucky to be avoiding the summer 'winter' of SF and enjoying all this nature.

Other than weekend adventures, this week was really productive in the lab. I finished my segmentation code, and now I can begin adding the offsets to fingers in active phases. I figured out how to slow down and speed up clips using Maya's re-timing feature via the GUI, which Isabela found via a Google search. Maya can be confusing, because there are usually at least 3 different ways to do something: manually in the UI, programmatically through MEL or Python, or through the 'dope sheet' editor, which is more commonly used in film production. In my experience, whether my results need to be reproducible usually determines whether I do something programmatically or through the GUI.

Next week I will need to do a little research on how to best compute joint distances using Euclidean distance, then continue adding offsets.

Week 7: July 17 - 21

This week I was able to use a portion of a formula from a research paper that finds the distance between two Euler angles. I am using a basic Euclidean distance formula, which can be applied to the rotations of the fingers for a gesture's active phase. Using the distance, I am able to find the distances between each finger joint that I want to include in my representation. Dr. Jörg suggested we do not need to use fingertips, since they generally don't contain helpful rotation information, typically holding values of 0.0. I will be using the finger joints labeled MP, PIP, and DIP, and computing the distances between them on the right hand. I can then compare the distances of those joints with distances of joints from a different clip with an offset.
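The joint-distance comparison between two clips could look something like the sketch below. The `(n_frames, n_joints, 3)` array layout and the mean-over-frames summary are my own assumptions about how the MP, PIP, and DIP rotations are arranged:

```python
import numpy as np

def joint_rotation_distance(clip_a, clip_b):
    """Euclidean distance between two clips' joint rotations.

    Each clip is an (n_frames, n_joints, 3) array of Euler angles for the
    joints of interest (e.g. MP, PIP, and DIP of each right-hand finger).
    Per-joint distances are summed within each frame, then averaged over
    frames to give one number per clip pair.
    """
    a = np.asarray(clip_a, dtype=float)
    b = np.asarray(clip_b, dtype=float)
    per_joint = np.linalg.norm(a - b, axis=2)  # (n_frames, n_joints)
    return per_joint.sum(axis=1).mean()
```

Comparing a clip against an offset copy of itself then reduces to one function call per pair.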

Most of my clips have offsets applied, but I may do some final adjusting to make sure they are perceptually different, or have interesting offsets. The distances between clips should not be too time consuming to find, given that I have the code written and ready to go. However, I do have a big step ahead of me, and a big step for the experiment: applying the lifelike hand model to my skeleton, so that we can perform the similarity experiment. I don't have experience animating objects to match another object's animation (translation, rotation, and scale), and this is something that will need to be done in Maya.

Next week, I plan to start adding the rotations from the skeleton to the hand model provided, then begin to output playblasts of clips which can be compared to one another. This may or may not be a time consuming process, and I'm not confident I can finish before I leave, since I only have 3 weeks left. However, I will try my best to accomplish this part of the research!

Week 8: July 24 - 27

This week I began computing distances between joints, and computing the differences between clips using those distances. I also made more adjustments to the hands to make sure the bending was perceptually different enough, and finished making slower and faster versions of the clips.

I began adding rotations from the original skeleton to the hand model, however there are some complications. Unfortunately, the local orientations of the skeleton and geometry model do not align, and are in fact completely different. Next week I will look into ways of fixing the issue. This will likely involve adding offsets to orientation constraints and adjusting the local pivot points of the joints. I also plan to continue working on evaluating and visualizing the distances between clips, and begin writing my final report.

This week I got to see the other REU students' posters at the 5th Annual Summer Undergraduate Research Symposium. Most of the poster topics were related to biology and bioengineering, so there were plenty of concepts I was unfamiliar with, such as "Evaluating Gadolinium Containing Nanoparticle Contrast Agents for Magnetic Resonance Imaging". However, the students did an excellent job of explaining their research, and I enjoyed seeing how each project was unique.

Since it is the last weekend for some of the REUs, three other students and I are going to Nashville for a weekend trip. I'm a musician, so it should be great exploring Music City and soaking up all the history of Nashville. There are also amazing vegan food places in Nash that I'm excited to try.

Week 9: July 31 - August 4

This week was a big challenge, and my goal was to fix the orientation issues for the geometry hands. I did make some progress fixing the positions of the hand and fingers, but not all of the wrist and finger rotations are correct. Even the PhD student, who has a lot of experience in Maya, said there isn't an optimal solution to the problem, because the original skeleton is not good. Basically, when a skeleton's animation is 'zeroed' out, the skeleton should form a T-pose. The skeletons I am working with are very imperfect and do not form the T-pose, so it is very hard to correct the rotations. There is a more difficult approach to solving the problem, using inverse kinematics (IK), but even experienced Maya programmers take a while to learn it. I will consult with Sophie, who is currently at SIGGRAPH 2017, on how to solve this problem next week.

Since I ran into some issues with the geometry hands, I output visualizations of the joint distances between clips and began comparing them. I also began writing my final report. Only one more week left!

Week 10: August 7 - 11

For my final week, my advisor and I decided to change the project, due to issues with the joint orientations of the two skeletons. Instead of adding geometry to the original animation skeleton, I used an existing hand model to create different finger poses. I created 14 finger poses, a mix of perceptually similar and perceptually different poses, in order to make comparisons. I used Euclidean distance and root mean square (RMS) error to create a metric for comparisons between poses. I also created 360 degree turntables for each of the poses, and visualized the data for the comparisons.
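The RMS part of the comparison could be sketched like this; flattening each pose's joint rotations into one vector is my own assumption about the representation:

```python
import numpy as np

def pose_rms(pose_a, pose_b):
    """Root-mean-square difference between two poses' joint rotations.

    Each pose is flattened into one vector of Euler angles, and the RMS
    of the element-wise differences gives a single dissimilarity score.
    """
    a = np.asarray(pose_a, dtype=float).ravel()
    b = np.asarray(pose_b, dtype=float).ravel()
    return np.sqrt(np.mean((a - b) ** 2))
```

Scoring all pairs among the 14 poses then gives a small matrix that can be visualized directly, with low values marking the perceptually similar pairs.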

I am excited to be going back home to California, and will spend some time reflecting on my experience. I will also write my final report. Overall, this was a great summer that opened my eyes to many areas of computer graphics research. I am very interested in doing more research in computer graphics, and possibly being a co-author on a paper submitted to a graphics conference. I would love to further independent research in the field and find new ways of combining computer graphics with other topics like machine learning, data visualization, and human-computer interaction.

Final Report