|CRA Distributed Mentor Program (Summer '02)|
There is much interest in using motion capture data to create realistic movement for character animation. However, motion capture data transfers easily only if the animated character has exactly the same body proportions as the motion capture actor. If you want to change the body proportions and shape, or to change the resulting motion, things become complicated. Our work is focused on using motion capture data to animate a character of significantly different size and shape from the original motion capture actor.
I basically came to CMU with no background in computer graphics. I had taken the basic lower-division courses in data structures and compilers, along with the upper-division courses in digital design and operating systems. I think this only maximized the amount of material I learned this summer. The tools that I learned and used are OpenGL, Visual C++, Maya, and some mathematical techniques from computer graphics.
When transferring motion capture data to the target character, limbs will inevitably intersect with each other, among other problems. We then run a collision detection routine to clean up the mess. However, this often breaks contacts that need to be enforced. For example, when the character puts his hand on his hip, the hand may be displaced from the hip after collision detection. My project was to identify these kinematic constraints, where two things need to remain in contact, and to automate the process as much as possible. My focus was on constraints for the end effectors; in these specific cases, those are the tips of the hands.
The first thing I did was apply a more realistic human model to display the motion capture data. I started with a motion capture data player that displays the motion as a cylinder man. Our lab's resident artist constructed a geometric mesh of the motion capture actor in Maya. I imported this model into the motion capture player, applied the correct transformations for each frame, and, voila, the player displays the geometric mesh of the mocap actor moving with the mocap data. We can then examine the motion frame by frame and manually identify frames that should be detected as constraints.
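The per-frame step of attaching the mesh to the motion data can be sketched roughly as below. This is a minimal rigid-skinning illustration of my own, not the player's actual code: each vertex is assumed to be attached to a single bone, and the function names and array layout are assumptions.

```python
import numpy as np

def pose_mesh(vertices, vertex_bone, bone_matrices):
    """Rigidly attach each mesh vertex to one bone and apply that
    bone's 4x4 world transform for the current frame.
    vertices: (n, 3) rest-pose positions; vertex_bone: bone index per
    vertex; bone_matrices: list of 4x4 transforms for this frame."""
    posed = np.empty_like(vertices)
    for i, v in enumerate(vertices):
        M = bone_matrices[vertex_bone[i]]       # this frame's transform
        posed[i] = (M @ np.append(v, 1.0))[:3]  # homogeneous multiply
    return posed
```

Redoing this for every frame of the motion is what makes the mesh follow the mocap data in the player.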
The second thing I did was use a previously published collision detection package for its contact determination queries. The package can find objects that are intersecting or within a specified tolerance, and it returns the distances between objects. I used it to query each frame for contacts between an end effector and specific body parts.
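The per-frame contact test can be sketched as follows. This is a simplified stand-in for the collision package's query, using point positions and a Euclidean distance check; the function name, array shapes, and tolerance value are my own assumptions.

```python
import numpy as np

def find_contact_frames(effector_pos, part_pos, tolerance=0.02):
    """Flag frames where an end effector stays within `tolerance`
    (in scene units) of a body part -- candidate kinematic constraints.
    effector_pos, part_pos: (num_frames, 3) arrays of world positions."""
    dists = np.linalg.norm(effector_pos - part_pos, axis=1)
    return np.where(dists < tolerance)[0]
```

Running this over a hand's positions and the hip's positions, for instance, yields the frame ranges where a hand-on-hip constraint should be enforced.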
I also started looking into existing morphing algorithms that would provide a correspondence mapping from the surface of the source mesh to the target mesh. I first looked for off-the-shelf morphing packages but was unsuccessful. I then examined some papers whose algorithms we could implement for the mapping. One publication forms its correspondence through transformations to a common base domain; another paper uses an implicit function to form the correspondence. Both seemed like too much work for our purpose, since we do not actually need to morph between the two forms.
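Since we only need a correspondence and not a full morph, a much cruder mapping may suffice. A brute-force closest-point sketch of that idea, entirely my own and not from either paper, could look like:

```python
import numpy as np

def closest_point_map(source_verts, target_verts):
    """For each source mesh vertex, return the index of the nearest
    target mesh vertex -- a naive surface correspondence.
    source_verts: (n, 3); target_verts: (m, 3)."""
    # (n, m) pairwise distances between the two vertex sets
    d = np.linalg.norm(source_verts[:, None, :] - target_verts[None, :, :],
                       axis=2)
    return d.argmin(axis=1)
```

This ignores mesh topology, so it can map nearby-but-unrelated surfaces to each other; the base-domain and implicit-function methods avoid exactly that problem, at the cost of the extra machinery described above.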
This summer research experience went very well. I had the opportunity to live and work in a new environment and meet a diverse group of people. I was lucky to be placed in this lab in particular, because the facilities are well funded and the people are open-minded and friendly. The cost of living in Pittsburgh is among the lowest in the country (especially noticeable since I come from Berkeley).
Being able to attend SIGGRAPH was amazing. I was completely blown away. It was a gathering of many different types of people, including computer scientists, marketing representatives, and artists.