CRA Distributed Mentor Program (Summer '02)

The Project: Automatic Detection of Kinematic Constraints

This project contributes to graduate student Alla Safonova's work on scaling human motion capture data to animate characters whose shape differs significantly from that of the original motion capture actor.

Specific points of contact need to be preserved when scaling mocap (short for "motion capture") data, because the anatomical structure of the target geometry can differ substantially from that of the source geometry. Applying collision detection to the scaled model resolves interpenetration, but it can also introduce behavior that is inconsistent with the original mocap data. Specifically, I would like to identify the instances when an end effector (e.g., the tip of the hand or the foot) needs to remain in contact with a specific body part. For example, when a man scratches his head with his hand, we need to make sure that the hand is actually scratching in a plane such that it makes contact with the head. Our previous model made the scratching motion at some displacement away from the head. We were able to correct the error manually, but we would like an algorithm that does this automatically.


Step 1: Apply a more accurate geometric mesh to the mocap data

I started with a mocap data visualization program created by a couple of grad students in the lab. It uses cylinders as the body parts of the skeleton.

[Image: Skeleton with cylinders]


I want to apply a more accurate geometric mesh of the human mocap actor to the skeleton. Another student created the mesh for us in Maya. I then extract each body part, such as the humerus, lower back, etc., from the model as a separate .obj file.
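Reading those per-part .obj files back in is straightforward, since the format is plain text. The sketch below is a minimal, assumption-laden loader: it handles only `v` and `f` records and ignores the normals, texture coordinates, and groups that a real Maya export would also contain.

```python
def load_obj(lines):
    """Parse 'v x y z' and 'f i j k ...' records from .obj text lines.

    Returns (vertices, faces): vertices as (x, y, z) tuples, faces as
    tuples of 0-based vertex indices.
    """
    vertices, faces = [], []
    for line in lines:
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "v":
            vertices.append(tuple(float(c) for c in parts[1:4]))
        elif parts[0] == "f":
            # .obj face indices are 1-based and may look like "3/1/2";
            # keep only the leading vertex index, converted to 0-based.
            faces.append(tuple(int(p.split("/")[0]) - 1 for p in parts[1:]))
    return vertices, faces
```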

[Image: Mocap actor geometric mesh model]


An important consideration is that the cylinders in the original program were displayed using OpenGL transformations. OpenGL transformations do not physically move the objects to the correct coordinates; they only project the image to that position. To perform collision detection, each body part must actually be moved to the correct coordinates for each frame. Thus, I needed to calculate the transformation and apply it to the vertices of the mesh for each frame.
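Baking the transform in amounts to multiplying every vertex by the frame's 4x4 homogeneous matrix on the CPU, rather than leaving that multiplication to the OpenGL modelview stage at draw time. A minimal sketch, assuming a row-major matrix layout (the actual layout in the visualization program may differ):

```python
def transform_vertices(matrix, vertices):
    """Apply a 4x4 homogeneous transform (row-major) to a list of
    (x, y, z) points, returning the moved points.

    This is what 'physically moving' a body part means: the vertex
    data itself ends up at the world coordinates for the frame, so a
    collision query sees geometry in the right place.
    """
    moved = []
    for x, y, z in vertices:
        p = (x, y, z, 1.0)  # homogeneous coordinate
        moved.append(tuple(
            sum(matrix[r][c] * p[c] for c in range(4)) for r in range(3)
        ))
    return moved
```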

[Image: Skeleton with human geometric mesh]



Step 2: Use tolerance detection to identify kinematic constraints

I want to find the frames in which the end effector is in contact with a body part, such as the hip. I use the SWIFT++ collision detection package to detect frames when the end effector is within a specified tolerance of a specified body part. For example, these are some of the types of poses I want to pick up on:

[Image: Hand on hip]


[Image: Hands forming a T shape]


As you can see, those are two instances we want to mark as kinematic constraints during collision detection, because we want the end effector to remain in contact with the body after scaling.
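The tolerance query itself can be illustrated with a brute-force stand-in: flag a frame as a candidate kinematic constraint whenever any end-effector vertex comes within the tolerance of any vertex of the target body part. SWIFT++ answers this query far more efficiently (and against the actual surfaces, not just vertices); the sketch below only shows the test being performed, with illustrative inputs.

```python
def within_tolerance(effector_verts, part_verts, tolerance):
    """True if any effector vertex is within `tolerance` of any
    body-part vertex (compared in squared distance to avoid sqrt)."""
    tol_sq = tolerance * tolerance
    for ex, ey, ez in effector_verts:
        for px, py, pz in part_verts:
            d_sq = (ex - px) ** 2 + (ey - py) ** 2 + (ez - pz) ** 2
            if d_sq <= tol_sq:
                return True
    return False

def constrained_frames(frames, tolerance):
    """frames: per-frame (effector_verts, part_verts) pairs.
    Returns the indices of frames that look like kinematic constraints."""
    return [i for i, (eff, part) in enumerate(frames)
            if within_tolerance(eff, part, tolerance)]
```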


Step 3: Finding a correspondence between the two models

Now that I have identified the kinematic constraints on the original mocap actor model, I need to find the corresponding positions on the new character model. I am investigating the use of a morphing package to implement this correspondence.
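As a crude baseline for that correspondence (not the morphing approach itself), a constraint point on the source model can simply be snapped to the nearest vertex of the target model's matching body part. This nearest-vertex matching is only a placeholder; the function and its inputs are illustrative.

```python
def nearest_vertex(point, target_verts):
    """Return the index of the target-mesh vertex closest to `point`,
    comparing squared distances. A morphing-based correspondence would
    replace this with a proper surface-to-surface mapping."""
    px, py, pz = point
    best_i, best_d = -1, float("inf")
    for i, (x, y, z) in enumerate(target_verts):
        d = (x - px) ** 2 + (y - py) ** 2 + (z - pz) ** 2
        if d < best_d:
            best_i, best_d = i, d
    return best_i
```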