CRA Distributed Mentor Program (Summer '02)
Week 1 (June 3, 2002) - Before this research program, I had not had much experience with computer graphics. For the first week, I read some background material about the different file formats and programs used for the computer animation research.
I also explored three possible projects for my summer research. All of these projects would contribute to Alla's main project of scaling human motion capture data to non-humanoid characters. One option is more biologically based: we would come up with a general algorithm, applicable to a very broad range of characters (bipeds, quadrupeds, birds, etc.), that calculates the muscle distribution of a character given its volume and skeleton. This project would involve a lot of searching through the computer graphics and anatomy literature for previous work on building such a structure. We would then have to figure out how to incorporate it into code.
The second option concerns motion texturing. In Alla's project, when we transfer human motion capture data to a non-humanoid character, the character may be capable of actions that the human is not. For example, when the prairie dog scratches his head, he does so at a much higher frequency than the human does. Ideally, we would speed up the human's scratching motion for that segment; however, that shortens the duration of the actual scratching. We must then use motion texturing to make the scratching motion appear continuous and consistent while still filling the same period of time that the human scratches for. There is also the problem that the human already does some scaling when acting out the sequence. We have been able to successfully apply scaling and motion texturing to a prairie dog scratching his head, but not to a chicken pecking.
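To make the idea concrete, here is a minimal C++ sketch (my own, with a made-up Pose type) of the kind of time warp involved: resample the segment faster, then tile the shortened clip back out to the original duration. A real system would blend the seams and apply motion texture rather than repeating frames verbatim.

    #include <vector>

    // Hypothetical pose type: one frame of joint angles.
    struct Pose { std::vector<double> joints; };

    // Speed a segment up by `factor` (> 1), then repeat the shortened clip so
    // the output occupies the same number of frames as the original segment.
    // Assumes a non-empty segment; a real version would blend at the seams.
    std::vector<Pose> speedUpAndLoop(const std::vector<Pose>& seg, double factor)
    {
        std::vector<Pose> fast;                     // nearest-neighbor resample
        for (double t = 0.0; t < seg.size(); t += factor)
            fast.push_back(seg[(size_t)t]);

        std::vector<Pose> out;                      // tile back to full length
        for (size_t i = 0; i < seg.size(); ++i)
            out.push_back(fast[i % fast.size()]);
        return out;
    }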
The final goal of the motion-texturing project is a general algorithm that automatically detects instances that need to be sped up, then scales and applies motion texture to the sequence so that the result looks natural for that specific character. The third option is the one that I chose at the end of the week: automatic detection of kinematic constraints. A kinematic constraint is any instant when an end effector (e.g., a hand) needs to remain in contact with a certain part of the body. The complication is that the non-humanoid character has a very different structure from the human. We need some sort of morphing that determines which point of contact on the human corresponds to which point of contact on the non-humanoid character.
After I decided on this project, Alla gave me some material to read about specific file formats: .obj files model the geometry of a structure with triangles, .asf is the skeleton file used by the motion capture programs, and .amc files store the actual frames of the skeleton's motion. I also read some background information about orientation representation and inverse kinematics.
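The geometry records in an .obj file are quite simple (a "v x y z" line per vertex and an "f i j k" line per triangle, with 1-based indices), so a minimal loader sketch looks something like the following; normals, texture coordinates, and the "f v/vt/vn" variants are ignored here for brevity.

    #include <cstdio>
    #include <vector>

    struct Vec3 { float x, y, z; };
    struct Tri  { int a, b, c; };             // 0-based vertex indices

    // Read the vertex and triangle lists from a triangulated .obj file.
    bool loadObj(const char* path, std::vector<Vec3>& verts, std::vector<Tri>& tris)
    {
        FILE* fp = std::fopen(path, "r");
        if (!fp) return false;
        char tag[8];
        while (std::fscanf(fp, "%7s", tag) == 1) {
            if (tag[0] == 'v' && tag[1] == '\0') {
                Vec3 v;
                std::fscanf(fp, "%f %f %f", &v.x, &v.y, &v.z);
                verts.push_back(v);
            } else if (tag[0] == 'f' && tag[1] == '\0') {
                Tri t;
                std::fscanf(fp, "%d %d %d", &t.a, &t.b, &t.c);
                --t.a; --t.b; --t.c;          // .obj indices are 1-based
                tris.push_back(t);
            } else {                          // skip any other record type
                int ch;
                while ((ch = std::fgetc(fp)) != '\n' && ch != EOF) {}
            }
        }
        std::fclose(fp);
        return true;
    }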
Week 2 (June 10, 2002) - I read some existing code for displaying motion capture data as a skeleton built from cylinders and ellipses. I modified a function that converts from .obj to .row so that it takes the data in an .obj file and stores it in data structures, from which I display the mesh with OpenGL polygon functions. I was successful in displaying something, but it needs to be scaled and oriented properly. Alla therefore had me read more about using OpenGL for transformations and orientation, along with the sections of code that recursively traverse the skeleton and draw each bone.
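The display step itself is just immediate-mode OpenGL over the stored triangles; reusing the Vec3/Tri structures from the loader sketch above, it amounts to something like:

    #include <GL/gl.h>
    #include <vector>

    // Draw the mesh as flat triangles (normals and materials omitted).
    void drawMesh(const std::vector<Vec3>& verts, const std::vector<Tri>& tris)
    {
        glBegin(GL_TRIANGLES);
        for (size_t i = 0; i < tris.size(); ++i) {
            const Tri& t = tris[i];
            glVertex3f(verts[t.a].x, verts[t.a].y, verts[t.a].z);
            glVertex3f(verts[t.b].x, verts[t.b].y, verts[t.b].z);
            glVertex3f(verts[t.c].x, verts[t.c].y, verts[t.c].z);
        }
        glEnd();
    }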
Week 3 (June 17, 2002) - The existing source code displays the motion capture data with 3D cylinders and other simple shapes. We would like to take the geometric mesh of a man, broken down into its components, and replace the cylinders and simple shapes with this more accurate model. Each component can be exported from Maya as an .obj file, and the local coordinates of these meshes correspond directly to the skeleton's .asf file.
For the first part of the week, I tried to display the left humerus (upper arm) of the man in the right starting position. I was finally able to do that by disabling some of the transformations applied to the cylinder (which used to be the man's upper arm) and applying a translation of my own. However, when I loaded the motion data, the humerus was disjointed from the man. I then tried to move the left humerus mesh so that it sits at the cylinder's starting position in the cylinder's local coordinate system. That way, I can leave in place all the transformations applied to the cylinder, including modelview and projection. Since the mesh has the same starting position as the cylinder, it should be oriented properly after all the transformations have been applied.
However, in the middle of the week, I discovered that collision detection cannot be applied to objects oriented simply by OpenGL commands. This is because there is a difference between displaying an object with OpenGL and actually moving the object to that position in global coordinates. For collision detection to work, the object must actually be moved to the proper position in global coordinates.
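Concretely, that means transforming the vertices themselves on the CPU rather than only pushing a matrix onto the OpenGL stack, which changes nothing but how the mesh is drawn. A sketch, with a schematic rigid-transform type and the Vec3 from above:

    // A rigid transform: global = R * local + T.
    struct Xform { float R[3][3]; float T[3]; };

    // Move the mesh into global coordinates so the collision-detection code
    // sees the posed geometry. glMultMatrix*() alone would leave the stored
    // vertices in local space.
    void applyXform(const Xform& X, std::vector<Vec3>& verts)
    {
        for (size_t i = 0; i < verts.size(); ++i) {
            const Vec3 v = verts[i];
            verts[i].x = X.R[0][0]*v.x + X.R[0][1]*v.y + X.R[0][2]*v.z + X.T[0];
            verts[i].y = X.R[1][0]*v.x + X.R[1][1]*v.y + X.R[1][2]*v.z + X.T[1];
            verts[i].z = X.R[2][0]*v.x + X.R[2][1]*v.y + X.R[2][2]*v.z + X.T[2];
        }
    }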
I wasn't sure how to obtain a transformation when you have the initial and final positions and orientations, so Alla showed me how. Say you have P0 = initial position, P1 = final position, D0 = initial orientation, and D1 = final orientation. You first apply a direct translation to get from P0 to P1. Then you get the axis of rotation by taking the cross product of D0 and D1. A simple function calculates the angle of rotation, and another produces the transformation matrix given the axis and angle of rotation. Voila! It was so cool. I really want to take an advanced linear algebra course now.
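Written out as code (my own rendering of the recipe, assuming unit-length, non-parallel D0 and D1, and reusing Vec3 from above), the matrix at the end is Rodrigues' rotation formula:

    #include <cmath>

    // Rotation taking direction D0 to D1: axis = D0 x D1, angle from the dot
    // product, matrix via Rodrigues' formula. The translation part is simply
    // P1 - P0.
    void rotationBetween(const Vec3& D0, const Vec3& D1, float R[3][3])
    {
        Vec3 a = { D0.y*D1.z - D0.z*D1.y,                  // axis = D0 x D1
                   D0.z*D1.x - D0.x*D1.z,
                   D0.x*D1.y - D0.y*D1.x };
        float s = std::sqrt(a.x*a.x + a.y*a.y + a.z*a.z);  // = sin(angle)
        a.x /= s; a.y /= s; a.z /= s;                      // normalize axis

        float c = D0.x*D1.x + D0.y*D1.y + D0.z*D1.z;       // = cos(angle)
        float t = 1.0f - c;

        R[0][0] = t*a.x*a.x + c;     R[0][1] = t*a.x*a.y - s*a.z; R[0][2] = t*a.x*a.z + s*a.y;
        R[1][0] = t*a.x*a.y + s*a.z; R[1][1] = t*a.y*a.y + c;     R[1][2] = t*a.y*a.z - s*a.x;
        R[2][0] = t*a.x*a.z - s*a.y; R[2][1] = t*a.y*a.z + s*a.x; R[2][2] = t*a.z*a.z + c;
    }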
Week 4 (June 24, 2002) - Most of the end of last week and this week was spent actually implementing those transformations in the existing source code. I spent a long time just poring through the code to figure out what I should modify. After I put in all the modifications I thought were needed, there was the frustrating debugging mode, where everything should work in theory but doesn't. Once I fixed some simple errors, it basically worked correctly, except that everything was translated too far from where it should be. I was completely stumped. Finally, after I had checked everything and added print statements everywhere, I asked Alla about it. She told me to move my draw method somewhere else, and it turned out all the transformations were correct; I was just calling it at the wrong point in the sequence. That was such a relief, so I guess the hard part is over.
We meet with Prof. Hodgins on Tuesday mornings to update her on our progress and get new ideas for our projects. So on Monday, Alla told me to stop working on the code and start thinking ahead to what we should do once the code is working. The next part of the project, after I can combine the geometric mesh with the motion data properly, is to determine which frames to mark as kinematic constraints. Basically, a constraint is any moment where an end effector (e.g., a hand) comes into contact with the body. A SIGGRAPH 2002 paper, "Synthesis of Complex Dynamic Character Motion from Simple Animations," contains some information about automatically detecting kinematic constraints. However, their method determines the constraints based on the end effector being still over a certain number of frames; we want the end effector to be both close to the body and still for a certain number of frames. The new faculty member in our lab, James Kuffner, suggested that I take a look at SWIFT++, a collision detection package developed at UNC.
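In code, the close-and-still rule would look something like this sketch, where the per-frame distances and speeds (hypothetical inputs here) would come from the collision queries and finite differences of the mocap data:

    #include <vector>

    // Mark frame f as a kinematic constraint if the end effector stays both
    // within `tol` of the body part and slower than `maxSpeed` for at least
    // `minFrames` consecutive frames.
    std::vector<bool> detectConstraints(const std::vector<double>& dist,
                                        const std::vector<double>& speed,
                                        double tol, double maxSpeed, int minFrames)
    {
        const int n = (int)dist.size();
        std::vector<bool> constraint(n, false);
        int run = 0;
        for (int f = 0; f < n; ++f) {
            bool close = dist[f] < tol && speed[f] < maxSpeed;
            run = close ? run + 1 : 0;
            if (run >= minFrames)                 // mark the whole run so far
                for (int g = f - run + 1; g <= f; ++g)
                    constraint[g] = true;
        }
        return constraint;
    }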
On Tuesday we had our meeting, and it now seems I will be using SWIFT++. However, the data needs to be converted into the proper form for SWIFT++ to use. Our lab meetings are on Tuesday evenings, and the plan is for the grad students to take turns presenting SIGGRAPH papers until SIGGRAPH itself. This time, Kiran presented the papers on cloth simulation, so we spent most of Tuesday reading them; they had intense amounts of math and physics.
Week 5 (July 1, 2002) - This was a relatively short week, but I managed to update the code with the additions needed to incorporate the SWIFT++ package. However, I am still trying to figure out why no collisions within the specified tolerance are being detected. I have already tried a simple case with two cubes, and that picks up detections when it should. I will need to run some test cases on how SWIFT++ applies the transformations to the geometric mesh.
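One way to sanity-check the tolerance query is with a brute-force stand-in like the following (vertex-to-vertex only, so it can miss face-to-face proximity, but fine for tiny test cases like the two cubes; SWIFT++ itself answers this hierarchically over the exact surfaces):

    #include <cmath>
    #include <vector>

    // Are any two vertices of the two globally positioned meshes within
    // `tol` of each other? Quadratic; for debugging use only.
    bool withinTolerance(const std::vector<Vec3>& A,
                         const std::vector<Vec3>& B, float tol)
    {
        for (size_t i = 0; i < A.size(); ++i)
            for (size_t j = 0; j < B.size(); ++j) {
                float dx = A[i].x - B[j].x;
                float dy = A[i].y - B[j].y;
                float dz = A[i].z - B[j].z;
                if (std::sqrt(dx*dx + dy*dy + dz*dz) < tol)
                    return true;
            }
        return false;
    }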
On Friday, I helped Kiran, a third-year grad student, with his cloth simulation project. We recorded data for different types of cloth, using a 2D video image and a 3D motion capture "video," for two positions in which the cloth is held. Basically, we wanted to capture how the cloth moves depending on what type of cloth it is. In the process, I learned how to calibrate the cameras in the mocap lab (well, at least for this specific purpose).
Week 6 (July 8, 2002) - We finally determined how to apply the transformations to the mesh correctly for each frame of the motion capture data, and how to feed these transformations to the SWIFT++ collision detection functions. I tried the contact determination queries on the man walking, and they appear to work properly. The two objects I use for contact determination are the two feet. In a walking motion capture, when the feet are together, the query detects a point of contact, and when they move apart, it reports no points of contact. This is great news!
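Put together, the per-frame walking test is roughly the following sketch (reusing the types and helpers from the earlier entries; the real code hands the transforms to SWIFT++ rather than the naive query):

    #include <cstdio>
    #include <vector>

    // For each frame, pose copies of both foot meshes in global coordinates
    // using the transforms decoded from the .amc data, then test contact.
    void reportFootContacts(const std::vector<Vec3>& leftLocal,
                            const std::vector<Vec3>& rightLocal,
                            const std::vector<Xform>& leftXf,
                            const std::vector<Xform>& rightXf, float tol)
    {
        for (size_t f = 0; f < leftXf.size(); ++f) {
            std::vector<Vec3> L = leftLocal, R = rightLocal;
            applyXform(leftXf[f], L);
            applyXform(rightXf[f], R);
            std::printf("frame %u: %s\n", (unsigned)f,
                        withinTolerance(L, R, tol) ? "contact" : "apart");
        }
    }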
Day 1 (July 21, 2002) -
One of the papers I found looks promising: "Multiresolution Mesh Morphing." It uses Dijkstra's algorithm and some user constraints to find a simplified base domain for each object, and then obtains a bijective correspondence map between the two objects through those base domains. To map a point on the source object to the target object, a compound transformation is used: from the source object to the source base domain, to the target base domain, to the target object. Although the correspondence goes through the base domains, it still applies to any point on either object.
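The compound transformation is plain function composition; schematically (with a made-up SurfacePoint type of triangle id plus barycentric coordinates):

    #include <functional>

    struct SurfacePoint { int tri; float b0, b1, b2; };
    typedef std::function<SurfacePoint(const SurfacePoint&)> Map;

    // source -> source base domain -> target base domain -> target object.
    Map compound(Map srcToBase, Map baseToBase, Map baseToTarget)
    {
        return [=](const SurfacePoint& p) {
            return baseToTarget(baseToBase(srcToBase(p)));
        };
    }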
I have also been working on completing the final project for my CRA website and writing documentation of my summer project for Alla.
Week 10 (August 5, 2002) - We have decided that neither the package nor the morphing paper would serve our purpose. Since this is my last week, we decided that I should focus on testing the capabilities of the contact determination system on a wide range of motion capture data. We want it to pick up kinematic constraints given a general definition of the tolerance and of which limbs to activate in the detection system. Before, I had mostly been working with motion capture data focused on capturing the constraints of a hand resting on the hip and of the hands held in a T-shape. Now I also have data with walking and swimming motions, along with the motions from acting out Cock Robin. The walking motion will be tricky, since we don't want the arms swinging past the body to be picked up as constraints; however, since we are using the tips of the fingers, I don't think it will be a problem.

From what I observe in the videos and the motion capture data, it may be possible to set up the contact determination to check for frames when the end effectors come into contact with any part of the body they can naturally reach (for example, it is probably unnecessary to check the right fingers against the right hand, or the right fingers against the right radius). It may be insufficient to check only the velocity of each bone, since both fists may be in contact and moving at the same time (style134). It could be useful to check the relative velocity of the two objects in contact: if one is still, then the other needs to be still to count as a constraint, but if both are moving together, it is still a constraint. There may also need to be different tolerances depending on which two parts you are checking.
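A sketch of that relative-velocity test (the names are mine; velocities are finite differences of consecutive frames, in units of distance per frame):

    #include <cmath>

    // Two parts in contact at frame f count as a constraint only if they are
    // nearly still *relative to each other*, so two fists moving together
    // still register. `maxRelSpeed` (and the contact tolerance upstream)
    // could differ per pair of parts.
    bool isConstraint(const Vec3& pA0, const Vec3& pA1,   // part A at f-1, f
                      const Vec3& pB0, const Vec3& pB1,   // part B at f-1, f
                      bool inContact, float maxRelSpeed)
    {
        if (!inContact) return false;
        float vx = (pA1.x - pA0.x) - (pB1.x - pB0.x);
        float vy = (pA1.y - pA0.y) - (pB1.y - pB0.y);
        float vz = (pA1.z - pA0.z) - (pB1.z - pB0.z);
        return std::sqrt(vx*vx + vy*vy + vz*vz) < maxRelSpeed;
    }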