CRA Distributed Mentor Program (Summer '02)

  Journal

Week 1 (June 3, 2002) - Before this research program, I had not had much experience with computer graphics. For the first week, I read some background material about the different file formats and programs used in computer animation research.

I also explored three possible projects for my summer research. All of these projects would contribute to Alla's main project of scaling human motion capture data to non-humanoid characters. One option is more biologically based: we would come up with a general algorithm that, given the volume and skeleton of a character, calculates its muscle distribution, and that can be applied to a very broad range of characters (e.g., bipeds, quadrupeds, birds). This project would involve searching the computer graphics and anatomy literature for previous work on building such structures, and then figuring out how to incorporate it into code.

The second option involves motion texturing. For Alla's project, when we transfer human motion capture data to a non-humanoid character, the non-humanoid character may be capable of actions that the human is not. For example, when the prairie dog scratches his head, he does so at a much higher frequency than the human does. Ideally, we would speed up the human's scratching motion for that segment. However, that shortens the duration of the actual scratching. We must then use motion texturing to make the scratching motion appear continuous and consistent while still lasting as long as the human's original scratch. There is also the problem that the human already does some scaling when he is acting out the sequence. We have been able to successfully apply scaling and motion texturing to a prairie dog scratching his head, but not to a chicken pecking.

The final goal of that project is a general algorithm that automatically detects instances that need to be sped up, then scales and applies motion texture to the sequence so that the result looks natural for that specific character. The third option is the one I chose at the end of the week: automatic detection of kinematic constraints. A kinematic constraint occurs when an end effector (e.g., a hand) needs to remain in contact with a certain part of the body at a certain point in time. The complication with this project is that the non-humanoid character has a very different structure from the human. We need some sort of morphing to determine which point of contact on the human corresponds to which point of contact on the non-humanoid character.

After I decided on this project, Alla gave me some material to read about specific file formats. .obj is used for geometric modeling of a structure using triangles. .asf is the skeleton file used by the motion capture programs. .amc is the file that stores the actual frames of motion for each part of the skeleton. I also read some background information about orientation representation and inverse kinematics.

Week 2 (June 10, 2002) - I read some previous code for displaying motion capture data with a skeleton that uses cylinders and ellipses for its body. I modified a function that converts from .obj to .row so that it takes the data in an .obj file and stores it in some data structures. Then, from these, I display the mesh with OpenGL polygon functions. I was successful in displaying something, but it needs to be scaled and oriented properly. Thus, Alla had me read more about using OpenGL for transformations and orientation, and the sections of code relating to recursively traversing the skeleton and drawing each bone.
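
To make the .obj-to-data-structures step concrete, here is a minimal sketch in C++, assuming a simple triangulated .obj containing only "v" and "f" records with plain vertex indices. The struct and function names (Mesh, loadObj, drawMesh) are my own illustration, not the actual lab code.

    // Minimal sketch: load a triangulated .obj into simple data structures
    // and draw it with immediate-mode OpenGL.
    #include <GL/gl.h>
    #include <cstdio>
    #include <vector>

    struct Vertex   { float x, y, z; };
    struct Triangle { int v0, v1, v2; };   // 0-based indices into the vertex list

    struct Mesh {
        std::vector<Vertex>   vertices;
        std::vector<Triangle> triangles;
    };

    // Reads only "v x y z" and "f i j k" records (indices are 1-based in the file).
    bool loadObj(const char* path, Mesh& mesh)
    {
        FILE* fp = std::fopen(path, "r");
        if (!fp) return false;

        char tag[8];
        while (std::fscanf(fp, "%7s", tag) == 1) {
            if (tag[0] == 'v' && tag[1] == '\0') {
                Vertex v;
                std::fscanf(fp, "%f %f %f", &v.x, &v.y, &v.z);
                mesh.vertices.push_back(v);
            } else if (tag[0] == 'f' && tag[1] == '\0') {
                Triangle t;
                std::fscanf(fp, "%d %d %d", &t.v0, &t.v1, &t.v2);
                t.v0--; t.v1--; t.v2--;                 // convert to 0-based
                mesh.triangles.push_back(t);
            } else {
                int ch;                                  // skip the rest of the line
                while ((ch = std::fgetc(fp)) != '\n' && ch != EOF) {}
            }
        }
        std::fclose(fp);
        return true;
    }

    // Draws each triangle with fixed-function OpenGL polygon calls.
    void drawMesh(const Mesh& mesh)
    {
        glBegin(GL_TRIANGLES);
        for (const Triangle& t : mesh.triangles) {
            const Vertex& a = mesh.vertices[t.v0];
            const Vertex& b = mesh.vertices[t.v1];
            const Vertex& c = mesh.vertices[t.v2];
            glVertex3f(a.x, a.y, a.z);
            glVertex3f(b.x, b.y, b.z);
            glVertex3f(c.x, c.y, c.z);
        }
        glEnd();
    }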

Week 3 (June 17, 2002) - The existing source code displays the motion capture data with 3D cylinders and simple shapes. We would like to take the geometric mesh of a man, broken down into its individual components, and replace the cylinders and simple shapes with this more accurate model. Each component can be exported from Maya as an .obj file. The local coordinates of these meshes directly correspond to the skeleton .asf files.

For the first part of the week, I tried to display the left humerus (upper arm) of the man in the right starting position. I was finally able to do that by disabling some of the transformations applied to the cylinder (which used to be the upper arm of the man) and applying a translation of my own. However, when I loaded the motion data, the humerus was disjointed from the man. I then tried to move the left humerus mesh so that it is in the same position as the cylinder's starting position in its local coordinate system. That way, I could leave in place all the transformations, including modelview and projection, that are applied to the cylinder. Since the mesh has the same starting position as the cylinder, it should be oriented properly after all the transformations have been applied.

However, in the middle of the week, I discovered that collision detection cannot be applied to objects oriented simply by OpenGL commands. This is because there is a difference between displaying the object with OpenGL and actually moving the object to that position in global coordinates. In order for collision detection to be used, the object must actually be moved to the proper position in global coordinates.
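
As a small illustration of the distinction (reusing the Mesh sketch above; the function names are mine, not from the lab code): a transform issued through OpenGL only affects how the mesh is rasterized, while a collision package needs the vertex data itself to be in global coordinates.

    // Display-only: the mesh data never changes; OpenGL applies the transform
    // while rasterizing, so a collision package would still see local coordinates.
    void displayOnly(const Mesh& mesh, float tx, float ty, float tz)
    {
        glPushMatrix();
        glTranslatef(tx, ty, tz);   // affects rendering only
        drawMesh(mesh);
        glPopMatrix();
    }

    // For collision detection, the vertices themselves must be moved into global
    // coordinates (shown here with a plain translation for simplicity).
    void moveToGlobal(Mesh& mesh, float tx, float ty, float tz)
    {
        for (Vertex& v : mesh.vertices) {
            v.x += tx;
            v.y += ty;
            v.z += tz;
        }
    }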

I wasn't sure how to obtain a transformation when you have the initial and final positions and orientations, so Alla told me how to do it. Let's say you have: P0 = initial position, P1 = final position, D0 = initial orientation, D1 = final orientation. You first make a direct translation to get from P0 to P1. Then, you get the axis of rotation by taking the cross product of D0 and D1. There is a simple function that calculates the angle of rotation, and a function that produces the transformation matrix when given the axis and angle of rotation. Voila! It was so cool. I really want to take an advanced linear algebra course now.
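
Here is a sketch of that recipe in C++, assuming D0 and D1 are direction vectors that are not parallel; the rotation is taken about P0 so that P0 lands exactly on P1. The helper names are mine, and the matrix is stored column-major as OpenGL expects.

    #include <cmath>

    struct Vec3 { double x, y, z; };

    static Vec3 cross(const Vec3& a, const Vec3& b)
    {
        return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
    }
    static double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
    static double length(const Vec3& a) { return std::sqrt(dot(a, a)); }

    // Builds the 4x4 column-major transform that takes (P0, D0) to (P1, D1):
    // a rotation about the axis D0 x D1 by the angle between D0 and D1
    // (Rodrigues' rotation formula), applied about P0, followed by moving P0 to P1.
    void transformBetween(const Vec3& P0, const Vec3& P1,
                          const Vec3& D0, const Vec3& D1,
                          double M[16])
    {
        Vec3 axis   = cross(D0, D1);
        double norm = length(D0) * length(D1);
        double s    = length(axis) / norm;        // sin of the rotation angle
        double c    = dot(D0, D1) / norm;         // cos of the rotation angle
        double t    = 1.0 - c;

        double len = length(axis);                // assumes D0 and D1 are not parallel
        Vec3 u = { axis.x / len, axis.y / len, axis.z / len };

        // Rotation R, stored column by column.
        double R[9] = {
            t*u.x*u.x + c,      t*u.x*u.y + s*u.z,  t*u.x*u.z - s*u.y,   // column 0
            t*u.x*u.y - s*u.z,  t*u.y*u.y + c,      t*u.y*u.z + s*u.x,   // column 1
            t*u.x*u.z + s*u.y,  t*u.y*u.z - s*u.x,  t*u.z*u.z + c        // column 2
        };

        // Full transform: p' = R*(p - P0) + P1, so the translation column is P1 - R*P0.
        Vec3 RP0 = { R[0]*P0.x + R[3]*P0.y + R[6]*P0.z,
                     R[1]*P0.x + R[4]*P0.y + R[7]*P0.z,
                     R[2]*P0.x + R[5]*P0.y + R[8]*P0.z };

        M[0] = R[0]; M[1] = R[1]; M[2]  = R[2]; M[3]  = 0.0;
        M[4] = R[3]; M[5] = R[4]; M[6]  = R[5]; M[7]  = 0.0;
        M[8] = R[6]; M[9] = R[7]; M[10] = R[8]; M[11] = 0.0;
        M[12] = P1.x - RP0.x; M[13] = P1.y - RP0.y; M[14] = P1.z - RP0.z; M[15] = 1.0;
    }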

Week 4 (June 24, 2002) - Most of the end of last week and this week was spent actually implementing those transformations in the existing source code. I spent a long time just poring over the code to figure out what I should modify. After I put in all the modifications I thought were needed came the frustrating debugging phase, where everything should work in theory, but doesn't. After I fixed some simple errors, it basically worked correctly except that it was translated too far from where it should be. I was completely stumped. Finally, after I had checked and added printout statements for everything, I asked Alla about it. She told me to move my draw method somewhere else, and it turned out all the transformations were correct and I was just calling it at the wrong point in the sequence. That was such a relief, so I guess the hard part is over.

We meet with Prof. Hodgins on Tuesday mornings to update her on our progress and get new ideas about our projects. Thus, on Monday, Alla told me to stop working on the code and start thinking ahead about what we should do next once the code is working. The next aspect of the project, after I am able to incorporate the geometric mesh with the motion data properly, is to determine which frames we need to mark as a kinematic constraint. Basically, this will be any moment where the end effector (e.g., a hand) comes into contact with the body. There is actually a SIGGRAPH 2002 paper, "Synthesis of Complex Dynamic Character Motion from Simple Animations", that contains some information about automatically detecting kinematic constraints. However, their method determines the constraints based on the end effector being still over a certain number of frames. We want the end effector to be close to the body and still for a certain number of frames. The new faculty member in our lab, James Kuffner, suggested that I take a look at SWIFT++, a collision detection package developed at UNC.

On Tuesday, we had our meeting, and it now seems that I will be using SWIFT++. However, in order to use SWIFT++, the data needs to be converted to the proper form. Our lab meetings are on Tuesday evenings. The plan is to have a grad student present a SIGGRAPH paper at each meeting until SIGGRAPH. This time, Kiran presented the papers relating to cloth simulation. Thus, we spent most of Tuesday reading the papers, which had intense amounts of math and physics.

Week 5 (July 1, 2002) - This was a relatively short week, but I managed to update the code with additions to incorporate the SWIFT++ package. However, I am still trying to figure out why no collisions within a specified tolerance are being detected. I've already tried a simple case with two cubes, and that seems to pick up collisions when it should. I will need to try some test cases to see how SWIFT++ is applying the transformations to the geometric mesh.

On Friday, I helped Kiran, a 3rd-year grad student, with his cloth simulation project. We recorded data for different types of cloth with a 2D video image and a 3D motion capture "video" for two positions in which the cloth is held. Basically, we wanted to capture how the cloth moved depending on what type of cloth it is. In the process, I learned how to calibrate the cameras in the mocap lab (well, mostly for this specific purpose).

Week 6 (July 8, 2002) - We finally determined how to apply the transformations to the mesh correctly for each frame of the motion capture data, and how to feed this transformation to the SWIFT++ collision detection functions.  I tried out the contact determination queries for the man walking, and it appears to be working properly.  The two objects that I use for the contact determination are the two feet.  In a walking motion capture, when the feet are together, the query detects a point of contact, and when they are apart, the query reports no points of contact.  This is great news!
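
Sketched below is what that per-frame loop looks like, with a brute-force vertex-to-vertex distance test standing in for the SWIFT++ contact-determination query (the real package works on the decomposed convex pieces and is far more efficient). The helper names and the way the per-frame transforms are passed in are illustrative, not taken from the actual code.

    #include <array>
    #include <vector>

    // Applies a 4x4 column-major transform (like the one built above) to every vertex.
    void applyTransform(Mesh& mesh, const double M[16])
    {
        for (Vertex& v : mesh.vertices) {
            double x = v.x, y = v.y, z = v.z;
            v.x = (float)(M[0]*x + M[4]*y + M[8]*z  + M[12]);
            v.y = (float)(M[1]*x + M[5]*y + M[9]*z  + M[13]);
            v.z = (float)(M[2]*x + M[6]*y + M[10]*z + M[14]);
        }
    }

    // Stand-in for the contact-determination query: are any two vertices of the
    // posed meshes within the given tolerance of each other?
    bool inContact(const Mesh& a, const Mesh& b, double tolerance)
    {
        for (const Vertex& va : a.vertices)
            for (const Vertex& vb : b.vertices) {
                double dx = va.x - vb.x, dy = va.y - vb.y, dz = va.z - vb.z;
                if (dx*dx + dy*dy + dz*dz <= tolerance * tolerance)
                    return true;
            }
        return false;
    }

    // leftXforms[f] / rightXforms[f] hold each foot's global transform at frame f,
    // extracted from the motion data. Returns the frames where contact is detected.
    std::vector<int> findContactFrames(const Mesh& leftFoot, const Mesh& rightFoot,
                                       const std::vector<std::array<double, 16> >& leftXforms,
                                       const std::vector<std::array<double, 16> >& rightXforms,
                                       double tolerance)
    {
        std::vector<int> contactFrames;
        for (size_t f = 0; f < leftXforms.size(); ++f) {
            Mesh left  = leftFoot;               // copies of the rest-pose meshes
            Mesh right = rightFoot;
            applyTransform(left,  leftXforms[f].data());
            applyTransform(right, rightXforms[f].data());
            if (inContact(left, right, tolerance))
                contactFrames.push_back((int)f);
        }
        return contactFrames;
    }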

The next step was to try my current program out with other motions.  First, I assessed which frames of the motion we would like to detect, just by looking at the movie and the motion capture data.  Then, I checked whether it was possible for the collision detection program to pick up on these frames.  We have a mesh modeling artist in our lab, but everyone in the lab seems to need him to do something for each of our individual projects.  Currently, I am waiting for a mesh that is proportionally scaled to the skeleton, with separate pieces for each joint of the skeleton.

The plan for next week is to read up on morphing packages.  The next stage of our project is to find a way to morph from a human mesh to an animal mesh in such a way that we can map a correspondence between each vertex on the human and each vertex on the animal.

Week 7 (July 15, 2002) - Joel, our computer modeling artist, created a new and improved model of our motion capture actor, Rory.  This new model is broken up so that there is a separate component for each joint in the skeleton.  I had some problems with decomposing this new model, but Joel showed me some ways I could tweak the model so that it would decompose properly.

After using the new model with more motion capture videos, the collision detection is not working properly.  It seems that the transformations applied through the collision detection package are not consistent with the motion capture session displayed in our viewer.  We are checking for frames where the end effector is within a certain tolerance of the body.  The global coordinates of the end effector in the viewer differ from those in the collision detection program.  Also, the collision detection program doesn't have a graphical interface that we could compare against our viewer.  It does have a graphical interface for the decomposer, but there are some complicated compiler errors that don't seem easy to fix.

Next week is SIGGRAPH!

Week 8 (July 22, 2002) - SIGGRAPH

Day 1 (7/21/02) –
I attended the full-day course “Introducing X3D”.  X3D is the open standard for defining 3D content on the World Wide Web.  It can also be integrated with other multimedia applications.  We learned about using the software to implement 3D shapes, textures, animations, and interactive content viewable in a browser.  They also provided a basic introduction to XML and VRML, which form the basis of X3D.  There was also an interactive laboratory session where we got hands-on exposure to creating our own 3D world.  We were each given the actual software CDs.  This course was great for learning about some of the current standards in computer graphics.

In the evening, we attended a special presentation about the making of Star Wars Episode II.  There were five speakers who each had different roles in the creation of the movie.  Most of the presentation was about cloth animation and Yoda.  They discussed problems they had during simulation, such as the fringes on Jar-Jar’s clothing not moving correctly or the bangles on another character’s arm falling off.  There was also a speaker who specialized in the “beautiful explosions and carnage” in the film.  The final speaker was the animation director, who focused on the amazing “Yoda fight” and the preparation and research for that scene.  He described researching Chinese and Japanese martial arts movies in preparation for the scene, and how they arranged placement within the scene.

Day 2 (7/22/02) –
In the morning, I went to the animation theater showcase of various amateur and commercial works that use the latest computer graphics technology.  It was interesting to see the avenues that have opened up for artists and animators, although some were a little too disturbing for me.  The Vizzavi commercials were delightful and just simple fun.  Some student-made films, such as “SOS”, “Fishman”, and “The Bummer”, were also short and sweet.  There were also company-produced shorts, such as “The Coin” and the Gorillaz music videos, that were fun to watch.

In the afternoon, I attended the course on “Panic-Free Public Speaking”.  I thought this would be useful since I have terrible stage fright.  This course was directed mostly at people who would be presenting at SIGGRAPH.  They taught us meditation techniques to calm down and focus.  One of the speakers gave many helpful tips on preparation and delivery.  The instructors gave a very casual and easygoing presentation, which made the course enjoyable.  I then attended the Fast-Forward Papers Preview, where each presenter in the papers sessions gave a 50-second preview of his or her paper.

Day 3 (7/23/02) –
I attended the “Modeling and Simulation” papers session, where one of the grad students in our lab presented his paper on “Creating Models of Truss Structures with Optimization”.  I thought it was a great idea, because it integrated computer graphics with civil engineering.  He created an algorithm where the user specifies a few fixed points and the computer calculates a stable truss structure for those constraints.  There was also a paper about ductile fracture from Berkeley and CMU, and an MIT paper about a standard they created for authoring solid models.  There were amazing animations accompanying all these papers.  The animation for the ductile fracture was even included in the Electronic Theater.

I visited the art gallery and studio, where there were more tangible displays with elements of computer graphics in modern art.  It consisted of 2D, 3D, and interactive works in both traditional and new forms.  One work that was particularly impressive was "A Virtual Tour of the Cone Sisters Apartments".  The Cone sisters collected many artistic works, and the project reconstructed their apartment in a 3D virtual world.  It was displayed on a flat-panel touch screen where the user could navigate through the apartment and obtain information on each work.  Another interesting work explored connecting visual and audio stimuli with psychological responses.  They made a video of a tour through an old, abandoned hospital, with creepy sound effects at specific parts.  It was obviously trying to give the audience the sensation of walking through a “haunted” building.  There were no supernatural elements, but there were displays that could be perceived as spooky by the observer.

I also attended an exhibitor tech talk, where a company representative presented the features of its new product.  Since I am interested in biomedical applications, the tech talk I went to was about “3D visualization from molecules to immersion”.  It was a presentation by TGS, Inc. about their software, which is used in medical research and surgery simulation environments.  It was good to learn about some commercial products on the market that can be used as tools for research.

Day 4 (7/24/02) –
I attended the course “Advanced Virtual Medicine: Techniques and Applications for Virtual Endoscopy”.  This was a great introduction to the latest technology and techniques in virtual medicine and medical imaging as they relate to virtual endoscopy.  The instructor discussed diagnosis, planning, and viewing for various types of endoscopy within the body.  He talked about data acquisition, pre-processing of the data, viewing the data, and navigating through the 3D structure.  He also talked about how things could go wrong at any stage of the pipeline, possibly requiring a return to a previous step.  It was a good survey course on the latest developments in a specific topic at the intersection of biotechnology and computer graphics.

In the afternoon, I attended the papers session on “Animation from Motion Capture”, where a postdoc from our lab presented his paper in this area.  There were five papers in this session that all addressed the same problem with slightly different approaches.

In the evening, we attended the Electronic Theater, where the best works of the computer animation festival were displayed (the animation theater I attended on day 2 is also part of the festival).  There were many clips that portrayed escaping from oppression.  Amongst all the film noir and modern art pieces, the crowd favorite was the Polygon Family.  It was basically about a husband coming home late from the bar while his wife waits for him, and it turns into a boxing-match/street-fighter bout.  No fancy rendering.  No amazing CG.  Just simple bust-out-laughing fun.  It just shows that a good idea and an artist with good timing can still get a lot further than all the advanced CG effects.

Day 5 (7/25/02) –
I explored the company exhibitions and emerging technologies area.  Emerging Technologies consists of functional projects that enhance interaction between digital and human systems.  One of the more practical projects was Lewis the robot wedding photographer.  It wanders around at a wedding and takes random pictures of people, using face detection and other measures to determine a well-composed photograph.  People do not feel as self-conscious around a robot photographer, and thus the robot is able to capture people interacting more naturally.

Week 9 (July 29, 2002) - I looked into some morphing papers and a software package available from Cambridge.  However, it appears the package does not provide the information that we need, not to mention that it does not seem to read our files properly (even after conversion).  Thus, I started reading some other papers to see if we can implement the correspondence technique ourselves.

One of the papers I found looks promising.  It is called "Multiresolution Mesh Morphing".  It uses Dijkstra's algorithm and some user constraints to find a simplified base domain for each object.  It then obtains a bijective correspondence map between the two objects through the base domains.  To go from a point on the source object to the target object, a compound transformation from the source object to the source base domain, to the target base domain, to the target object is used.  Although the correspondence goes through the base domains, it is still applicable to any point on either object.
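
To make the compound transformation concrete, here is a tiny C++ sketch of the composition, assuming the three maps (source mesh to source base domain, source base domain to target base domain, target base domain to target mesh) have already been computed by the method in the paper.  The SurfacePoint representation and names are mine, purely for illustration.

    #include <functional>

    // A point on a mesh, identified by a triangle and barycentric coordinates (u, v).
    struct SurfacePoint { int triangle; double u, v; };

    using SurfaceMap = std::function<SurfacePoint(const SurfacePoint&)>;

    // Correspondence for any point on the source mesh: compose the three maps,
    // source -> source base domain -> target base domain -> target mesh.
    SurfacePoint correspond(const SurfacePoint& onSource,
                            const SurfaceMap& sourceToBase,
                            const SurfaceMap& baseToBase,
                            const SurfaceMap& baseToTarget)
    {
        return baseToTarget(baseToBase(sourceToBase(onSource)));
    }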

I have also been working on completing the final project for my CRA website and writing documentation of my summer project for Alla.

Week 10 (August 5, 2002) - We have decided that neither the package nor the morphing paper would be useful in serving our purpose.  Since this is my last week, we decided that I should focus on testing the capabilities of the contact determination system on a wide range of motion capture data.  We want it to be able to pick up on kinematic constraints with a general definition of the tolerance and of which limbs should be active in the detection system.  Until now, I had mostly been working with motion capture data focused on capturing the constraints of a hand resting on the hip and of the hands held in a T-shape.  Now, I also have data with walking and swimming motions, along with the motions from acting out Cock Robin.  The walking motion will be tricky since we don't want the arms swinging past the body to be picked up as a constraint.  However, since we are using the tips of the fingers, I don't think it will be a problem.

From what I observe in the videos and the motion capture data, it may be possible to set up the contact determination to check for frames where the end effectors come into contact with any part of the body that they can naturally reach (for example, it is probably unnecessary to check the right fingers against the right hand, or the right fingers against the right radius).  It may be insufficient to check only the velocity of each bone, since both fists may be in contact and moving at the same time (style134).  It could be useful to check the relative velocity of the two objects in contact: if one is still, then the other needs to be still to count as a constraint, and if both are moving together, it would also count as a constraint.  There may also need to be different tolerances depending on which two parts you are checking.
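
As a concrete (and entirely hypothetical) sketch of this rule: mark a frame as constrained when the candidate pair is within the distance tolerance and its relative velocity is near zero, and keep only runs of such frames that last long enough.  The thresholds, names, and data layout are placeholders, not values from the actual system.

    #include <cmath>
    #include <vector>

    struct Vec3d { double x, y, z; };

    // Per-frame sample for one candidate pair (e.g. right fingertips vs. hip):
    // the distance between the two parts and the position of each part.
    struct PairSample { double distance; Vec3d posA; Vec3d posB; };

    // Marks frame f as constrained when the parts are within `tolerance` of each
    // other and their relative speed stays below `maxRelVel` for at least
    // `minFrames` consecutive frames.
    std::vector<bool> detectConstraints(const std::vector<PairSample>& samples,
                                        double tolerance, double maxRelVel, int minFrames)
    {
        const int n = (int)samples.size();
        std::vector<bool> candidate(n, false), constrained(n, false);

        for (int f = 1; f < n; ++f) {
            // Relative speed between consecutive frames: near zero if both parts
            // are still, or if they are moving together.
            double dax = samples[f].posA.x - samples[f-1].posA.x;
            double day = samples[f].posA.y - samples[f-1].posA.y;
            double daz = samples[f].posA.z - samples[f-1].posA.z;
            double dbx = samples[f].posB.x - samples[f-1].posB.x;
            double dby = samples[f].posB.y - samples[f-1].posB.y;
            double dbz = samples[f].posB.z - samples[f-1].posB.z;
            double relVel = std::sqrt((dax - dbx)*(dax - dbx) +
                                      (day - dby)*(day - dby) +
                                      (daz - dbz)*(daz - dbz));
            candidate[f] = (samples[f].distance <= tolerance) && (relVel <= maxRelVel);
        }

        // Keep only runs of candidate frames that last at least minFrames.
        for (int f = 0; f < n; ) {
            if (!candidate[f]) { ++f; continue; }
            int start = f;
            while (f < n && candidate[f]) ++f;
            if (f - start >= minFrames)
                for (int i = start; i < f; ++i) constrained[i] = true;
        }
        return constrained;
    }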