Journal







Week 1: June 6 - June 10
Wednesday, June 8
I've finally got this page up and running. Let's see - where shall I begin? After staying at Wellesley for graduation (and seeing 98% of my friend circle walk the stage), I finally made it into Pittsburgh on Saturday, despite the MIA-RD, a lack of printing resources for my electronic ticket, and finally a delayed flight resulting in a missed connection in DC. Hong, the Ph.D. candidate I'm subletting from, was most hospitable - she picked me up at the bus stop and drove me to the supermarket. The apartment is *cute*. Watch this space for pictures.
Saturday evening and all of Sunday were spent on the couch, watching television and assorted DVDs. It was the most laid-back I had been in a long time.

Monday was my first day on the job. Prof. Hodgins told me about the various projects in the pipeline, and introduced me to some of the staff in the CMU Graphics Lab and the Motion Capture Lab.
Tuesday was pretty low key, but that evening the Graphics Lab took all of us to see the Pirates/Orioles baseball game at PNC Park. In spite of having lived in the middle of Red Sox Nation for two years, this was my first time at a stadium. And it was memorable - the Pirates made an exceptional comeback to win, a feat that I am told is rare indeed.
Wednesday...that's today...has entailed some reading, the setting up of this site, and a somewhat incomprehensible lecture on the new ASIMO robot by a Honda representative from Japan. Aside from the language barrier, we did get to see some movies of ASIMO moving around, and the lecture was quite informative for a rookie like me.

Friday, June 10
Today Bilge, a graduate student who is running the experiments on the Asimo robot, gave me about 200 pages of reading to help me make up my mind about what I want to be involved in - human-robot interaction experiments or graphics stuff. Hmm...I predict some extended trips to Starbucks this weekend.



Week 2: June 13 - June 17
Tuesday, June 14
Jessica is away this entire week. She has assigned me a starter project - she wants to look at motions that resemble the stimuli that they use, and needs me to use Maya (an application I am still quite unfamiliar with but can learn more about) to produce these dot patterns from motion capture files. I'm supposed to work with Marcella Tanzil, another DMP student, who is arriving here today.
Bilge wanted to use me as a test subject in his experiment, so today I played the game against the ASIMO robot. It was hard! He did not play as expected, though - he was supposed to say things based on what I was doing, but he didn't say a word the entire time.
Now that I'm done taking the test, I can finally be told exactly what the experiment is about. In Bilge's words, he is "measuring the difference in how people perceive the robot before and after the interactions and also between two experimental conditions", i.e. when Asimo plays competitively (like he did in my case) vs. collaboratively.
Friday, June 17
Marcella and I are stuck on one of the types of motions Jessica wants us to produce. She said "scrambled motion", but we're not sure what this is supposed to look like, and email isn't exactly conducive to her explaining it to us from where she is. Jessica will be back on Monday, and we'll be able to clarify what she wants us to do.



Week 3: June 20 - June 24
Monday, June 20
We finally figured out what it was that we were supposed to do. We did the required online research to write a MEL script for it, but were not entirely sure we had it right. Unfortunately, the nature of the scrambling is such that Jessica could not tell right away whether it was what she wanted.
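To spell out what we were going for, here is a rough sketch of the idea as I understand it from the point-light literature - illustrative Python, not our actual MEL script, and the coordinate range is just a placeholder: each dot keeps its own trajectory but is moved to a random starting location, so the overall human configuration is destroyed.

    import random

    # Illustrative sketch only (not our actual MEL script): keep each marker's
    # own motion but rigidly shift it to a random starting location.
    def scramble_motion(trajectories, low=-1.0, high=1.0, seed=None):
        """trajectories: one list per marker, each a list of coordinate tuples (one per frame)."""
        rng = random.Random(seed)
        scrambled = []
        for traj in trajectories:
            start = traj[0]
            new_start = [rng.uniform(low, high) for _ in start]  # random placeholder location
            offset = [n - s for n, s in zip(new_start, start)]
            # shift every frame by the same offset, so the dot still moves the same way
            scrambled.append([tuple(c + o for c, o in zip(frame, offset)) for frame in traj])
        return scrambled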
Tuesday, June 21
Today I went to my first weekly Graphics Lab meeting. One of the Ph.D. students here gave a presentation on a paper that is going to be presented by some German researchers at SIGGRAPH, and spent the rest of the meeting poking holes in their work. What I understood of it was pretty cool, and I must admit I was both impressed and intimidated by the fact that the people here could find so many faults with the paper.
Wednesday, June 22
We finally got the scrambling working! I wrote a script that consolidates and executes the entire process, and Marcella rendered the images to make the movies. Jessica wants us to figure out a way to insert invisible occluding objects into the movies, to simulate the presence of the human body. Studies have shown that human subjects are more likely to correctly identify a dot-motion as human if presented with the appropriate occlusions that would be produced on the dots were there a human body present.

We tried inserting a solid semi-cylinder, the same colour as the background, but could not get rid of the shading that gives away its presence when rendered. Katya, another DMP student who works in the Motion Capture Lab, said that we could make a black hole in Maya that would "eat up" anything behind it. This would, however, necessitate our using a black background for our movies. Jessica asked us to try to figure out a way to keep the background grey, so we are now looking into using alpha channels and compositing rendered images for the foreground and background.
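The arithmetic behind the compositing idea is simple enough - here is a minimal Python/NumPy sketch of the standard alpha "over" operation, assuming the foreground render gives us straight (un-premultiplied) RGB plus an alpha channel; this is not necessarily the exact pipeline we will end up using in Maya.

    import numpy as np

    # Minimal sketch of the alpha "over" composite: wherever the foreground's
    # alpha is zero, the grey background shows through.
    def composite_over(fg_rgb, fg_alpha, bg_rgb):
        """fg_rgb, bg_rgb: float arrays of shape (H, W, 3) in [0, 1];
        fg_alpha: float array of shape (H, W) in [0, 1]."""
        a = fg_alpha[..., np.newaxis]           # broadcast alpha across the colour channels
        return fg_rgb * a + bg_rgb * (1.0 - a)

    # e.g. a uniform grey background: bg = np.full_like(fg_rgb, 0.5)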



Week 4: June 27 - July 1
Wednesday, June 29
Yesterday, in the Lab meeting, we went through the dry runs of two SIGGRAPH papers that are being presented by researchers at CMU this year. It was interesting to get a peek into the behind-the-scenes work that goes into presenting a paper at a big conference like SIGGRAPH, with people discussing a buddy-system to make sure all the presentations went smoothly and multiple people had back-ups of the presentation files. It was also nice because not everyone in the room was familiar with everything that was being talked about, so I felt less out of place. The comments were very low-level, and included suggestions for increasing the prefatory background provided so that non-experts would not be scared away - a proposition that I whole-heartedly support.

Today, we went down to the Mocap lab to take a look at motions that would, when rendered into dot-patterns, serve our research purposes.

Thursday, June 30
Jessica is going to be away again starting tomorrow, and will be back on Tuesday. She has given us a long list of things to do to keep ourselves busy while she is gone.
  1. Figure out a way to create the occlusion effect I mentioned earlier.
  2. Prepare a simple website with links to movies of the interesting motions we selected, to be sent to the people at UVA for their comments.
  3. Take the NIH test.
  4. Look for cites of Perception of Human Motion With Different Geometric Models (1998), by Jessica K. Hodgins, James F. O'Brien and Jack Tumblin, IEEE Transactions on Visualization and Computer Graphics, to see if we turn up anything new in the graphics literature.
  5. Read about how to put together a CMU IRB protocol in case we get to run subjects.
  6. Help Mo in the MoCap Lab with rendering the "wrong" blob movies, and read about the scaling laws behind that experiment. In Jessica's words,
    "Here's a short version of the idea. The biomech folks have scaling laws which they believe are reasonable approximations for how motion should be scaled to go from a small creature (who scampers) to a big creature (who lumbers). The psych people have scaled motion badly and observed that it "breaks" the motion (subjects don't like it or find it compelling). We are trying to demonstrate that if it is scaled badly, yes it breaks the motion, but if it is scaled correctly, it doesn't."
    Here is a previous paper by Jessica, which uses these scaling laws.
  7. Take one of the best/most famous dot papers and see who has cited it recently to see what interesting things have been done in this area.
Well, looks like we've got our plates full!
Friday, July 1
I came in early today to get a headstart on the work in store. It was strange walking around Newell-Simon Hall with no one else around. Today, we
  1. Figured out a way to create the occlusion effect. Turns out there is a simple texture in Maya that, when applied to the occluding object, causes it to blend perfectly with the background. Creating the actual occlusive form was a pain but I managed, and Marcella rendered the movie.
  2. Prepared a simple website with links to movies of the interesting motions we selected, to be sent to the people at UVA for their comments.
  3. Registered with the NIH and began reading the first lesson on the test.
  4. Looked for cites of Perception of Human Motion With Different Geometric Models (1998) and turned up five relatively recent papers.



Week 5: July 4 - July 8
Monday, July 4
It's the 4th of July! I really didn't want the long weekend to be over, but I am looking forward to another week of work. Jun (my roommate) and I watched the fireworks Downtown from atop Schenley Hill - it wasn't the best of views but there were a few law-breakers up there with their own pyrotechnics, and we got quite a show!
Tuesday, July 5
We spoke to Mo about the animated blobs, and got the scaling done. Now all that's left is to render them. Since my machine's graphics card capabilities are not nearly good enough to render large amounts of data, Marcella will have to do this part herself, leaving me with very little to do for the rest of the day.
Wednesday, July 6
In yesterday's lab meeting, we saw two more SIGGRAPH paper dry-runs. As usual, I understood very little of the actual subject matter, but I think I'm getting the hang of scientific presentations and how they need to be structured. All DMP students in the lab are expected to make 10-minute presentations at a lab meeting in early August, so hopefully I'll have enough to talk about by then and be able to put together a coherent, interesting presentation.

Jessica is finally back, so we showed her what we had done since she left. We were still not satisfied with the occlusion effect. What I had done was to insert cylinders between the balls marking the joints, but it became apparent that this was not sufficient because

  1. It necessitated fixing the camera angle before the occlusive effects could be seen
  2. The occlusion itself was rather unnatural and not compelling at all. In fact the non-occluded dots were more easily identifiable for what they were than our "occluded" ones, suggesting (based on previous studies) that our occlusion was grossly inaccurate.
One suggestion we received was to use backfacing, a technique that allows animators to render only those portions of surfaces that are facing the camera. Maya allows for backfacing, so by reversing the normals of the occluding objects' surfaces, we could trick it into rendering only those surfaces that were facing away from the camera, irrespective of the camera angle. This would produce the desired occlusion without requiring us to decide upon the camera angle beforehand. Another advantage of using this technique, I soon realised, was that I could use a whole human body instead of strategically placed cylinders to make the effects of the occlusion more realistic. Thus, two birds were killed with one stone.
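To spell the trick out for myself: whether a face counts as "front" or "back" comes down to the sign of a dot product, so flipping the normal flips the classification. A tiny illustrative Python check (not Maya's actual internals):

    # A face is front-facing when its normal points back toward the camera,
    # i.e. the dot product of the normal with the camera-to-face direction is
    # negative. Reversing the normal flips the sign, so only the far side of
    # the occluding body gets rendered.
    def is_front_facing(normal, face_point, camera_pos):
        """normal, face_point, camera_pos: 3-tuples of floats."""
        view_dir = tuple(p - c for p, c in zip(face_point, camera_pos))  # camera -> face
        dot = sum(n * v for n, v in zip(normal, view_dir))
        return dot < 0.0

    # flipped = tuple(-n for n in normal) turns a front face into a back face, and vice versa.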
Friday, July 8
The back-faced human shell required some tweaking because the more intricate parts like the fingers didn't react too well to the back-facing - bits and pieces of hand and foot would miraculously materialise from time to time. I solved the problem by replacing the hands and feet with ellipsoids, which were much better behaved under backfacing.

More progress on the animations for the scaling laws experiments - the general consensus seems to be that the incorrectly scaled blobs don't look incorrect, they just look very tired or very energetic. We tried scaling them more for a different set of motions, with a fixed camera and the different-sized blobs all travelling along the same line of motion but with varying speeds.

We have so far been shying away from human motion because of all the preconceptions that come with it, but Jessica decided to make a set of human motions and see how they look even if we don't end up running them in an experiment. Elyse, a DMP student working in the MoCap Lab, is working with some emotional walks. We are to get hold of one of the more exaggerated ones - she has happy, sad, afraid and confident - and render it with a fixed camera, checkerboard ground plane, and one of the generic models. Jessica said that "confident" would be a good choice, but that motion was not cleaned yet, so we did "afraid" (shown below). On Monday we'll go down to the MoCap Lab and clean the "confident" guy up so we can use him.



Week 6: July 11 - July 15
Monday, July 11
This Saturday, Amanda Rainer, Sravana Reddy, Daisy Lee and I (all DMP students) went to watch Fantastic Four at Loews on the Waterfront. It was quite enjoyable, and the new Ice Age 2 trailer was a special treat. We then had a late lunch at a Mexican place near Amanda's. Sunday was a more laid-back, indoors day for me. All in all a very enjoyable weekend.
Tuesday, July 12
We had our weekly meeting with Jessica today. She asked us to finish scaling the afraid and confident walks so she could show them to our collaborators. She also asked us to polish up the occluded and scrambled motions. I added the noise we had talked about earlier so that the scrambled motion did not start from or ever return to a human shape. It is, in my opinion, quite unidentifiable now.

Once Jessica ships the movies off to the folks at Virginia, we'll have to wait for their comments before taking further steps.

Thursday, July 14
This week's lab meeting saw two more SIGGRAPH paper dry runs - one of them was on a new algorithm for producing papercutting instructions by discovering and analysing symmetries and redundancies to identify folds. I thought it was pretty neat.

We discovered a problem with our scaled motion - apparently my (uprooted public cluster graphics-hardware-incompetent) computer misbehaved and lied to us when it said that the motion had been scaled, because the final rendered movies showed the exact same speeds. We're now going to have to re-render the motions after scaling them properly.



Week 7: July 18 - July 22
Tuesday, July 19
HP Book VI came out this Saturday!!! Of course, I'm too cheap to buy it, but hope to be able to mooch it off of someone soon enough. Most fans I know are all about sharing the Potter love.

Yesterday was devoted almost exclusively to rendering movies. It's a long, boring process that involves setting everything up and leaving the machine running for a few hours. I read the papers Jessica recommended, on the influence of speed on the perception of motion as human. I also made some progress on the NIH test, which I am told we will have to pass before we can administer even online surveys.

And I finished Book V :-D

Friday, July 22
In rendering our blob-movies, we tried to keep the camera angles stationary. Since some of the blobs have been scaled by a factor of 10, the smaller blobs are tiny if we set the camera to fit the big ones. Unfortunately, if we alter the camera angle for each size and motion (the movie length is constant, so the distance travelled varies), the checkerboard flooring is the only visual cue to the blobs' relative (and absolute) sizes. This risks being insufficient, so we are thinking of re-rendering the movies with a model of a palm tree in the background to serve as a size reference.

Our research group has started putting together the online survey on the scaled blobs. The study is going to be between rather than within subjects - each subject sees only one movie, so it becomes important that they know the size of the checkerboard in absolute terms rather than relative to the other movies they have seen. This reinforces the need for the palm tree (or some other such model) in the background.



Week 8: July 25 - July 29
Tuesday, July 26
Jessica has decided that the study is to be done within subjects after all. We had a meeting with Sara Kiesler and Matthew Marge to finalise the details of the survey. Hopefully, we'll have it set up before we leave for SIGGRAPH on Friday (I can't believe it's already here!).

We're going to need Maya models of a palm tree and a beach chair. Katya said she would work on these and get back to us, since neither Marcella nor I have much modelling experience.

Thursday, July 28
Katya has given us a beautiful beach scene to work with. We're going to render a still frame with the three blobs in a line (coincident with the path they take in the movies), alongside the tree and chair for size comparison. This will be shown once at the beginning of the survey.

The lab seems to be emptying out as people leave for SIGGRAPH. We leave tomorrow, although the conference doesn't really start until Sunday. I'm staying with a friend from college for the first couple of days, and over the weekend after the conference, so that I can make the most of my trip across the country :-) It should be fun!



Week 9: Aug 1 - Aug 5
Monday, Aug 1
So we're at SIGGRAPH! Sunday was fairly low-key. I attended two courses - one on OpenGL and the other on the making of the CG film Madagascar. Then there was the Fast-Forward, with 50-second (often humorous) presentations by the authors of all the papers being presented over the duration of the conference. A very nice first day indeed. The hotel is quite nice, although not quite as fancy as it appears on the website :-)

Today, I attended a course on Crowd and Group Animation. Then there was the keynote address delivered by the one and only George Lucas. The lines were long and the halls overflowed, but I was a bit disappointed that the "address" took the form of an interview-like Q&A session instead of a direct speech. The host was not in top form, and people actually began to trickle out toward the end.

We also had our electronic theatre tickets for tonight. The movies shown ranged from technical how-we-did-it clips from the movies to short CG films from artists around the world. All in all a great experience.

Tuesday, Aug 2
Today I attended a course which provided an introduction to quantum computing, a very new field of computing indeed, but one which will inevitably assume much importance once Moore's Law runs up against the physical limit on how many transistors can fit on a microchip and computation reaches the scale of individual atoms. The ideas presented were quite revolutionary and not hard to grasp, at least on a basic level.

I also attended a session on the making of The Polar Express, and a sketch on production rendering in movies like Star Wars and Stealth. There was also a session on "Beautiful Things" that presented a number of new techniques for creating and analysing patterns and virtual structures of different kinds.

Finally, there was a Star Wars retrospective program from Industrial Light and Magic, George Lucas' production company, which guided us down memory lane and back again with presentations by technical artists who had worked on the original series as well as the new movies. It was refreshing to witness the creation of some of the most radical special effects of the time with revolutionary miniatures and real-life mini-explosions, and to see that while a lot had changed with the advent of CG effects, the underlying desire to outdo the conventionally possible had been there from the very beginning.

Wednesday, Aug 3
Third day! This morning I attended a sketch session on novel interfaces, and a panel on believable and AI-driven characters. I then took time to visit the exhibition and picked up a haul of candy at the DreamWorks booth. In the afternoon, I went to another sketch on new techniques for virtual and augmented reality, and another panel, "From University Lab to Movie Screen and Back Again", with representatives from major production houses and academic institutions, which I found particularly insightful.

Tonight was the "Cyber Fashion Show". Let's just say that it was...erm...interesting. I'm so glad I was with Harriet, so we could roll our eyes at each other every time something particularly outrageous came on.

Thursday, Aug 4
And it's the last day of SIGGRAPH! I can't believe it's over so soon. This morning I went to a panel on futuristic display systems - the "ultimate" display, so to speak. We came to the conclusion that the ultimate display would be some kind of self-regenerating, paper-like material that could be plastered onto any surface like wallpaper and, unlike paper, be re-usable. I then attended my first and only paper session at SIGGRAPH, "Styles of Human Motion". Surprisingly (in a good way), most of the presentations were well-structured, straightforward and pretty easy to understand, even for someone with as little experience as me. Apart from the math, of course :-P Once they pulled out the magic equations, all was lost. Or rather, I was.

I made my way to the Emerging Technologies fair and got to see some pretty exciting stuff, including several haptic devices such as a virtual canoe and hang-glider, and a screen from Microsoft that allowed the user to scan and manipulate documents by hand. I also went to two sessions at the animation theatre, which was showing various themed collections of short films every half hour. Finally, I ended my first SIGGRAPH at a sketch on autonomous characters, including intelligent passengers and characters that could realistically respond to a dynamically changing environment.

It has been a packed five days, and I have learned so much about the world of graphics. The conference has definitely increased my interest in the field manifold.



Week 10: Aug 8 - Aug 12
Wednesday, Aug 10
I spent the weekend after SIGGRAPH in LA with a friend, and got to explore the city, from Beverly Hills to Hollywood and Santa Monica. I am now back in my old stomping ground at CMU's Newell-Simon Hall. Jessica is in Japan for the week and hasn't given us anything to do. Yesterday's lab meeting was a very informal SIGGRAPH debriefing, with members presenting their views on what they thought were the best and worst presentations. Next Tuesday, we DMP students are to deliver brief 10-minute presentations on our work from the summer. Marcella and I have a good deal - we get to split 15 minutes - but I am still a little nervous about presenting, since we have been working on so many little projects. I've started preparing my half of the slides (we decided that a division of labour would be best), and am well on my way to finalising what I will say, but we're going to have to rehearse it together a couple of times at the very least.
Friday, Aug 12
We finally took the NIH test and handed in our certificates to be submitted for approval with the IRB protocol. Once that has been approved, the study will be ready for release.

We're now working on movies for re-running the blob experiment with the correctly scaled chair and tree in the background, as well as the movies of the human backflip (shown below).

We require the incorrect scalings to force gravity g (original value = -9.81 m/s²) outside the range -12.00 m/s² < g < -8.00 m/s². The way we calculate gravity is as follows:
The units of gravity are m/s².
If we're scaling the dimensions (in m) of the figure by x, then we should (in the correct condition) be scaling time (in s) by sqrt(x), since x/(sqrt(x)*sqrt(x)) = 1.
Thus, for the correct scalings, gravity does not change.
For the incorrect scalings, however, gravity changes by a factor of x / (incorrect time scaling)².
So the incorrect gravity in each case can be calculated as x / (incorrect time scaling)² * -9.81 m/s².
The resulting gravities of our current scalings (x = 1.5 and x = 0.66) are well outside this range, so we are looking into the possibility of reducing the scaling to improve the camera angle.
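To make the arithmetic concrete, here is a tiny Python sketch of the calculation (working with magnitudes; the "time left unscaled" case is just a hypothetical example of an incorrect scaling, not necessarily the one we are actually using):

    import math

    G = 9.81  # magnitude of gravity, m/s^2

    # Lengths scale by x and time by t, so acceleration scales by x / t**2 and
    # the apparent gravity becomes (x / t**2) * 9.81 m/s^2. With the correct
    # time scale t = sqrt(x) it stays at 9.81.
    def apparent_gravity(size_scale, time_scale):
        return (size_scale / time_scale ** 2) * G

    for x in (1.5, 0.66):
        correct = apparent_gravity(x, math.sqrt(x))  # always 9.81
        wrong = apparent_gravity(x, 1.0)             # hypothetical incorrect choice: time left unscaled
        print(f"x = {x}: correct -> {correct:.2f} m/s^2, unscaled time -> {wrong:.2f} m/s^2")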

We thus have three studies in all - the blobs without the background objects, the blobs with the background objects and the humans (with the background objects for size reference). Although I won't be around to see the results, our goal is to be able to show that

  1. scaling motion correctly with size matters (hopefully this will be supported by all three studies)
  2. adding objects for scale increases the effect
  3. the effect is stronger for human motion than for non-humanoid motion (such as that of the blob)
Our presentation is coming along, but we still have a lot to do. We're probably going to have to come in over the weekend.


Week 11: Aug 15 - Aug 19
Tuesday, Aug 16
We make our presentations today! Marcella and I are just adding the finishing touches to ours. I am not terribly nervous, although I will admit I was happy to see that fewer people than usual will be present tonight.
Wednesday, Aug 17
The presentation went quite well, and in some ways I feel as though it marked the unofficial "end" to my stay here. There were a few questions thrown out that had to be deflected, and Jessica had to come to our rescue more than once, but it was essentially a smooth affair. These last three days are going to be spent largely on organising my files and making copies so that they can be easily accessed even after I'm gone and my machine wiped.
Friday, Aug 19
It's my last day! Walking to work today was bitter-sweet; I have to admit I am excited to be returning to school soon and seeing all my friends again.

I've put the finishing touches to the beach scene for the second blob study, and am quite satisfied with the result (seen below).

Since I won't be around to see the second and third blob study carried out, it is important that I let someone know where all the files are. I've organised my folder and prepared a description file for easy reference. Whoever needs to use my files will no doubt find this useful.


