Week 1: Introduction to the ArticuLab
I arrived in Pittsburgh on Saturday, May 25th and managed to successfully navigate myself and my enormous suitcase to Carnegie Mellon's Stever House (where I will be staying for the duration of my time here). My dorm room is very nice - much nicer, in fact, than my Duke apartment - with the added bonus of AC. I spent most of the weekend unpacking, going to Target (to buy things I'd neglected to bring with me from California), and mooching around in my (still roommate-less) dorm room.
Monday was a holiday, which meant more mooching around. I also finished reading more of Dr. Cassell's research articles, particularly these two:
The first is an overview of how we humans use gestures alongside speech to give directions, covering the different types of gestures involved. The second describes how the ethnicity of embodied conversational agents affects their interactions with children of different races.
First day at the ArticuLab! I was told to arrive around 10 AM. After making my way to the appropriate building (which is, fortunately for me, very close to Stevers), I met Samantha, the graduate student who heads the research on the Alex project. I also met Shannon, the other DREU student who will be working with the ArticuLab this summer. We were introduced to the other members of the lab: Dave (who heads the Rapport project), Evelyn (the lab manager), Callie (another graduate student working on Alex), Nikita (visiting from India), Zhou (a PhD student), and Anders (the lab's research programmer). For more information about everyone at the lab, see the People page on the ArticuLab website. Everyone I met was incredibly friendly and willing to help me; but the best part was definitely everyone's enthusiasm for the research going on at the lab!
After this introduction to the lab, Shannon and I spent the rest of the day finishing our IRB training, which is necessary because we work in a lab that involves live participants. The IRB training takes a surprisingly long time to complete - in fact, filling out the forms took up the majority of our day. Once the IRB forms were complete, we read a few articles about Alex. The articles can be found at the following links:
On Wednesday, we were introduced to our first set of mini-projects. In a few weeks, Samantha would be conducting a user study testing middle school students' comprehension of verb morphology (specifically, whether ethnicity played a role in their comprehension). Shannon would be responsible for building the user study's application. I, on the other hand, would be working with Alex, the embodied conversational agent that Samantha often used in her work. Appearance-wise, Alex is a racially and gender-ambiguous child (approximately 6-8 years old). Alex is able to interact with other children through a Wizard of Oz system - that is, Alex interacts with a real child through a computer screen with input that is controlled by a researcher. The researcher acts like the Wizard of Oz from L. Frank Baum's popular book, facilitating the interaction between Alex and the child.
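In case the Wizard of Oz setup sounds mysterious, it can be sketched as a very simple control loop (everything here is hypothetical - the command names and behaviors are mine, not the lab's actual system):

```python
# Hypothetical sketch of a Wizard-of-Oz control loop: the hidden researcher
# (the "wizard") picks behaviors, and the on-screen agent performs them,
# so the child believes they are talking to an autonomous character.
BEHAVIORS = {
    "greet": "wave and say hello",
    "nod": "nod in agreement",
    "prompt": "ask the child a question",
}

def wizard_loop(read_command, perform):
    """read_command() stands in for the wizard's control UI (e.g. buttons);
    perform() stands in for the agent's animation/speech playback."""
    while True:
        cmd = read_command()
        if cmd == "quit":
            break
        if cmd in BEHAVIORS:
            perform(BEHAVIORS[cmd])
```

The child only ever sees the output of `perform`, which is what makes the illusion of autonomy work.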
Currently, Alex runs on the game engine Panda3D, using the SmartBody platform from USC's Institute for Creative Technologies. That is, while Panda3D renders Alex and runs his interactions with a user, the character animation itself is driven by the SmartBody platform. Unfortunately, Panda3D is no longer maintained. As such, I was tasked with establishing Alex in Unity3D, which is a more recent and fairly popular game engine.
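For a taste of what "driving" a SmartBody character looks like: SmartBody characters are typically controlled with BML (Behavior Markup Language) messages, small XML snippets describing what the character should do. A minimal request might look something like the following (the element names follow the BML standard, but the character's lines and the exact attributes are my own illustration, not taken from the Alex system):

```python
import xml.etree.ElementTree as ET

# Illustrative BML message asking a character to speak and nod.
# The speech text and ids are made up; only the element names
# (<bml>, <speech>, <head>) come from the BML standard.
bml = """<act>
  <bml>
    <speech id="s1" type="text/plain">Hi, I'm Alex!</speech>
    <head id="h1" type="NOD" start="s1:start"/>
  </bml>
</act>"""

# Parse it the way a behavior realizer conceptually would.
root = ET.fromstring(bml)
```

The game engine (Panda3D today, Unity eventually) is then responsible for turning the motion SmartBody computes from messages like this into pixels on screen.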
I didn't (and still don't) expect my task to be easy. Although I know Unity fairly well, the task required that I also understand how SmartBody interacts with Unity. Moreover, I would need to use other components of ICT's Virtual Human Toolkit that enable a virtual being to respond to text/speech input, lip-sync, and generally interact with users. The entire underlying architecture of the SmartBody system (and how it interacts with Unity) can be found here.
I started on my task by downloading Unity Pro on my assigned lab computer. Then, I looked through all of the Alex-related assets that were available to me (models, animations, textures, etc.). As a simple demonstration, I imported a model of Alex and his classroom environment into Unity. At the same time, I started downloading the (rather enormous) VHToolkit. Throughout the day, I read about the VHToolkit and SmartBody on the VHToolkit website (linked earlier).
On Thursday, I started experimenting with Unity and Maya. This included importing all of the Maya files for Alex into Unity and trying to set them up to mirror the Alex program as run through Panda3D. I don't know much about Panda3D, but I imagine it's much harder to set up scenes in Panda3D than in Unity. From what I've heard about Panda3D, it sounds like almost everything is done through scripting. On the other hand, all of my experiences with Unity have been very user friendly - a lot of options are click, drag, and drop. I tried matching the placement of objects in my Unity scene with the coordinates listed in the Panda3D Python script; however, the coordinates don't translate directly - Panda3D defaults to a Z-up, right-handed coordinate system, while Unity uses a Y-up, left-handed one.
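For positions, at least, the fix for that axis mismatch is just a swap (rotations need more care, and this assumes the default Panda3D frame rather than anything specific to the Alex scripts):

```python
def panda_to_unity(pos):
    """Convert a Panda3D position (X right, Y forward, Z up; right-handed)
    to Unity's convention (X right, Y up, Z forward; left-handed).
    Swapping the Y and Z axes handles both the up-axis change and the
    right-to-left handedness flip in one step."""
    x, y, z = pos
    return (x, z, y)

# A point 2 units "forward" and 3 units "up" in Panda3D
# becomes 3 units up (Y) and 2 units forward (Z) in Unity.
print(panda_to_unity((1.0, 2.0, 3.0)))  # -> (1.0, 3.0, 2.0)
```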
At any rate, I was able to set up the basic Alex scene in Unity, although I haven't included any of the animations. I expect those will come later when I learn how to use SmartBody to work with Alex.
On Friday, the VHToolkit finished downloading and I started experimenting with it. (The VHToolkit is huge, so the download alone took a few hours.) Once I had it, I opened up the Unity/VHToolkit test files to read through the packaged scripts. Despite my experience with Unity, the task at hand is one that I'm not sure I'm entirely equipped to deal with. I think I will need to play around a lot with the given scripts and assets to figure out how Alex will work in the SB/VHT/Unity interface.