Since the previous week, I have a few new group members. We are all working on the privacy interfaces project, but we have separate goals. I am working on the user interfaces, Alex is working on localization (how the robot accurately knows where it is), and Penn is working on image filters and related products (such as a map overlay for my GUI). The first part of the week was focused on solidifying and presenting our projects to the rest of the REU groups. I think this was useful in finding my place in the overall project scope, and I now have a concrete idea of what I should be able to accomplish during my time here. Alex is mostly helping with the back-end of the UI, but because my project is in two parts, I can easily switch to the other independent half if one side gets stuck or if I am waiting for someone else to pass along their part.

We also moved forward toward our smaller goal of getting an operator performance paper published. We went into the library to scope out locations for our study. We will be using physical redaction (covering up unwanted objects) to demonstrate how privacy can be protected while using a robot. The spaces we selected were rather small, so I may need to modify my interface plans accordingly. In the short term, it is faster to create a very specific solution to our navigation problem, but over the long term of the project, a more open-ended, multifunctional solution would be better. My group is leaning toward the specific solution, so I will have to put in extra effort on the side, or later on, to make this utility more general purpose.

[Image: First Navigation Example]

I was also able to make a very basic UI that streams live data from the robot's Kinect camera. It takes the same data that rviz, a visualization package for ROS, displays. In theory, I should be able to apply the same approach to other data sources, such as laser scans, odometry, and map data. It is very exciting to start pulling the pieces together, and I continue to be on track with my work. Next week, I want to look into getting the map incorporated into my GUI. That way, I will have both the image data and the map data going at the same time, which is what I wanted to have by the end of the fourth week.
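
For anyone curious what "streaming the Kinect camera" looks like in code, here is a minimal sketch of the idea: subscribe to the Kinect's RGB image topic over ROS and show each frame in a window. This is not my exact UI code; the topic name ("/camera/rgb/image_color") and the use of cv_bridge/OpenCV are assumptions for a typical Kinect setup and may differ on our robot.

#!/usr/bin/env python
# Minimal sketch: view the Kinect's RGB stream from ROS.
# Assumes a standard Kinect driver publishing sensor_msgs/Image on
# "/camera/rgb/image_color"; the real topic name may differ.
import rospy
import cv2
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

bridge = CvBridge()

def image_callback(msg):
    # Convert the ROS Image message into an OpenCV BGR frame and display it.
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
    cv2.imshow("Kinect RGB", frame)
    cv2.waitKey(1)  # give the OpenCV window a chance to refresh

if __name__ == "__main__":
    rospy.init_node("kinect_viewer")
    rospy.Subscriber("/camera/rgb/image_color", Image, image_callback)
    rospy.spin()

The same pattern should carry over to the other data I mentioned: swapping the message type and topic (e.g., sensor_msgs/LaserScan, nav_msgs/Odometry, or nav_msgs/OccupancyGrid for the map) is most of the work, with the rest being how each one gets drawn in the GUI.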