In this first week, I arrived at Oregon without much of a hitch. We didn’t have very much to do except practice with the ROS tutorials. Apparently, an NSF REU group will be joining me next week, and I will be a part of it, so I think much of my direction will take shape next week rather than this one. I’ve worked through the tutorials and become acquainted with ROS, but beyond that I’m not yet sure what my main goal for the overall project will be.

ROS uses both Python and C++, and even though I am more comfortable in C++, most of the lab programs in Python for efficiency and a cleaner codebase, so I have been brushing up on Python as I work through the tutorials. By the end of this week, I was able to remotely operate a TurtleBot, generate a map from its laser scan and odometry data, and use that map to let it navigate autonomously around the lab. As far as interfaces are concerned, I am beginning to think about the best way to display this map data to a user. In its current state, the map looks like a collection of dots that line up to form walls, which can be difficult for a new user to interpret.
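To get a feel for the problem, here is a minimal Python sketch of one way to pull that map out of ROS and render it as a plain grayscale image. It assumes the map is published as a nav_msgs/OccupancyGrid on the standard map topic (the default for gmapping and the navigation stack) and uses PIL purely for illustration:

```python
#!/usr/bin/env python
# Minimal sketch: subscribe to the occupancy grid and save it as a
# grayscale image. Topic name and PIL output are illustrative choices.
import rospy
import numpy as np
from nav_msgs.msg import OccupancyGrid
from PIL import Image

def map_callback(msg):
    # data is row-major; values: -1 = unknown, 0..100 = occupancy probability
    grid = np.array(msg.data, dtype=np.int8).reshape(
        msg.info.height, msg.info.width)
    img = np.full(grid.shape, 127, dtype=np.uint8)  # unknown -> gray
    img[(grid >= 0) & (grid < 50)] = 255            # likely free -> white
    img[grid >= 50] = 0                             # likely occupied -> black
    # Row 0 of the grid sits at the map origin, so flip for a right-side-up image
    Image.fromarray(np.flipud(img)).save('map.png')
    rospy.loginfo('saved %dx%d map image', msg.info.width, msg.info.height)

if __name__ == '__main__':
    rospy.init_node('map_to_image')
    rospy.Subscriber('map', OccupancyGrid, map_callback)
    rospy.spin()
```

Even something this simple reads more clearly than raw dots: free space shows up as white, walls as black, and unexplored areas as gray.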

I know that I want to work with Cindy on interfaces, but there is no existing infrastructure or code here for building an interface/GUI on top of ROS. Most of what I’ve seen so far are implementations concerned with the physical things the robot can see or do as they pertain to privacy, and much less with how the user actually interacts with the robot.
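To convince myself that a GUI can talk to ROS at all, here is a rough sketch of the basic plumbing: a Tkinter window whose buttons publish velocity commands to the robot. The topic name, speeds, and layout are all placeholders rather than anything the lab actually uses:

```python
#!/usr/bin/env python
# Rough sketch of a ROS-backed GUI: Tkinter buttons that publish Twist
# messages. cmd_vel is the common convention; some TurtleBot setups remap it.
import Tkinter as tk  # "tkinter" on Python 3
import rospy
from geometry_msgs.msg import Twist

rospy.init_node('simple_gui_teleop')
pub = rospy.Publisher('cmd_vel', Twist, queue_size=1)

def send(linear, angular):
    # Publish one velocity command; the base stops once commands cease
    cmd = Twist()
    cmd.linear.x = linear    # m/s forward/backward
    cmd.angular.z = angular  # rad/s turning
    pub.publish(cmd)

root = tk.Tk()
root.title('TurtleBot teleop')
tk.Button(root, text='Forward', command=lambda: send(0.2, 0.0)).pack()
tk.Button(root, text='Left', command=lambda: send(0.0, 0.5)).pack()
tk.Button(root, text='Right', command=lambda: send(0.0, -0.5)).pack()
tk.Button(root, text='Stop', command=lambda: send(0.0, 0.0)).pack()
root.mainloop()
```

Something in this spirit, combined with a rendered map like the one above, might be the seed of an interface a new user could actually understand.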