The Final Week

As this is my last week, I have decided to focus more on completing documentation, ensuring that whatever I leave behind can easily be picked up by the next person working on physical privacy. To that end, I've created a tutorial on how to use the applications, a general overview document describing what the applications do, and a workflow document specifying individual classes, project goals, and a style guide.

It's important for me to acknowledge that while I probably won't accomplish everything I wanted to over the course of these 10 weeks, the work I have completed was thorough and is still useful to the overall scope of the project. I'm thinking about coming back to OSU and turning this physical privacy work into a paper, so I hope to continue working with Cindy and Bill to get that research done.

As a final note, I want to say thank you to everyone for such an awesome summer experience. Here is our final demo video of everything we have created thus far. Enjoy!

Super Accomplishments

It's my second-to-last week, and wow did I get a lot done! To begin, I will cover our workflow as well as what I have so far with the Define Zones and Map Registration programs. In the first step, you have a semantic map or a floor plan to work from. This is a (very) loose floor plan that I drew of the lab. On this image, you can place markers to form zones, and each zone can be saved with its own privacy type. At the moment it only handles four points per zone, but this should be expandable to any number of points as long as we can register them. You can define the zones, specify their names and privacy types, and export all of them to a YAML file.
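
For a sense of what the export looks like, here is a hypothetical zones file. The keys and values are illustrative, not the final format:

```yaml
# Hypothetical structure for an exported zones file (field names are illustrative)
zones:
  - name: "desk_area"
    privacy: "private"
    points:          # corners in floor-plan pixel coordinates
      - [112, 84]
      - [260, 84]
      - [260, 210]
      - [112, 210]
  - name: "doorway"
    privacy: "public"
    points:
      - [300, 40]
      - [360, 40]
      - [360, 120]
      - [300, 120]
```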

Privacy Zones Interface

The next step is translating the semantic floor plan/map into the SLAM map generated by the robot using Map Registration. In the example below, using Jonathon's Triangle library, we can deal with the crookedness and other errors present in the robot's SLAM map. Once points are selected and paired between the two maps, you can import the YAML file exported earlier by the Define Zones program. (As a side note, learning how to import and export data with the YAML library was a really useful skill to pick up this week.) The Map Registration program was developed by both Penn and me: I worked mostly on the user interface and file I/O (front-end), and Penn handled the transformation of points (back-end).
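
The import/export itself is only a few lines with PyYAML. This is a minimal sketch; the file names and structure are placeholders rather than our exact format:

```python
import yaml

# Load zones exported by the Define Zones program (hypothetical file name)
with open("zones.yaml") as f:
    data = yaml.safe_load(f)     # returns plain dicts and lists

for zone in data["zones"]:
    print(zone["name"], zone["privacy"], zone["points"])

# Write the transformed zones back out after map registration
with open("zones_registered.yaml", "w") as f:
    yaml.safe_dump(data, f, default_flow_style=False)
```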

Map Registration Interface

Finally, Alex has developed a package that imports the converted points and places them as markers in rviz. Since the markers show up in rviz, they can be used in our remote_nav package, which also uses rviz; in other words, the navigator can see these zones and whether or not they can travel there.
From here, the plan is to manipulate the costmap so that the robot cannot navigate into an area specified as "private". The same approach can be applied to other types of filters: if the robot were confined to public areas, for example, it would not be allowed to leave zones marked as such.
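
I won't reproduce Alex's package here, but for a sense of the idea, publishing a zone outline as an rviz marker with rospy looks roughly like this. The topic name, frame, and coordinates are my own assumptions:

```python
import rospy
from visualization_msgs.msg import Marker
from geometry_msgs.msg import Point

rospy.init_node("zone_marker_demo")
pub = rospy.Publisher("privacy_zones", Marker, queue_size=1)

marker = Marker()
marker.header.frame_id = "map"          # the SLAM map frame
marker.type = Marker.LINE_STRIP         # draw the zone outline
marker.action = Marker.ADD
marker.scale.x = 0.05                   # line width in meters
marker.color.r, marker.color.a = 1.0, 1.0

# Corners of one zone, already transformed into map coordinates (made up here)
for x, y in [(1.0, 1.0), (3.0, 1.0), (3.0, 2.5), (1.0, 2.5), (1.0, 1.0)]:
    marker.points.append(Point(x=x, y=y, z=0.0))

rate = rospy.Rate(1)
while not rospy.is_shutdown():
    marker.header.stamp = rospy.Time.now()
    pub.publish(marker)
    rate.sleep()
```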

Privacy Output

Untying my Knot of Organization

I learned a lot this week about organization, design, and the research process as a whole. At the beginning of the week, I got a decent amount done as far as getting the zone system working. This is part of the Define Zones program, the complement of the Map Registration program from last week, and it involved a lot of reorganizing my code. So far, a zone is its own object and is stored in a list of zones; I'm still working out a way to export that list of points so that each zone remains its own object. In the beginning, things were somewhat convoluted: with the classes ordered MainWindow -> GraphicsScene -> ZoneHandler -> Zones/Points, I had an issue where I was passing things up to the main window and then passing them back down. I also don't support more than four points yet, and I eventually want someone to be able to define any number of points (thus "drawing" a shape around the zone they want to select). The user also can't yet see what they have already created, so I think it is important to show all of the zones defined so far.
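
As a simplified sketch of the organization (the class and attribute names are illustrative, not the actual code), a zone and the handler that owns the zone list might look like this:

```python
class Zone(object):
    """One named region with a privacy type and a list of corner points."""
    def __init__(self, name, privacy_type):
        self.name = name
        self.privacy_type = privacy_type   # e.g. "private" or "public"
        self.points = []                   # corner points in image coordinates

    def add_point(self, x, y):
        self.points.append((x, y))

    def to_dict(self):
        """Plain-dict form, convenient for YAML export."""
        return {"name": self.name,
                "privacy": self.privacy_type,
                "points": [list(p) for p in self.points]}


class ZoneHandler(object):
    """Owns the list of zones so the scene doesn't have to pass them upward."""
    def __init__(self):
        self.zones = []

    def new_zone(self, name, privacy_type):
        zone = Zone(name, privacy_type)
        self.zones.append(zone)
        return zone
```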

I talked with Cindy about my problem, and she helped a lot with figuring out what my organization should be. There are a lot of concepts, like overloading paint functions, that I would not have understood at the beginning of the program but understand now. I really enjoy the projects and the lab here at OSU, but I still don't feel like a roboticist. If I could work in an area like Human-Robot Interaction or graphical interfaces, as I am doing now, I would prefer that over the mechanical engineering side of robotics. I guess I just don't feel like a "roboticist" yet.

We continue to communicate with our partners at Cornell about the project, and they have recently begun their pilot study. It's interesting how important a pilot study is to a project: they came across various issues that they might not have had time to adjust for had they proceeded straight to the full study, and they now have to make some pretty major changes. The study is tricky because performance is a difficult thing to measure, especially with people, since so much can vary from person to person. I'll have to keep that in mind when I do my own HCI/HRI projects in the future.

More about Map Registration

To elaborate, there are two programs for defining zones for the local user. The first is the Map Registration application, which takes a SLAM map and transforms coordinates between it and a regular floor plan. This will be my first time working with Qt Designer instead of hard-coding the visual elements, but I also think it will be easier to manage as the number of tools increases. Now that we are working with a UI that will have multiple tools, I need the options and data for each tool to be visible only while that tool is selected. I'm not entirely sure how to tackle this yet, and it will require some design thought. I like the idea of a tabbed interface, but I'm not sure how to build one. Penn and I talked about having a drop-down menu that sends a signal when its selection changes; depending on the current selection, it would hide or show the other widgets. Right now we are using OpenCV's affine transformation tools to compute the mapping between corresponding points on the two maps. I was thinking about how to handle the exported map, but there are some incompatibilities between OpenCV and PyQt that might take too long to properly address.
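
The affine piece itself is small. Here is a minimal sketch of computing the transform from three paired points with OpenCV and applying it to a zone's corners; the point values are made up:

```python
import numpy as np
import cv2

# Three corresponding points picked on the floor plan and on the SLAM map
floorplan_pts = np.float32([[50, 60], [400, 55], [390, 300]])
slam_pts      = np.float32([[72, 95], [430, 80], [415, 330]])

# 2x3 affine matrix mapping floor-plan coordinates into SLAM-map coordinates
M = cv2.getAffineTransform(floorplan_pts, slam_pts)

# Apply it to one zone's corner points (shape: 1 row, N points, 2 channels)
zone = np.float32([[[112, 84], [260, 84], [260, 210], [112, 210]]])
zone_in_slam = cv2.transform(zone, M)
print(zone_in_slam)
```

Note that getAffineTransform takes exactly three pairs, which is part of why we want a registration approach that can handle more points.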
This week, I also learned a decent amount about the different coordinate systems in Qt. For example, I implemented zooming with the scroll wheel; zooming in does not affect any of the marker coordinates, since I am scaling the entire QGraphicsView while the coordinates are taken from the pixels of the maps themselves. I currently have a small bug that leaves visual remnants when zoomed in, but I think this is a problem with the bounding rectangles of the child objects in the scene.
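
A rough sketch of the zoom and coordinate handling, assuming PySide/Qt 4 (so event.delta() rather than Qt 5's angleDelta()); this is the idea, not my exact code:

```python
from PySide import QtGui

class MapView(QtGui.QGraphicsView):
    def wheelEvent(self, event):
        # Scale the whole view; scene (map-pixel) coordinates are unaffected
        factor = 1.15 if event.delta() > 0 else 1.0 / 1.15
        self.scale(factor, factor)

    def mousePressEvent(self, event):
        # Convert the click from view pixels to scene (map) coordinates
        scene_pos = self.mapToScene(event.pos())
        print("clicked map pixel:", scene_pos.x(), scene_pos.y())
        super(MapView, self).mousePressEvent(event)
```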

Below is our current implementation for Map Registration (excluding zoom)

You Win Some, You Lose Some

I've worked on the transparency problem with the buttons in Remote Nav for a while, but I feel like spending any more time on it would be detrimental to the project. I modified the layout to accommodate the change in plans, and this should be more or less the final layout, at least for this version. By the end of the week, we had a nearly full-capacity demo of the port to the PR2. Clicking left and right, instead of turning the robot, turns the robot's head to look left and right. The robot can currently move forward, turn around, look left, and look right. We have yet to try everything at once by having it drive around the room, but this will be our next big step toward finishing this interface.

The next interface I am working on is the local user interface. It will handle editing the map file to specify where the remote operator can and cannot go (physical/locational privacy handling). To do that, I have looked into ways I could interpret the data. My first thought is reading in a map with the defined zones drawn in black; it has to be the same size as the original map. Can I pull the coordinates from that picture and translate them to the map? If I drive the robot into that area, can I print a statement saying the robot entered my zone? These are the things I want to make some headway on this week, and hopefully I can get working on the interface aspect of it before long.
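
As a first pass at the "zones drawn in black" idea, something like the following might work; it just treats dark pixels as restricted and checks whether a map pixel falls inside one. The threshold and file name are placeholders:

```python
import numpy as np
import cv2

# Grayscale mask the same size as the map; restricted zones drawn in black
mask = cv2.imread("zones_mask.png", cv2.IMREAD_GRAYSCALE)

# Pixel coordinates of every "private" cell
ys, xs = np.where(mask < 50)
print("restricted pixels:", len(xs))

def in_private_zone(px, py):
    """True if the given map pixel lies inside a blacked-out zone."""
    return mask[py, px] < 50

# e.g., after converting the robot's pose to map pixels:
if in_private_zone(240, 180):
    print("The robot entered my zone!")
```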

Simplified Lab Map

This map is a simplified floor plan derived from the SLAM map we generated earlier, and it is what the local user will see when defining zones. The problem with the original SLAM map is that it is difficult to interpret. While this map is very simple, you can identify where the doors are and some of the furniture in the room, which helps with identifying locations. To get some help with future development, I talked to Michelle, one of Cindy's former students who also worked with Qt (the C++ version) on a project. She used OpenGL for her projects, and she gave me a lot of tips for dealing with QGraphicsViews, implementing transparent buttons (although I had already shelved that bug by the time I talked to her), and accomplishing some of the tasks that are similar between our two programs. I learned a lot from talking to her, and I have some good ideas for moving forward. She also provided me with some reading material for Qt development, and I think I now have a firmer grasp on the framework behind Qt.

Demo-worthy?

We had a chance to meet and talk with our mentors, and I was able to get some good feedback. One quirk we had was that the robot did not rotate a full 180 degrees when "turn around" was pressed. This was a tolerance issue, so increasing the tolerance brought the rotation much closer to the ideal. I also fixed the bug with the video feed: I went with the first method and added a processEvents() call inside the loop, so the video feed and any other functions can catch up. The other major bug I fixed was the issue causing the display not to show up at all! I had assumed that anything placed in the Python script's directory would be read when the script ran, but it was actually trying to read from whatever directory I happened to run the script from on the command line. Adding a parameter to force it to use the directory of the remote_nav package fixed the issue, and now it runs consistently.
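
Both fixes are small. This is a rough sketch of their general shape rather than the real code; the file name and helper are placeholders:

```python
import rospkg
from PySide.QtGui import QApplication

# Fix 1: resolve files relative to the remote_nav package instead of the
# shell's current directory, so the UI loads no matter where it is launched.
pkg_path = rospkg.RosPack().get_path("remote_nav")
map_file = pkg_path + "/maps/lab_map.yaml"   # illustrative file name
print("loading map from", map_file)

# Fix 2: inside a long-running loop, let Qt catch up so the video feed and
# other widgets keep updating instead of freezing.
def drive_forward(publish_cmd, steps=50):
    """publish_cmd is a placeholder for whatever sends the move command."""
    for _ in range(steps):
        publish_cmd()
        QApplication.processEvents()
```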

The paper from the previous user study is complete, and a major task for this week was for the entire group to read through the paper, edit it, and submit their critiques and analyses.

Third Navigation Example

Fireworks and Robots

Second Navigation Example

This week was a short week, as Independence Day was on Friday! Our REU group went to the festival on the riverfront, saw a fireworks show, ate ice cream, and listened to some local bands playing at the festival. It's fun to see how festive the city can be.
Anyway, as far as my research goes, I have made more progress. At the beginning of the week, I continued to research ways to edit the map as well as how to handle different kinds of mouse events in the GUI. KnowRob is an interesting package that focuses on semantic mapping, and even though it is probably beyond the scope of this project, I think extending that kind of package toward privacy interfaces could greatly increase usability.
I went into the library and made a map of the space we plan to use in the user study, and I also made a map of the lab so we can continue to do tests. Producing a high-quality map does take a long time, but I think it pays off in the end (we won't have to do it again, as long as I don't accidentally delete the file).

Library Example Map

Above is the map file of a hallway in the library. It has an open area with elevators as well as study rooms along the hall. The points where the map "spikes" are where the laser scan shines through the glass in the doors and into the study rooms. Most of the doors were closed, so we could not drive inside and map each of the study rooms. With the user interface, I was able to pull in map data (now that I had a map) and also mark the robot's location within it. We need to manually pass in a start frame so the robot knows where it is (it cannot initially orient itself without starting coordinates). I have a strange bug where it sometimes fails to read the configuration or map file and produces a blank screen. I'm not sure what is causing this, and it is incredibly frustrating because it seems to happen sporadically. I have posted the problem on ROS Answers, so hopefully I will find out something soon.
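
For the starting coordinates, the usual trick is to publish an initial pose estimate that the localization can use. This is a rough sketch, assuming amcl is listening on the standard initialpose topic; the pose values are arbitrary:

```python
import rospy
from geometry_msgs.msg import PoseWithCovarianceStamped

rospy.init_node("set_initial_pose")
pub = rospy.Publisher("initialpose", PoseWithCovarianceStamped,
                      queue_size=1, latch=True)

msg = PoseWithCovarianceStamped()
msg.header.frame_id = "map"
msg.header.stamp = rospy.Time.now()
msg.pose.pose.position.x = 1.0      # arbitrary starting coordinates
msg.pose.pose.position.y = 2.0
msg.pose.pose.orientation.w = 1.0   # facing along the map's x axis

rospy.sleep(1.0)                    # give the publisher time to connect
pub.publish(msg)
```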

There were also some quirks with the navigation buttons that I needed to work through. For example, when you click and hold a button to move, you expect movement to continue until the button is released. At first, the robot would only move when a click was completed, so you had to click multiple times, moving a small distance each time. After trying a few different approaches, we reimplemented this functionality for our "move forward" button, and movement is now very smooth. However, there is still an issue where the loop in the move function causes the rest of the interface to lock up (the video freezes during the loop). I will either need to implement some sort of break to let the video catch up, or create a separate thread/process so the move function runs independently of the other GUI functionality.
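
The press-and-hold behavior boils down to starting something on pressed and stopping it on released. This is not our exact code, just a simplified sketch of the idea with a QTimer; the cmd_vel topic and speed are assumptions:

```python
import rospy
from geometry_msgs.msg import Twist
from PySide import QtCore, QtGui

class ForwardButton(QtGui.QPushButton):
    """Publishes forward velocity for as long as the button is held down."""
    def __init__(self):
        super(ForwardButton, self).__init__("Move Forward")
        # assumes rospy.init_node(...) was called during GUI startup
        self.pub = rospy.Publisher("cmd_vel", Twist, queue_size=1)
        self.timer = QtCore.QTimer(self)
        self.timer.timeout.connect(self.send_cmd)
        self.pressed.connect(lambda: self.timer.start(100))  # every 100 ms
        self.released.connect(self.timer.stop)

    def send_cmd(self):
        cmd = Twist()
        cmd.linear.x = 0.2    # gentle forward speed in m/s
        self.pub.publish(cmd)
```

The appeal of a timer over a plain loop is that it keeps the Qt event loop free, which is exactly the kind of thing that was freezing the video feed.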

So far, I feel like I have learned and accomplished a lot. A few weeks ago I was stumped by how to even get an image from the robot to the screen, and now I’m building even higher.

Slow and Steady

Since the previous week, I have a few new group members. We are all working on the Privacy Interfaces project but have separate goals: I am working on the user interfaces, Alex is working on localization (how the robot accurately knows where it is), and Penn is working on image filters and related products (such as a map overlay for my GUI). The first part of the week was focused on solidifying and presenting our projects to the rest of the REU groups. I think this was useful for finding my place in the overall project scope, and I now have a very concrete idea of what I should be able to accomplish during my time here. Alex is mostly helping with the back-end of the UI, but because my project is in two parts, I can easily switch to the other, independent half if one side gets stuck or if I am waiting for someone else to pass along their part.

We also moved forward toward our smaller goal of getting an operator performance paper published. We went into the library to scope out locations for our study. We will be using physical redaction (covering up unwanted objects) to demonstrate how privacy can be protected when using a robot. The spaces we selected were rather small, so I may need to modify my interface plans accordingly. In the short term it is faster to create a very specific solution to our navigation problem, but in the long term of the project, a more open-ended and multifunctional solution would be better. My group is leaning toward the specific solution, so I will have to put in extra effort on the side, or later on, to make this utility more general purpose.

First Navigation Example

I was also able to make a very basic UI that can stream live data from the robot's Kinect camera. It takes data from rviz, a visualization package for ROS. In theory, I should be able to apply this to other visualization tools, such as laser scan data, odometry data, and map data. It is very exciting to start pulling the pieces together, and I continue to be on track with my work. Next week, I want to look into incorporating the map into my GUI. That way, I will have both the image data and the map data going at the same time, which was my goal for the end of the fourth week.
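
At its core, the feed amounts to subscribing to the camera topic and converting each frame into something Qt can draw. This is a rough sketch rather than my actual code (it subscribes to the camera topic directly, ignores threading concerns, and the topic name is an assumption):

```python
import rospy
from sensor_msgs.msg import Image
from cv_bridge import CvBridge
from PySide import QtGui

bridge = CvBridge()

class VideoLabel(QtGui.QLabel):
    """QLabel that repaints itself with each frame from the Kinect."""
    def __init__(self):
        super(VideoLabel, self).__init__()
        # assumes rospy.init_node(...) was called during GUI startup
        rospy.Subscriber("/camera/rgb/image_color", Image, self.on_frame)

    def on_frame(self, msg):
        frame = bridge.imgmsg_to_cv2(msg, desired_encoding="rgb8")
        h, w, _ = frame.shape
        qimg = QtGui.QImage(frame.tobytes(), w, h, 3 * w,
                            QtGui.QImage.Format_RGB888)
        self.setPixmap(QtGui.QPixmap.fromImage(qimg))
```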

Figuring Things Out

This week was the first week that the other NSF REU students arrived. I participated in the orientation as well as the tour in order to get a more formal introduction to how the lab works and what is going on. This week I will be putting together my formal project proposal and submitting my website. My goals this week were to finish my project proposal, read up on literature and background information, and work through tutorials involving visualization in ROS.

To work toward that, I met with Cindy and worked through some mockups of robot interfaces. One issue I'm having so far is how to present the two navigation interfaces (the video feed and the overhead map) while letting the user know they can use both. I tested it on paper with some people, but they got caught up not realizing they could interact with the overhead map. I will need to plan it out more and have a solid design by sometime next week. I have also begun doing tutorials in PySide (like PyQt, but with better licensing), and I seem to be doing okay with them so far; I understand the elements pretty well and I know I can keep going. I can easily see how passing parameters would work for movement/keyboard teleoperation, but I'm still working through it and there is a lot I still don't know how to tackle. I think I have made a decent amount of progress, and next week I want to take on what I don't know and try to make some headway toward something similar to what the final navigation interface will look like.
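
To make "passing parameters" concrete, here is a toy PySide sketch (not tied to the robot yet) that maps arrow keys to movement values and just prints them; everything in it is my own illustration:

```python
from PySide import QtCore, QtGui

class TeleopWidget(QtGui.QWidget):
    """Maps arrow keys to movement parameters (just prints them for now)."""
    def keyPressEvent(self, event):
        speeds = {
            QtCore.Qt.Key_Up:    (0.2, 0.0),   # forward
            QtCore.Qt.Key_Down:  (-0.2, 0.0),  # backward
            QtCore.Qt.Key_Left:  (0.0, 0.5),   # rotate left
            QtCore.Qt.Key_Right: (0.0, -0.5),  # rotate right
        }
        if event.key() in speeds:
            linear, angular = speeds[event.key()]
            print("would send linear=%.1f angular=%.1f" % (linear, angular))

if __name__ == "__main__":
    app = QtGui.QApplication([])
    w = TeleopWidget()
    w.show()
    app.exec_()
```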

Things I still don’t know:

  • How to put external video into the feed.

I could try to pull the images and keep uploading the whole image as it comes in. Who knows how slow that actually is, though. I'm already subscribed to the ROS data from the Kinect, right? Maybe I'd be pulling from that.
  • How to upload the YAML map into the feed.

There's a library that I know of that looks into it, but I'm not sure how to display this data. This will probably be more in Penn's area, so I should ask. I also need to have it update with the robot's odometry data, so the robot knows where it is, which is probably pretty hard outside of rviz. (A rough sketch of what the map YAML itself looks like is below.)
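
For reference, the map YAML that ROS's map_server uses is small and human-readable. This is a sketch of loading it with PyYAML; the file name and values are made up, but the keys are the standard map_server ones:

```python
import yaml

# A ROS map_server map is a YAML file describing the image and its scale, e.g.:
#   image: lab_map.pgm
#   resolution: 0.05            # meters per pixel
#   origin: [-10.0, -10.0, 0.0] # map-frame pose of the lower-left pixel
#   negate: 0
#   occupied_thresh: 0.65
#   free_thresh: 0.196

with open("lab_map.yaml") as f:
    info = yaml.safe_load(f)

print("image file:", info["image"])
print("resolution:", info["resolution"], "m/pixel")
print("origin:", info["origin"])
```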

The First Week

In this first week, I arrived in Oregon without much of a hitch. We didn't have very much to do except practice with ROS tutorials. Apparently, there is an NSF REU group that will be joining me next week, and I will be a part of that group. Because of that, I think a lot of my direction will take shape next week instead of this one. I've worked through tutorials and become acquainted with ROS, but other than that I'm not sure what my main goal will be for the project as a whole.

ROS uses both Python and C++, and even though I am more comfortable in C++, the majority of the lab programs in Python for efficiency and a cleaner codebase, so I have been brushing up on Python as I work through the tutorials. By the end of this week, I was able to remotely operate a TurtleBot, generate a map from its laser scan and odometry data, and use that map to let it navigate autonomously throughout the lab. As far as interfaces are concerned, I am beginning to think about the best ways to display this map data to a user. In its current state, the map looks like a bunch of dots that line up to form walls, which can be difficult for a new user to interpret.

I know that I want to work with Cindy on interfaces, but there is no existing infrastructure or code for a ROS interface/GUI to build from.
Most of what I've seen are implementations of the physical things the robot can see or do as they pertain to privacy, and less about how the user interacts with it.