This week was the first week that the other NSF REU students arrived. I participated in the orientation as well as the tour in order to get a more formal introduction to how the lab works and what is going on. This week I will be putting together my formal project proposal and submitting my website. My goals this week are to finish the proposal, read up on literature and background information, and work through tutorials involving visualization in ROS.

To work toward that, I met with Cindy and worked through some mockups of robot interfaces. One issue I'm running into is how to present the two navigation views (the video feed and the overhead map) while making it clear to the user that both are interactive. I tested the mockups on paper with a few people, and they kept getting stuck because they didn't realize they could interact with the overhead map. I will need to put more thought into this and have a solid design by sometime next week.

I have also begun working through tutorials in PySide (like PyQt, but with better licensing). The tutorials are going well so far; I understand the basic elements pretty well and feel comfortable continuing. I can see how to pass parameters for movement and keyboard teleoperation, but there is still a lot I don't know how to tackle. Overall I think I've made a decent amount of progress, and next week I want to take on the unknowns and make some headway toward something resembling the final navigation interface.
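To make that concrete, here is a rough sketch of what I imagine the keyboard teleoperation piece could look like in PySide: a widget that catches key presses and publishes Twist messages through rospy. The topic name (cmd_vel) and the speed values are placeholders, not anything confirmed from the actual robot setup.

```python
#!/usr/bin/env python
# Minimal sketch of keyboard teleoperation in a PySide widget.
# Topic name and speeds are placeholders for whatever the robot actually uses.
import sys
import rospy
from geometry_msgs.msg import Twist
from PySide import QtGui, QtCore


class TeleopWidget(QtGui.QWidget):
    def __init__(self):
        super(TeleopWidget, self).__init__()
        self.pub = rospy.Publisher('cmd_vel', Twist)
        self.setWindowTitle('Teleop')
        self.setFocusPolicy(QtCore.Qt.StrongFocus)  # make sure we receive key events

    def keyPressEvent(self, event):
        # Map WASD keys to simple forward/backward/turn commands.
        twist = Twist()
        key = event.key()
        if key == QtCore.Qt.Key_W:
            twist.linear.x = 0.2
        elif key == QtCore.Qt.Key_S:
            twist.linear.x = -0.2
        elif key == QtCore.Qt.Key_A:
            twist.angular.z = 0.5
        elif key == QtCore.Qt.Key_D:
            twist.angular.z = -0.5
        self.pub.publish(twist)


if __name__ == '__main__':
    rospy.init_node('teleop_gui', anonymous=True)
    app = QtGui.QApplication(sys.argv)
    w = TeleopWidget()
    w.show()
    sys.exit(app.exec_())
```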

Things I still don’t know:

  • How to put external video into the feed.

I could try to pull the images and keep redrawing the whole frame as each one comes in. Who knows how slow that actually is, though. I'm already subscribed to the ROS data from the Kinect, so maybe I'd be pulling from that. A rough sketch of this idea is the first code block after this list.
  • How to upload the YAML map into the feed.

There's a library that I know of that looks into it, but I'm not sure how to display this data. This will probably be more in Penn's area, so I should ask. I also need the map to update with the robot's odometry data so the robot knows where it is, which is probably kind of hard outside of rviz. The second code block below is a rough sketch of how that might work.
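For the video question, one idea I want to try is subscribing to the Kinect's RGB image topic and pushing each frame into a QLabel. This is only a sketch under assumptions: the topic name (/camera/rgb/image_color) is a guess for our setup, and I'm not yet sure the cv_bridge conversion and the signal-based handoff to the GUI thread are the right way to do it.

```python
#!/usr/bin/env python
# Rough sketch: show the Kinect RGB feed in a PySide label.
# The topic name is an assumption about our camera driver.
import sys
import rospy
from sensor_msgs.msg import Image
from cv_bridge import CvBridge
from PySide import QtGui, QtCore


class VideoWidget(QtGui.QLabel):
    # ROS callbacks run outside the GUI thread, so hand frames over via a signal.
    frame_ready = QtCore.Signal(QtGui.QImage)

    def __init__(self):
        super(VideoWidget, self).__init__()
        self.bridge = CvBridge()
        self.frame_ready.connect(self.update_frame)
        rospy.Subscriber('/camera/rgb/image_color', Image, self.image_callback)

    def image_callback(self, msg):
        # Convert the ROS image to an 8-bit RGB array, then to a QImage.
        cv_img = self.bridge.imgmsg_to_cv2(msg, desired_encoding='rgb8')
        h, w, _ = cv_img.shape
        qimg = QtGui.QImage(cv_img.data, w, h, 3 * w, QtGui.QImage.Format_RGB888)
        self.frame_ready.emit(qimg.copy())  # copy: the underlying buffer gets reused

    def update_frame(self, qimg):
        self.setPixmap(QtGui.QPixmap.fromImage(qimg))


if __name__ == '__main__':
    rospy.init_node('video_view', anonymous=True)
    app = QtGui.QApplication(sys.argv)
    w = VideoWidget()
    w.show()
    sys.exit(app.exec_())
```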
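For the map question, here is roughly what I have in mind, assuming the map is a map_server-style YAML file plus an image: load both, subscribe to /odom, and paint a dot at the robot's position. Drawing straight from odometry (instead of getting a proper map-frame pose through tf or amcl) is a simplification I'd need to check with Penn.

```python
#!/usr/bin/env python
# Rough sketch: display a map_server-style map (YAML + image) in PySide and
# mark the robot's position from /odom. Using raw odometry instead of a
# map-frame pose from tf/amcl is a simplification.
import os
import sys
import yaml
import rospy
from nav_msgs.msg import Odometry
from PySide import QtGui, QtCore


class MapWidget(QtGui.QLabel):
    pose_changed = QtCore.Signal(float, float)

    def __init__(self, yaml_path):
        super(MapWidget, self).__init__()
        with open(yaml_path) as f:
            info = yaml.safe_load(f)
        self.resolution = info['resolution']  # meters per pixel
        self.origin = info['origin']          # [x, y, theta] of the lower-left corner
        img_path = os.path.join(os.path.dirname(yaml_path), info['image'])
        self.map_img = QtGui.QImage(img_path)
        self.setPixmap(QtGui.QPixmap.fromImage(self.map_img))
        self.pose_changed.connect(self.redraw)
        rospy.Subscriber('/odom', Odometry, self.odom_callback)

    def odom_callback(self, msg):
        p = msg.pose.pose.position
        self.pose_changed.emit(p.x, p.y)

    def redraw(self, x, y):
        # World coordinates (meters) -> pixel coordinates; row 0 is the top of the image.
        px = int((x - self.origin[0]) / self.resolution)
        py = self.map_img.height() - int((y - self.origin[1]) / self.resolution)
        canvas = QtGui.QPixmap.fromImage(self.map_img)
        painter = QtGui.QPainter(canvas)
        painter.setBrush(QtGui.QBrush(QtCore.Qt.red))
        painter.drawEllipse(QtCore.QPoint(px, py), 5, 5)
        painter.end()
        self.setPixmap(canvas)


if __name__ == '__main__':
    rospy.init_node('map_view', anonymous=True)
    app = QtGui.QApplication(sys.argv)
    w = MapWidget(sys.argv[1])  # path to the map's .yaml file
    w.show()
    sys.exit(app.exec_())
```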