Our Project



     The project that I have been assigned to is entitled "Robot Terrain Analysis Using Rotating Laser Ranger and Proprioception". Technically, it's one of the final stages of a larger project, since the robot had to be built and software for rather important things, such as autonomous navigation, had to be written. It's kind of the heart of the project, though, because enabling the robot to recognize the terrain it's on is one of the main goals. Let me see if I can explain my bit of the project coherently...

     We have an autonomous robot (see picture) with a spinning laser (the blue box on the front of the robot) that enables it to "see" the terrain in front of it. Other people on the project have begun figuring out how to analyze the laser data and extract certain "features" (statistical distributions, heights, and the like) that seem to differentiate between the terrain and terrain-cover (vegetation) types the robot is viewing: bare earth, grass, trees, man-made surfaces, etc. My job is to take these features and feed them to a terrain-classifying module of my own design, which will then tell us what kind of terrain is in the image. This classifier will need to learn how to differentiate between the different terrains based on the features we give it to work with, so it will implement some kind of machine learning technique, probably a neural net, since neural nets are practically raved about for their classification-learning abilities.
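
     Just to make the idea concrete, here's a minimal sketch of what such a classifier might look like, using scikit-learn's MLPClassifier as a stand-in neural net. Everything in it (the feature layout, the numbers, the labels) is made up for illustration; the real features will be whatever the laser-analysis folks hand me.

    # Hypothetical sketch: a small neural net mapping laser-derived
    # feature vectors to terrain classes. The features and numbers are
    # invented; the real ones come from the laser-analysis work.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    # Each row: [mean_height, height_variance, point_density, roughness]
    # (made-up stand-ins for the real statistical features).
    X_train = np.array([
        [0.02, 0.001, 0.9, 0.05],   # bare earth
        [0.15, 0.010, 0.8, 0.20],   # grass
        [2.50, 1.200, 0.4, 0.90],   # trees
        [0.05, 0.000, 1.0, 0.01],   # man-made surface
    ])
    y_train = ["bare earth", "grass", "trees", "man-made"]

    # One small hidden layer is plenty for a sketch; real training would
    # need far more labeled examples than this.
    net = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
    net.fit(X_train, y_train)

    # Classify the feature vector extracted from a new laser scan.
    new_scan = np.array([[0.12, 0.008, 0.85, 0.18]])
    print(net.predict(new_scan))   # hopefully "grass"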

     The reason that we want the robot to be able to tell what type of terrain it is looking at is fairly simple. The robot will have an internal map of the area it is traversing stored onboard, and this map will indicate which type of terrain is in each area. If the robot finds itself in a place that looks different from what it expected, that could be an indication that the robot is lost, that it was given faulty map data, or that the terrain has changed in some manner. A terrain change drastic enough to result in a different classification than expected could also indicate possible danger for the robot. If it were expecting a forest but the trees are gone, perhaps there is logging going on in the area, which the robot should avoid. Militarily speaking, perhaps the enemy has parked a big, mean tank on a formerly empty field and is preparing to blast our poor little robot. When the robot realizes it has encountered something unexpected, it can communicate this discrepancy to a human controller, who could then advise the robot on what to do, such as running away from the big, mean tank.
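
     In code, the core check is about as simple as it sounds. Here's a hedged sketch, assuming a grid map of expected terrain labels; the map format and function names are my own invention, not the project's actual interfaces.

    # Hypothetical sketch of the map-vs-observation check. The grid map,
    # cell indexing, and reporting function are invented for illustration.

    def check_terrain(expected_map, position, observed_terrain):
        """Compare the classifier's output with the onboard map entry."""
        row, col = position
        expected_terrain = expected_map[row][col]
        if observed_terrain != expected_terrain:
            # Could mean we're lost, the map is wrong, or the terrain changed.
            report_discrepancy(position, expected_terrain, observed_terrain)
            return False
        return True

    def report_discrepancy(position, expected, observed):
        # Stand-in for contacting the human controller over the radio link.
        print(f"Anomaly at {position}: expected {expected}, saw {observed}")

    expected_map = [["grass", "grass"], ["forest", "grass"]]
    check_terrain(expected_map, (1, 0), "bare earth")  # the trees are gone!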

--- --- --- --- --- --- --- --- --- --- --- --- --- ---

     Here's the demo scenario that we're supposed to be able to perform by the end of my stay here. The expected physical demonstration will use two ATRV-jr robots, R1 and R2. The tentative scenario is as follows:

R1 will serve as a scout, tasked to execute a path. The path will be over terrain that is known a priori to be open; R1 will proceed at a fast rate.

R1 will encounter a terrain anomaly: a stand of trees on a hilly incline. R1 will begin to adaptively slow through its speed-control behavior, and once the anomaly persists (per a policy specified by the user at task time), R1 will declare an anomaly at Loc 1 and contact the human supervisor. (A rough sketch of this slow-then-declare logic appears after the scenario.)

R1 will also share its map with R2 as a checkpoint. R2 will store the map, but will not use the map information because it has not been verified by the human supervisor (a security policy); the anomaly could be the result of sensor error or GPS problems.

The supervisor assumes control of R1, teleoperating the robot to explore the terrain and check out the situation. The supervisor confirms that the area is wooded and draws a region on the map. The Trulla system then updates the map and marks the remaining terrain patch in the surrounding area, which was supposed to be open, as "uncertain." The updated map is shared with R1 and R2.

Fearing a possible ambush, the supervisor orders R1 to continue the mission, but directs a) R1 to checkpoint its position and provide a sensing snapshot ("viewframe") every N seconds, and b) R2 to autonomously follow R1 as a drone at a safe distance behind.

R2 moves directly to Loc 1 from its position, but visibly slows as it reaches the "uncertain" area. Once at Loc 1, it follows R1's path.

Contact with R1 is suddenly lost. R2 stops and sends the human supervisor the map and checkpointed viewframes. Using that information, the supervisor is able to see signs that R1 has been ambushed. The supervisor orders R2 to return.
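
     As promised above, here's a rough sketch of how the "adaptively slow, then declare" behavior from the second step might be wired up. The slowdown curve and the persistence policy (N consecutive anomalous scans) are my assumptions, not the actual speed-control behavior.

    # Hypothetical sketch of adaptive slowing plus an anomaly-persistence
    # policy. The speed curve and threshold are assumptions, not the
    # robot's real speed-control behavior.

    CRUISE_SPEED = 1.0      # m/s over known-open terrain
    MIN_SPEED = 0.2         # m/s floor while investigating
    PERSIST_THRESHOLD = 5   # consecutive anomalous scans (user-set at task time)

    anomalous_streak = 0

    def on_scan(observed_terrain, expected_terrain):
        """Called once per laser scan; returns the commanded speed."""
        global anomalous_streak
        if observed_terrain != expected_terrain:
            anomalous_streak += 1
        else:
            anomalous_streak = 0

        # Slow proportionally as the anomaly persists.
        fraction = min(anomalous_streak / PERSIST_THRESHOLD, 1.0)
        speed = CRUISE_SPEED - fraction * (CRUISE_SPEED - MIN_SPEED)

        if anomalous_streak >= PERSIST_THRESHOLD:
            declare_anomaly()
        return speed

    def declare_anomaly():
        # Stand-in for contacting the human supervisor.
        print("Anomaly declared at Loc 1; contacting human supervisor...")

    # Expected open terrain, but the laser keeps seeing trees: the robot
    # slows scan by scan and declares an anomaly on the fifth one.
    for _ in range(5):
        print(f"commanded speed: {on_scan('trees', 'open'):.2f} m/s")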