Experimental Results of Testing a Direct Monocular Visual Odometry Algorithm Outdoors on Flat Terrain under Severe Global Illumination Changes for Planetary Exploration Rovers

Geovanni Martínez

Abstract


We present the experimental results obtained by testing a monocular visual odometry algorithm on a real robotic platform outdoors, on flat terrain, and under severe global illumination changes. The algorithm was proposed as an alternative to the long-established feature-based stereo visual odometry algorithms. The rover's 3D position is computed by integrating its frame-to-frame 3D motion over time. The frames are captured by a single video camera rigidly attached to the rover, looking to one side and tilted downwards toward the planet's surface. The frame-to-frame 3D motion is estimated directly by maximizing the likelihood function of the intensity differences at key observation points, without establishing feature correspondences or computing the optical flow as an intermediate step; the frame-to-frame intensity differences measured at the key observation points are evaluated directly. The key observation points are image points with high linear intensity gradients. Comparing the results with the corresponding ground-truth data, obtained with a robotic theodolite equipped with a laser range sensor, we conclude that the algorithm delivers the rover's position on average 0.06 seconds after an image has been captured, with an average absolute position error of 0.9% of the distance traveled. These results are comparable to those reported in the scientific literature for traditional feature-based stereo visual odometry algorithms, which have been used successfully on real rovers both on Earth and on Mars. We believe these results represent an important step towards the validation of the algorithm and suggest that it may be an excellent tool for any autonomous robotic platform, since it could be very helpful in situations where traditional feature-based visual odometry algorithms fail. It may also be an excellent candidate for fusion with other positioning algorithms and/or sensors.
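To make the pipeline described above concrete, the following is a minimal Python sketch of two of its steps: selecting key observation points as image points with high intensity gradients, and integrating frame-to-frame motion estimates into a rover pose. The gradient threshold, the pose parameterization, and the estimator placeholder are illustrative assumptions, not the paper's implementation; in particular, the paper's maximum-likelihood motion estimation from intensity differences is abstracted behind a stub.

```python
# Hedged sketch, assuming a grayscale image array and a rotation/translation
# pose representation. estimate_frame_to_frame_motion() is a hypothetical
# placeholder for the paper's maximum-likelihood estimator, which evaluates
# frame-to-frame intensity differences at the key observation points.
import numpy as np

def select_key_observation_points(image, grad_threshold=30.0):
    """Return (row, col) coordinates of pixels whose linear intensity
    gradient magnitude exceeds a threshold (an assumed value here)."""
    gy, gx = np.gradient(image.astype(np.float64))
    magnitude = np.hypot(gx, gy)
    rows, cols = np.nonzero(magnitude > grad_threshold)
    return np.stack([rows, cols], axis=1)

def integrate_motion(R, t, dR, dt):
    """Compose the current rover pose (rotation R, translation t) with a
    frame-to-frame motion estimate (dR, dt) to update the 3D position."""
    return R @ dR, R @ dt + t
```

With such pieces, the rover's position would be obtained by calling the motion estimator on each new frame at the selected key observation points and composing the result with the accumulated pose, so that position is the time integral of the frame-to-frame motion estimates.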

Keywords


Visual-based Autonomous Navigation; Planetary Rover Localization; Ego-Motion Estimation; Visual Odometry; Experimental Validation; Planetary Robots
