Experimental Results of Testing a Direct Monocular Visual Odometry Algorithm Outdoors on Flat Terrain under Severe Global Illumination Changes for Planetary Exploration Rovers
DOI: https://doi.org/10.13053/cys-22-4-2839

Keywords: visual-based autonomous navigation, planetary rover localization, ego-motion estimation, visual odometry, experimental validation, planetary robots

Abstract
We present the experimental results obtained by testing a monocular visual odometry algorithm on a real robotic platform outdoors, on flat terrain, and under severe changes of global illumination. The algorithm was proposed as an alternative to the long-established feature-based stereo visual odometry algorithms. The rover's 3D position is computed by integrating the frame-to-frame 3D motion of the rover over time. The frames are taken by a single video camera rigidly attached to the rover, looking to one side and tilted downwards towards the planet's surface. The frame-to-frame 3D motion is estimated directly by maximizing the likelihood function of the intensity differences at key observation points, without establishing correspondences between features or solving the optical flow as an intermediate step: the frame-to-frame intensity differences measured at the key observation points are evaluated directly. The key observation points are image points with high linear intensity gradients. Comparing the results with the corresponding ground-truth data, which was obtained using a robotic theodolite with a laser range sensor, we concluded that the algorithm is able to deliver the rover's position on average 0.06 seconds after an image has been captured, with an average absolute position error of 0.9% of the distance traveled. These results are quite similar to those reported in the scientific literature for traditional feature-based stereo visual odometry algorithms, which have been used successfully in real rovers here on Earth and on Mars. We believe that they represent an important step towards the validation of the algorithm, and they suggest that it may be an excellent tool for any autonomous robotic platform, since it could be very helpful in situations where traditional feature-based visual odometry algorithms have failed. It may also be an excellent candidate to be merged with other positioning algorithms and/or sensors.
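The core idea described above, i.e. estimating frame-to-frame motion directly from intensity differences at high-gradient "key observation points", without feature matching or optical flow, can be illustrated with a minimal sketch. This is not the paper's implementation: the gradient threshold, the point budget, and the brute-force search over integer 2D image shifts (a simplified stand-in for the paper's likelihood maximization over the rover's 3D motion) are all illustrative assumptions.

```python
import numpy as np

def key_observation_points(img, grad_frac=0.2, max_points=200):
    """Select key observation points: pixels with high linear intensity gradient."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ys, xs = np.nonzero(mag > grad_frac * mag.max())
    # Keep only the strongest-gradient points.
    order = np.argsort(mag[ys, xs])[::-1][:max_points]
    return ys[order], xs[order]

def estimate_shift(prev, curr, search=4, margin=8):
    """Pick the 2D shift that minimizes the squared intensity differences
    at the key observation points of the previous frame. No feature
    correspondences and no optical flow are computed; the intensity
    differences are evaluated directly, as in the abstract's description."""
    ys, xs = key_observation_points(prev)
    # Discard points too close to the border for the shifted lookup.
    keep = ((ys > margin) & (ys < prev.shape[0] - margin) &
            (xs > margin) & (xs < prev.shape[1] - margin))
    ys, xs = ys[keep], xs[keep]
    best, best_err = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            err = np.sum((curr[ys + dy, xs + dx] - prev[ys, xs]) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best
```

On a synthetic textured frame shifted by a few pixels, `estimate_shift` recovers the shift exactly; in the paper's setting the analogous search is over the rover's 3D motion parameters rather than integer image shifts.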
License
I exclusively transfer to the journal "Computación y Sistemas", published by the Centro de Investigación en Computación (CIC), the copyright of the aforementioned article; I likewise accept that it will not be transferred to any other publication, in any format, language, or medium, whether existing (including electronic and multimedia) or yet to be developed.
I certify that the article has not been previously published nor simultaneously submitted to another publication, and that it does not contain material whose publication would violate the copyright or other property rights of any person, company, or institution. I further certify that I have authorization from the institution or company where I work or study to publish this Work.
The representing author accepts responsibility for the publication of the Work on behalf of each and every one of the authors.
This transfer is subject to the following reservations:
- The authors retain all proprietary rights (such as patent rights) to this Work, with the exception of the publication rights transferred to the CIC by this document.
- The authors retain the right to publish the Work, in whole or in part, in any book of which they are authors or editors, and to make personal use of this Work in conferences, courses, personal web pages, etc.