Visual servoing over unknown, unstructured, large-scale scenes
Abstract
This work proposes a new vision-based framework for controlling a robot within model-free, large-scale scenes, where the desired pose has never been attained beforehand and the desired image is therefore not available. Existing visual servoing techniques cannot be applied in this context. The rigid, unknown scene (i.e., no metric model is available either) is represented as a collection of planar regions, which may continuously leave the field of view as the robot moves toward its distant goal. A novel approach to detect new planes entering the field of view, robust to large camera calibration errors, is therefore deployed. Indeed, it is well known that representing the scene as a composition of planes improves the estimation processes in terms of accuracy, stability, and rate of convergence. This extended 3D vision-based control technique is also based on an efficient second-order method for plane-based tracking and pose reconstruction. The framework is validated with simulated data from artificially created scenes as well as with real images, and accurate navigation tasks are demonstrated.
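To illustrate the kind of computation behind plane-based pose reconstruction mentioned above, the sketch below recovers candidate camera displacements from the plane-induced homography of one tracked planar region. This is only a minimal Python/OpenCV illustration under stated assumptions: the intrinsics `K`, the point sets, and the use of RANSAC point matching are hypothetical, whereas the paper itself relies on a direct, efficient second-order tracking method rather than feature matching.

```python
import numpy as np
import cv2

# Hypothetical coarse intrinsics: the paper stresses robustness to large
# calibration errors, so K need only be an approximate guess here.
K = np.array([[700.0,   0.0, 320.0],
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])

def pose_from_planar_region(pts_ref, pts_cur, K):
    """Candidate camera displacements induced by one planar region.

    pts_ref, pts_cur: Nx2 arrays of matched image points lying on the plane
    (illustrative only; the paper tracks the planar region directly instead
    of matching points).
    """
    # Plane-induced homography between the reference and current views.
    H, _ = cv2.findHomography(pts_ref, pts_cur, cv2.RANSAC)
    # Decompose H = R + t n^T / d into candidate (R, t, n) solutions;
    # the physically consistent one is selected by visibility constraints.
    _, rotations, translations, normals = cv2.decomposeHomographyMat(H, K)
    return rotations, translations, normals
```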
Domains
Robotics [cs.RO]
Main file
2006-ICRA-Silveira_Malis_Rives-Visual_servoing_unknown_unstructured_large_scale_scenes.pdf (1.04 MB)
Origin: Files produced by the author(s)