Cooperative Vision-Based Object Transportation by Two Humanoid Robots in a Cluttered Environment

Bibliographic Details
Published in: International Journal of Humanoid Robotics, Vol. 14, No. 3, pp. 1750018–30
Main Authors: Rioux, Antoine; Esteves, Claudia; Hayet, Jean-Bernard; Suleiman, Wael
Format: Journal Article
Language: English
Published: Singapore: World Scientific Publishing Company, 01.09.2017
ISSN: 0219-8436, 1793-6942
DOI: 10.1142/S0219843617500189
Summary: Although in recent years there have been quite a few studies aimed at the navigation of robots in cluttered environments, few of these have addressed the problem of robots navigating while moving a large or heavy object. Such a functionality is especially useful for transporting objects of different shapes and weights without having to modify the robot hardware. In this work, we tackle the problem of making two humanoid robots navigate in a cluttered environment while transporting a very large object that simply could not be moved by a single robot. We present a complete navigation scheme, from the incremental construction of a map of the environment and the computation of collision-free trajectories to the design of the control to execute those trajectories. We present experiments conducted on real NAO robots, equipped with RGB-D sensors mounted on their heads, moving an object around obstacles. Our experiments show that a significantly large object can be transported without modifying the robots' main hardware, and therefore that our scheme enhances the humanoid robots' capabilities in real-life situations. Our contributions are: (1) a low-dimensional multi-robot motion planning algorithm that finds an obstacle-free trajectory, using the constructed map of the environment as input, (2) a framework that produces continuous and consistent odometry data by fusing the visual and the robot odometry information, (3) a synchronization system that combines a projection of the robots based on their hand positions with the visual feedback error computed from a frontal camera, (4) an efficient real-time whole-body control scheme that controls the motions of the closed-loop robot–object–robot system.
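Contribution (2), fusing visual odometry with the robot's own odometry to obtain a continuous pose estimate, can be illustrated with a minimal sketch. The function below is an assumption for illustration, not the authors' exact formulation: it dead-reckons from the robot odometry increment and, when a visual pose estimate is available, blends it in with a fixed convex weight so the fused trajectory stays continuous even if the visual estimate jumps after a relocalization.

```python
import math

def fuse_odometry(robot_pose, robot_delta, visual_pose, alpha=0.8):
    """Blend a visual pose estimate with dead-reckoned robot odometry.

    robot_pose and visual_pose are (x, y, theta) tuples; robot_delta is
    the incremental odometry (dx, dy, dtheta) since the last fused
    estimate. alpha weights the visual estimate (the names and the
    fixed-weight blending are illustrative assumptions).
    """
    # Dead-reckoned prediction from the robot's own odometry.
    x = robot_pose[0] + robot_delta[0]
    y = robot_pose[1] + robot_delta[1]
    th = robot_pose[2] + robot_delta[2]
    if visual_pose is None:
        # Visual tracking lost: fall back to the prediction alone.
        return (x, y, th)
    # Convex blend of the two position estimates.
    fx = alpha * visual_pose[0] + (1 - alpha) * x
    fy = alpha * visual_pose[1] + (1 - alpha) * y
    # Blend headings on the unit circle to avoid angle wrap-around.
    fth = math.atan2(
        alpha * math.sin(visual_pose[2]) + (1 - alpha) * math.sin(th),
        alpha * math.cos(visual_pose[2]) + (1 - alpha) * math.cos(th),
    )
    return (fx, fy, fth)
```

A real system would typically replace the fixed weight with a filter whose gain reflects the uncertainty of each source, but the continuity property, never emitting a discontinuous jump in the fused pose, is the point this sketch makes.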