Object pose estimation and tracking by fusing visual and tactile information

Bibliographic Details
Published in: 2012 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems, pp. 65-70
Main Authors: Bimbo, J., Rodriguez-Jimenez, S., Liu, Hongbin, Song, Xiaojing, Burrus, N., Seneviratne, L. D., Abderrahim, M., Althoefer, K.
Format: Conference Proceeding
Language: English, Japanese
Published: IEEE, 01.09.2012
ISBN: 1467325104, 9781467325103
DOI: 10.1109/MFI.2012.6343019

More Information
Summary: Robot grasping and manipulation require very accurate knowledge of the object's location within the robotic hand. By itself, a vision system cannot provide very precise and robust pose tracking due to occlusions or hardware limitations. This paper presents a method to estimate a grasped object's 6D pose by fusing sensor data from vision, tactile sensors and joint encoders. Given an initial pose acquired by the vision system and the contact locations on the fingertips, an iterative process optimises the estimate of the object pose by finding a transformation that fits the grasped object to the fingertips. Experiments were carried out both in simulation and on a real system consisting of a Shadow arm and hand with ATI force/torque sensors mounted on the fingertips and a Microsoft Kinect camera. To make the method suitable for real-time applications, the performance of the algorithm was investigated in terms of speed and accuracy of convergence.
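The summary describes an iterative process that refines a vision-supplied initial pose so that the object model fits the fingertip contact locations. As a rough, generic illustration only (not the paper's implementation), the Python sketch below refines a pose ICP-style with a Kabsch alignment step; the point-cloud object model, nearest-point correspondences, function names, and convergence test are all assumptions introduced here.

```python
# Minimal sketch (assumed approach, not the authors' method): refine an
# object pose so that fingertip contact points lie on the object's surface.
# Assumed inputs: model_points, an (N, 3) cloud sampled from the object model
# in the object frame; contacts, an (M, 3) array of fingertip contact
# locations in the world frame from tactile sensing and joint encoders.
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares R, t with dst ~ R @ src + t (Kabsch algorithm)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def refine_pose(model_points, contacts, R0, t0, iters=50, tol=1e-6):
    """Iteratively update the initial pose (R0, t0) from vision so that the
    transformed model's nearest surface points coincide with the contacts."""
    R, t = R0.copy(), t0.copy()
    prev_err = np.inf
    for _ in range(iters):
        world_model = model_points @ R.T + t          # model in world frame
        # correspondence: nearest model point to each fingertip contact
        d2 = ((world_model[None, :, :] - contacts[:, None, :]) ** 2).sum(-1)
        nearest = world_model[d2.argmin(axis=1)]
        err = np.sqrt(d2.min(axis=1)).mean()
        if abs(prev_err - err) < tol:                 # converged
            break
        prev_err = err
        # incremental transform moving the matched points onto the contacts
        dR, dt = best_rigid_transform(nearest, contacts)
        R, t = dR @ R, dR @ t + dt
    return R, t, err
```

With only a handful of fingertip contacts the fit is under-constrained on its own, which is why a good initial pose from the vision system, as the summary notes, is essential for convergence.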