Towards a humanoid museum guide robot that interacts with multiple persons

Bibliographic Details
Published in: 5th IEEE-RAS International Conference on Humanoid Robots, 2005, pp. 418-423
Main Authors: Bennewitz, M., Faber, F., Joho, D., Schreiber, M., Behnke, S.
Format: Conference Proceeding
Language: English
Published: IEEE, 2005
ISBN: 0780393201, 9780780393202
ISSN: 2164-0572
DOI: 10.1109/ICHR.2005.1573603

More Information
Summary: The purpose of our research is to develop a humanoid museum guide robot that performs intuitive, multimodal interaction with multiple persons. In this paper, we present a robotic system that makes use of visual perception, sound source localization, and speech recognition to detect, track, and involve multiple persons in the interaction. Depending on the audio-visual input, our robot shifts its attention between different persons. In order to direct the attention of its communication partners towards exhibits, our robot performs gestures with its eyes and arms. As we demonstrate in practical experiments, our robot is able to interact with multiple persons in a multimodal way and to shift its attention between different people. Furthermore, we discuss the experience gained during a two-day public demonstration of our robot.
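The attention-shifting behaviour described in the summary can be illustrated with a minimal sketch. The person representation, score weights, and angular tolerance below are illustrative assumptions and are not taken from the paper; the sketch only shows the general idea of fusing visual detections with a localized sound direction into a per-person salience score and attending to the person with the highest score.

```python
# Illustrative sketch (not the authors' implementation): fuse visual and
# auditory cues into a per-person attention score and attend to the maximum.
from dataclasses import dataclass

@dataclass
class Person:
    pid: int
    bearing_deg: float   # direction of the tracked person relative to the robot
    is_speaking: bool    # assumed flag derived from speech/lip activity

def attention_scores(persons, sound_bearing_deg, w_visual=1.0, w_audio=2.0):
    """Assumed scoring: every tracked person gets a visual baseline; persons
    close to the localized sound source receive an additional audio bonus."""
    scores = {}
    for p in persons:
        score = w_visual
        # Audio cue: reward persons near the estimated sound direction.
        if abs(p.bearing_deg - sound_bearing_deg) < 15.0:  # tolerance (assumed)
            score += w_audio
        if p.is_speaking:
            score += 0.5  # small bonus for an active speaker (assumed)
        scores[p.pid] = score
    return scores

def select_focus(persons, sound_bearing_deg):
    """Shift attention to the person with the highest fused score."""
    scores = attention_scores(persons, sound_bearing_deg)
    return max(scores, key=scores.get)

if __name__ == "__main__":
    people = [Person(0, -30.0, False), Person(1, 10.0, True)]
    # A sound source localized at roughly 12 degrees draws attention to person 1.
    print(select_focus(people, sound_bearing_deg=12.0))  # -> 1
```

In a running system such a score would be recomputed as tracks and sound estimates update, so attention shifts whenever a different person becomes the most salient; the weights shown here are placeholders.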