Human-Like Robot Action Policy Through Game-Theoretic Intent Inference for Human-Robot Collaboration


Bibliographic Details
Published in: IEEE Transactions on Robotics, Vol. 41, pp. 5411–5430
Main Authors: Sheng, Yubo; Wang, Yiwei; Cheng, Haoyuan; Zhao, Huan; Ding, Han
Format: Journal Article
Language: English
Published: IEEE, 2025
ISSN: 1552-3098; 1941-0468
DOI: 10.1109/TRO.2025.3603556

Summary: Harmonious human-robot collaboration requires the robot to behave like a human partner, which raises the critical question of which factors enable the robot to do so. This article proposes a series of policies, based on empathetic and nonempathetic intent inference, proactive and reactive action planning, and ego and nonego action styles, to examine which modules enable robots to exhibit human-like behaviors. Two series of experiments are conducted with human subjects to test the performance of the proposed controllers. In Experiment 1, the participant must identify whether the collaborating partner is a human, similar to a Turing test. The classification results empirically verify that the designed empathetic proactive policies enable the robot to exhibit human-like behaviors. Experiment 2 indicates that the proposed policy can be applied to complex collaborative tasks, and its results are consistent with the findings of Experiment 1. Based on the empirical evidence from both experiments, the authors conclude that empathetic and proactive policies are essential for enabling robots to perform human-like actions.