Human-Like Robot Action Policy Through Game-Theoretic Intent Inference for Human-Robot Collaboration
| Published in | IEEE Transactions on Robotics, Vol. 41, pp. 5411-5430 |
|---|---|
| Main Authors | |
| Format | Journal Article |
| Language | English |
| Published | IEEE, 2025 |
| ISSN | 1552-3098, 1941-0468 |
| DOI | 10.1109/TRO.2025.3603556 |
| Summary: | Harmonious human-robot collaboration requires the robot to behave like a human partner, which raises the critical question of what factors make the robot do so. This article proposes a series of policies based on empathetic and nonempathetic intent inference, proactive and reactive action planning, and ego and nonego action styles to examine which modules enable robots to exhibit human-like behaviors. Two series of experiments are conducted with human subjects to test the performance of the proposed controllers. In Experiment 1, the participant must identify whether the collaborating partner is a human, similar to a Turing test. The classification results empirically verify that the designed empathetic proactive policies enable the robot to exhibit human-like behaviors. Experiment 2 indicates that the proposed policy can be applied to complex collaborative tasks, and this result is consistent with the findings of Experiment 1. Based on the empirical evidence from the experiments, we believe that empathy and proactive policies are essential elements for enabling robots to perform human-like actions. |
|---|---|