Game-Theoretic Modeling of Human Adaptation in Human-Robot Collaboration
| Published in | 2017 12th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 323-331 |
|---|---|
| Main Authors | |
| Format | Conference Proceeding |
| Language | English |
| Published | New York, NY, USA: ACM, 06.03.2017 |
| Series | ACM Conferences |
| ISBN | 9781450343367; 1450343368 |
| ISSN | 2167-2148 |
| DOI | 10.1145/2909824.3020253 |
Summary: In human-robot teams, humans often start with an inaccurate model of the robot's capabilities. As they interact with the robot, they infer its capabilities and partially adapt to it, i.e., they may change their actions based on the observed outcomes and the robot's actions without replicating the robot's policy. We present a game-theoretic model of human partial adaptation to the robot, in which the human responds to the robot's actions by maximizing a reward function that changes stochastically over time, capturing the evolution of their expectations of the robot's capabilities. The robot can then use this model to decide optimally between taking actions that reveal its capabilities to the human and taking the best action given the information that the human currently has. We prove that, under certain observability assumptions, the optimal policy can be computed efficiently. We demonstrate through a human subject experiment that the proposed model significantly improves human-robot team performance compared to policies that assume complete adaptation of the human to the robot.
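The summary's central trade-off, whether the robot should take an action that reveals its capabilities to the human or the action that is best given the human's current (possibly inaccurate) model, can be illustrated with a toy finite-horizon planner. This is a minimal sketch under loud assumptions: a one-dimensional capability/belief scale, a deterministic belief update, and illustrative reward and cost numbers. None of it is the paper's actual model or implementation.

```python
# Hypothetical sketch (NOT the paper's algorithm): a robot choosing between
# a "reveal" action, which shifts the human's belief toward the robot's true
# capability at an immediate cost, and an "exploit" action, which is best
# under the human's current belief. All quantities are illustrative.

def expected_reward(belief, true_capability):
    """Team reward when the human acts on `belief` while the robot's actual
    capability is `true_capability` (toy model: reward drops with mismatch)."""
    return 1.0 - abs(belief - true_capability)

def plan(belief, true_capability, horizon, reveal_rate=0.5, reveal_cost=0.2):
    """Finite-horizon search over reveal/exploit sequences.

    Returns (total expected reward, action sequence). Revealing pays
    `reveal_cost` now but moves the human's belief a fraction `reveal_rate`
    of the way toward the truth, raising all future rewards.
    """
    if horizon == 0:
        return 0.0, []
    r_now = expected_reward(belief, true_capability)
    # Option 1: exploit -- keep the human's belief unchanged.
    v_exploit, seq_exploit = plan(belief, true_capability, horizon - 1,
                                  reveal_rate, reveal_cost)
    v_exploit += r_now
    # Option 2: reveal -- pay a cost now, shift belief toward the truth.
    new_belief = belief + reveal_rate * (true_capability - belief)
    v_reveal, seq_reveal = plan(new_belief, true_capability, horizon - 1,
                                reveal_rate, reveal_cost)
    v_reveal += r_now - reveal_cost
    if v_reveal > v_exploit:
        return v_reveal, ["reveal"] + seq_reveal
    return v_exploit, ["exploit"] + seq_exploit

value, actions = plan(belief=0.2, true_capability=0.9, horizon=4)
print(actions)  # ['reveal', 'reveal', 'exploit', 'exploit']
```

With these numbers the planner reveals early and exploits later, mirroring the qualitative behavior the abstract describes; the paper itself proves that, under certain observability assumptions, the optimal policy can be computed efficiently rather than by exhaustive search.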