Game-Theoretic Modeling of Human Adaptation in Human-Robot Collaboration

Bibliographic Details
Published in: 2017 12th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 323-331
Main Authors: Nikolaidis, Stefanos; Nath, Swaprava; Procaccia, Ariel D.; Srinivasa, Siddhartha
Format: Conference Proceeding
Language: English
Published: New York, NY, USA: ACM, 06.03.2017
Series: ACM Conferences
ISBN: 9781450343367, 1450343368
ISSN: 2167-2148
DOI: 10.1145/2909824.3020253

More Information
Summary: In human-robot teams, humans often start with an inaccurate model of the robot's capabilities. As they interact with the robot, they infer the robot's capabilities and partially adapt to it, i.e., they may change their actions based on the observed outcomes and the robot's actions, without replicating the robot's policy. We present a game-theoretic model of human partial adaptation to the robot, where the human responds to the robot's actions by maximizing a reward function that changes stochastically over time, capturing the evolution of their expectations of the robot's capabilities. The robot can then use this model to decide optimally between taking actions that reveal its capabilities to the human and taking the best action given the information that the human currently has. We prove that under certain observability assumptions, the optimal policy can be computed efficiently. We demonstrate through a human subject experiment that the proposed model significantly improves human-robot team performance, compared to policies that assume complete adaptation of the human to the robot.
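The reveal-versus-exploit trade-off the summary describes can be sketched as a small finite-horizon dynamic program. The sketch below is a toy illustration, not the paper's algorithm: the two-level expectation state, the action names, the reward numbers, and the learning probability `P_LEARN` are all invented assumptions chosen to make the trade-off visible.

```python
# Toy model (illustrative numbers, not from the paper): the human's
# expectation of the robot's capability is a state e in {0, 1}
# (0 = underestimates, 1 = accurate). The robot either EXPLOITS the
# human's current expectation or REVEALS its capability, which may
# stochastically correct an underestimate. Backward induction gives
# the robot's optimal finite-horizon policy.

P_LEARN = 0.7          # assumed prob. a demonstration corrects an underestimate
REWARD = {             # (expectation, robot action) -> expected team reward
    (0, "exploit"): 4.0,   # human underestimates robot, picks a safe joint plan
    (1, "exploit"): 10.0,  # human trusts robot, team executes the best plan
    (0, "reveal"): 2.0,    # demonstrating is costly in the short term
    (1, "reveal"): 2.0,
}

def solve(horizon):
    """Finite-horizon backward induction over the human-expectation state."""
    V = {0: 0.0, 1: 0.0}          # value-to-go with 0 steps remaining
    policy = []                   # policy[t][e] = best action at step t of the horizon
    for _ in range(horizon):
        newV, pi = {}, {}
        for e in (0, 1):
            q = {}
            # Exploiting leaves the human's expectation unchanged.
            q["exploit"] = REWARD[(e, "exploit")] + V[e]
            # Revealing may correct an underestimate (e: 0 -> 1 w.p. P_LEARN).
            nxt = P_LEARN * V[1] + (1 - P_LEARN) * V[0] if e == 0 else V[1]
            q["reveal"] = REWARD[(e, "reveal")] + nxt
            pi[e] = max(q, key=q.get)
            newV[e] = q[pi[e]]
        V, policy = newV, [pi] + policy
    return V, policy

V, policy = solve(horizon=5)
print(policy[0][0], policy[0][1])  # -> reveal exploit
```

With a long enough horizon, the optimal policy first reveals to an underestimating human (accepting a short-term cost) and exploits once the expectation is corrected; on the last step, revealing can no longer pay off, so the robot exploits regardless of the state. This mirrors, in miniature, the paper's point that a robot reasoning about partial human adaptation should trade immediate reward for information it conveys to the human.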