Partner-Aware Algorithms in Decentralized Cooperative Bandit Teams
| Main Authors | |
|---|---|
| Format | Journal Article |
| Language | English |
| Published | 02.10.2021 |
| Subjects | |
| Online Access | Get full text |
| DOI | 10.48550/arxiv.2110.00751 |
| Summary: | When humans collaborate with each other, they often make decisions by observing others and considering the consequences that their actions may have on the entire team, instead of greedily doing what is best for just themselves. We would like our AI agents to collaborate effectively in a similar way by capturing a model of their partners. In this work, we propose and analyze a decentralized Multi-Armed Bandit (MAB) problem with coupled rewards as an abstraction of more general multi-agent collaboration. We demonstrate that naïve extensions of single-agent optimal MAB algorithms fail when applied to decentralized bandit teams. Instead, we propose a Partner-Aware strategy for joint sequential decision-making that extends the well-known single-agent Upper Confidence Bound algorithm. We analytically show that our proposed strategy achieves logarithmic regret, and provide extensive experiments involving human-AI and human-robot collaboration to validate our theoretical findings. Our results show that the proposed partner-aware strategy outperforms other known methods, and our human subject studies suggest humans prefer to collaborate with AI agents implementing our partner-aware strategy. |
|---|---|
| Bibliography: | AIHRI/2021/46 | 
| DOI: | 10.48550/arxiv.2110.00751 |
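The abstract's partner-aware strategy extends the classic single-agent Upper Confidence Bound algorithm. The record does not specify the partner-aware variant itself, so below is a minimal sketch of the standard UCB1 baseline that the paper builds on; the arm count, reward means, and `pull_arm` callback are illustrative assumptions, not details from the paper.

```python
import math
import random

def ucb1(pull_arm, n_arms, horizon):
    """Single-agent UCB1: pull each arm once, then pick the arm
    maximizing empirical mean + sqrt(2 ln t / n_i)."""
    counts = [0] * n_arms   # times each arm has been pulled
    sums = [0.0] * n_arms   # cumulative reward per arm
    history = []
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1     # initialization: try every arm once
        else:
            arm = max(
                range(n_arms),
                key=lambda i: sums[i] / counts[i]
                + math.sqrt(2.0 * math.log(t) / counts[i]),
            )
        r = pull_arm(arm)
        counts[arm] += 1
        sums[arm] += r
        history.append(arm)
    return history

# Hypothetical Bernoulli bandit with arm means 0.2, 0.5, 0.8.
random.seed(0)
means = [0.2, 0.5, 0.8]
plays = ucb1(lambda i: 1.0 if random.random() < means[i] else 0.0,
             n_arms=3, horizon=2000)
```

Because UCB1 pulls each suboptimal arm only O(log T) times, its cumulative regret grows logarithmically, which is the single-agent guarantee the paper's partner-aware strategy is shown to preserve in the decentralized team setting.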