Exploiting Trust for Resilient Hypothesis Testing With Malicious Robots

Bibliographic Details
Published in: IEEE Transactions on Robotics, Vol. 40, pp. 3514–3536
Main Authors: Cavorsi, Matthew; Akgun, Orhan Eren; Yemini, Michal; Goldsmith, Andrea J.; Gil, Stephanie
Format: Journal Article
Language: English
Published: IEEE, 2024
ISSN: 1552-3098, 1941-0468
DOI: 10.1109/TRO.2024.3415235

Summary: In this article, we develop a resilient binary hypothesis testing framework for decision making in adversarial multirobot crowdsensing tasks. This framework exploits stochastic trust observations between robots to arrive at tractable, resilient decision making at a centralized fusion center (FC) even when, first, there exist malicious robots in the network and their number may be larger than the number of legitimate robots, and second, the FC uses one-shot noisy measurements from all robots. We derive two algorithms to achieve this. The first is the two-stage approach (2SA) that estimates the legitimacy of robots based on received trust observations, and provably minimizes the probability of detection error in the worst-case malicious attack. For the 2SA, we assume that the proportion of malicious robots is known but arbitrary. For the case of an unknown proportion of malicious robots, we develop the adversarial generalized likelihood ratio test (A-GLRT) that uses both the reported robot measurements and trust observations to simultaneously estimate the trustworthiness of robots, their reporting strategy, and the correct hypothesis. We exploit particular structures in the problem to show that this approach remains computationally tractable even with unknown problem parameters. We deploy both algorithms in a hardware experiment where a group of robots conducts crowdsensing of traffic conditions subject to a Sybil attack on a mock-up road network. We extract the trust observations for each robot from communication signals, which provide statistical information on the uniqueness of the sender. We show that even when the malicious robots are in the majority, the FC can reduce the probability of detection error to 30.5% and 29% for the 2SA and the A-GLRT algorithms, respectively.
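The two-stage idea described in the summary can be illustrated with a minimal simulation: rank robots by a trust score, keep the presumed-legitimate subset (the malicious proportion is assumed known, as in the 2SA), and run a likelihood ratio test on the kept robots' one-shot binary reports. All concrete choices below (Beta-distributed trust scores, the report probabilities `p0`/`p1`, the attacker strategy) are illustrative assumptions for the sketch, not the paper's actual model:

```python
import math
import random

random.seed(7)

# Hypothetical setup; all parameters are illustrative, not from the paper.
N, n_mal = 20, 12            # 20 robots, malicious majority of 12
p0, p1 = 0.3, 0.7            # P(report = 1 | H0) and P(report = 1 | H1)
true_h = 1                   # ground-truth hypothesis is H1

robots = []
for i in range(N):
    legit = i >= n_mal
    # Legitimate robots report according to the true hypothesis; malicious
    # robots report as if the opposite hypothesis were true.
    p = (p1 if true_h else p0) if legit else (p0 if true_h else p1)
    report = 1 if random.random() < p else 0
    # Stochastic trust observation: legitimate robots score higher on average.
    trust = random.betavariate(6, 2) if legit else random.betavariate(2, 6)
    robots.append((trust, report))

# Stage 1 (2SA-style): the malicious proportion is known, so keep only the
# n_legit most-trusted robots.
n_legit = N - n_mal
kept = sorted(robots, reverse=True)[:n_legit]

# Stage 2: log-likelihood ratio test over the kept robots' binary reports.
llr = sum(
    math.log(p1 / p0) if r else math.log((1 - p1) / (1 - p0))
    for _, r in kept
)
decision = 1 if llr > 0 else 0
print("FC decision:", decision)
```

Because the trust distributions of legitimate and malicious robots are well separated here, the trusted subset is dominated by legitimate robots and the test tends to recover the true hypothesis despite the malicious majority; the actual 2SA additionally provides worst-case optimality guarantees that this toy simulation does not.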