The (Im)perfect Automation Schema: Who Is Trusted More, Automated or Human Decision Support?
| Published in | Human Factors, Vol. 66, No. 8, pp. 1995–2007 |
|---|---|
| Main Authors | |
| Format | Journal Article |
| Language | English |
| Published | Los Angeles, CA: SAGE Publications (Human Factors and Ergonomics Society), 01.08.2024 |
| ISSN | 0018-7208; 1547-8181 |
| DOI | 10.1177/00187208231197347 |
Summary:

Objective
This study’s purpose was to better understand the dynamics of trust attitude and behavior in human-agent interaction.
Background
Whereas past research provided evidence for a perfect automation schema, more recent research has provided contradictory evidence.
Method
To disentangle these conflicting findings, we conducted an online experiment using a simulated medical X-ray task. We manipulated the framing of the support agent (i.e., artificial intelligence (AI) versus human expert versus human novice) between subjects and failure experience (i.e., perfect support, imperfect support, back-to-perfect support) within subjects. Trust attitude and behavior, as well as perceived reliability, served as dependent variables.
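For readers who want the design laid out concretely, below is a minimal sketch of the 3 (agent framing, between-subjects) × 3 (failure-experience phase, within-subjects) mixed structure described above. The condition labels, ordering, and function names are illustrative assumptions, not taken from the study materials.

```python
# Hypothetical sketch of the mixed design described in the Method section.
# Condition labels and helper names are illustrative, not from the study.

AGENT_FRAMINGS = ["AI", "human expert", "human novice"]       # between-subjects factor
FAILURE_PHASES = ["perfect", "imperfect", "back-to-perfect"]  # within-subjects factor


def participant_schedule(framing: str) -> list[tuple[str, str]]:
    """Each participant keeps one agent framing but experiences all three
    failure phases (trust formation, dissolution, restoration) in order."""
    if framing not in AGENT_FRAMINGS:
        raise ValueError(f"unknown framing: {framing}")
    return [(framing, phase) for phase in FAILURE_PHASES]


if __name__ == "__main__":
    # One schedule per between-subjects group: 3 x 3 = 9 cells in total.
    for framing in AGENT_FRAMINGS:
        print(participant_schedule(framing))
```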
Results
Trust attitude and perceived reliability were higher for the human expert than for the AI, and higher for the AI than for the human novice. Moreover, the results showed the typical pattern of trust formation, dissolution, and restoration for trust attitude and behavior as well as perceived reliability. Forgiveness after failure experience did not differ between agents.
Conclusion
The results strongly imply the existence of an imperfect automation schema. This illustrates the need to consider agent expertise for human-agent interaction.
Application
When replacing human experts with AI as support agents, the challenge of lower trust attitude towards the novel agent might arise.