Challenges for Responsible AI Design and Workflow Integration in Healthcare: A Case Study of Automatic Feeding Tube Qualification in Radiology

Bibliographic Details
Published in: ACM Transactions on Computer-Human Interaction, Vol. 32, no. 4, pp. 1-61
Main Authors: Thieme, Anja; Rajamohan, Abhijith; Cooper, Benjamin; Groombridge, Heather; Simister, Robert; Wong, Barney; Woznitza, Nicholas; Pinnock, Mark A.; Wetscherek, Maria T.; Morrison, Cecily; Richardson, Hannah; Pérez-García, Fernando; Hyland, Stephanie L.; Bannur, Shruthi; Coelho de Castro, Daniel; Bouzid, Kenza; Schwaighofer, Anton; Ranjit, Mercy P.; Sharma, Harshita; Lungren, Matthew P.; Oktay, Ozan; Alvarez-Valle, Javier; Nori, Aditya; Harris, Steve; Jacob, Joseph
Format: Journal Article
Language: English
Published: New York, NY: ACM, 19.08.2025
ISSN: 1073-0516, 1557-7325
DOI: 10.1145/3716500

Summary: Nasogastric tubes (NGTs) are feeding tubes inserted through the nose into the stomach to deliver nutrition or medication. If placed incorrectly, they can cause serious harm, or even death, to patients. Recent AI developments demonstrate the feasibility of robustly detecting NGT placement from chest X-ray images, reducing the risk that sub-optimally or critically placed NGTs are missed or detected late, but gaps remain in integrating such capabilities into clinical practice. In this study, we present a human-centered approach to the problem and describe insights derived from contextual inquiry and in-depth interviews with 15 clinical stakeholders. The interviews helped us understand challenges in existing workflows and how best to align technical capabilities with user needs and expectations. We discovered the tradeoffs and complexities that need consideration when choosing suitable workflow stages, target users, and design configurations for different AI proposals. We explored how to balance AI benefits and risks for healthcare staff and patients within broader organizational, technical, and medical-legal constraints. We also identified data issues related to edge cases and data biases that affect model training and evaluation; how data documentation practices influence data preparation and labeling; and how to measure relevant AI outcomes reliably in future evaluations. We discuss how our work informs the design and development of AI applications that are clinically useful, ethical, and acceptable in real-world healthcare services.