Algorithmic fairness and bias mitigation for clinical machine learning with deep reinforcement learning

Bibliographic Details
Published in: Nature Machine Intelligence, Vol. 5, No. 8, pp. 884–894
Main Authors: Yang, Jenny; Soltan, Andrew A. S.; Eyre, David W.; Clifton, David A.
Format: Journal Article
Language: English
Published: London, Nature Publishing Group UK, 01.08.2023
ISSN: 2522-5839
DOI: 10.1038/s42256-023-00697-3

More Information
Summary: As models based on machine learning continue to be developed for healthcare applications, greater effort is needed to ensure that these technologies do not reflect or exacerbate any unwanted or discriminatory biases that may be present in the data. Here we introduce a reinforcement learning framework capable of mitigating biases that may have been acquired during data collection. In particular, we evaluated our model for the task of rapidly predicting COVID-19 for patients presenting to hospital emergency departments and aimed to mitigate any site (hospital)-specific and ethnicity-based biases present in the data. Using a specialized reward function and training procedure, we show that our method achieves clinically effective screening performances, while significantly improving outcome fairness compared with current benchmarks and state-of-the-art machine learning methods. We performed external validation across three independent hospitals, and additionally tested our method on a patient intensive care unit discharge status task, demonstrating model generalizability. The tendency of machine learning algorithms to learn biases from training data calls for methods to mitigate unfairness before deployment to healthcare and other applications. Yang et al. propose a reinforcement-learning-based method for algorithmic bias mitigation and demonstrate it on COVID-19 screening and patient discharge prediction tasks.
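The abstract mentions a "specialized reward function" that trades off predictive accuracy against group fairness, but does not state its form. The sketch below is a hypothetical illustration of that general idea, not the authors' actual formulation: the penalty weight `lam`, the use of a true-positive-rate gap as the parity measure, and the group labels are all illustrative assumptions.

```python
# Hypothetical sketch of a fairness-penalized reward of the kind the
# abstract alludes to. This is NOT the paper's reward function; the
# parity metric (TPR gap), weight, and group names are assumptions.

def fairness_penalized_reward(correct: bool,
                              group_tpr: dict,
                              lam: float = 0.5) -> float:
    """Reward a correct prediction, then subtract a penalty
    proportional to the true-positive-rate gap across groups
    (e.g. hospital sites or ethnicity categories)."""
    base = 1.0 if correct else -1.0
    # Gap between the best- and worst-served groups; zero when parity holds.
    gap = max(group_tpr.values()) - min(group_tpr.values())
    return base - lam * gap

# Example: a correct prediction while site A is better served than site B,
# so the agent's reward is reduced relative to a perfectly fair state.
r = fairness_penalized_reward(True, {"site_A": 0.90, "site_B": 0.80})
```

Under this kind of shaping, an agent maximizing expected reward is pushed toward policies whose per-group error rates converge, rather than toward raw accuracy alone.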