A multiple testing framework for diagnostic accuracy studies with co‐primary endpoints
| Published in | Statistics in Medicine Vol. 41; no. 5; pp. 891-909 |
|---|---|
| Format | Journal Article |
| Language | English |
| Published | England: Wiley Subscription Services, Inc., 28.02.2022 |
| ISSN | 0277-6715, 1097-0258 |
| DOI | 10.1002/sim.9308 |
| Summary: | Major advances have been made regarding the utilization of machine learning techniques for disease diagnosis and prognosis based on complex and high‐dimensional data. Despite all justified enthusiasm, overoptimistic assessments of predictive performance are still common in this area. However, predictive models and medical devices based on such models should undergo a thorough evaluation before being implemented in clinical practice. In this work, we propose a multiple testing framework for (comparative) phase III diagnostic accuracy studies with sensitivity and specificity as co‐primary endpoints. Our approach challenges the frequent recommendation to strictly separate model selection and evaluation, that is, to assess only a single diagnostic model in the evaluation study. We show that our parametric simultaneous test procedure asymptotically allows strong control of the family‐wise error rate. A multiplicity correction is also available for point and interval estimates. Moreover, we demonstrate in an extensive simulation study that our multiple testing strategy on average leads to a better final diagnostic model and increased statistical power. To plan such studies, we propose a Bayesian approach to determine the optimal number of models to evaluate simultaneously. For this purpose, our algorithm optimizes the expected final model performance given previous (hold‐out) data from the model development phase. We conclude that an assessment of multiple promising diagnostic models in the same evaluation study has several advantages when suitable adjustments for multiple comparisons are employed. |
|---|---|
| Bibliography: | Funding information: Deutsche Forschungsgemeinschaft, 281474342/GRK2224/1 |
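To make the testing strategy in the summary concrete, here is a minimal, hypothetical Python sketch (not the authors' published code) of a maxT-type simultaneous test for K candidate classifiers with sensitivity and specificity as co-primary endpoints: each model's two Wald statistics are compared against a common critical value taken from the equicoordinate quantile of a multivariate normal distribution with estimated correlation, and a model passes only if both statistics clear it. All data, the benchmarks se0 and sp0, and the Monte Carlo approximation of the critical value are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# --- simulated evaluation data: K models, n1 diseased, n0 healthy subjects ---
K, n1, n0 = 3, 200, 300
latent1 = rng.normal(2.0, 1.0, n1)   # shared latent score, diseased
latent0 = rng.normal(0.0, 1.0, n0)   # shared latent score, healthy
cuts = np.array([0.8, 1.0, 1.2])     # model-specific decision thresholds
# binary predictions per model (columns), correlated through the latent score
pred1 = ((latent1[:, None] + rng.normal(0, 0.5, (n1, K))) > cuts).astype(float)
pred0 = ((latent0[:, None] + rng.normal(0, 0.5, (n0, K))) > cuts).astype(float)

se0, sp0, alpha = 0.75, 0.75, 0.025  # performance benchmarks, one-sided level

se_hat = pred1.mean(axis=0)          # estimated sensitivities
sp_hat = 1.0 - pred0.mean(axis=0)    # estimated specificities
z_se = (se_hat - se0) / np.sqrt(se_hat * (1 - se_hat) / n1)
z_sp = (sp_hat - sp0) / np.sqrt(sp_hat * (1 - sp_hat) / n0)

# sensitivity and specificity are estimated on disjoint subsamples, so the
# joint correlation matrix of the 2K statistics is block diagonal
R_se = np.corrcoef(pred1, rowvar=False)
R_sp = np.corrcoef(pred0, rowvar=False)
R = np.block([[R_se, np.zeros((K, K))], [np.zeros((K, K)), R_sp]])

# equicoordinate critical value c with P(max_j Z_j > c) = alpha under N(0, R),
# approximated by Monte Carlo
draws = rng.multivariate_normal(np.zeros(2 * K), R, size=100_000)
c = np.quantile(draws.max(axis=1), 1 - alpha)

# co-primary endpoints: model k is declared accurate only if BOTH of its
# statistics exceed the common critical value
passed = (z_se > c) & (z_sp > c)
for k in range(K):
    print(f"model {k}: se={se_hat[k]:.3f} sp={sp_hat[k]:.3f} "
          f"z_se={z_se[k]:.2f} z_sp={z_sp[k]:.2f} pass={bool(passed[k])}")
print(f"common critical value c = {c:.3f}")
```

The same construction yields multiplicity-adjusted simultaneous confidence intervals by replacing the usual normal quantile with c, which is one way to read the summary's remark that a multiplicity correction is also available for point and interval estimates.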
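The summary's Bayesian planning step can be illustrated in the same hedged spirit. The sketch below (an assumption-laden toy, not the paper's algorithm) chooses the number m of hold-out candidates to carry into the evaluation study by simulating, from Beta posteriors fitted to hypothetical hold-out accuracies, the expected true performance of the model that would be selected after evaluation. The hold-out accuracies, sample sizes, and the use of a single accuracy endpoint instead of co-primary sensitivity and specificity are all simplifications.

```python
import numpy as np

rng = np.random.default_rng(2)

# hypothetical hold-out accuracies of the development-phase candidates,
# each summarized by a Beta posterior from an assumed 100-subject hold-out set
holdout_acc = np.array([0.84, 0.83, 0.82, 0.80, 0.78])
a = holdout_acc * 100 + 1            # posterior successes + 1
b = (1 - holdout_acc) * 100 + 1      # posterior failures + 1

n_eval, n_sim = 400, 20_000          # evaluation-study size, simulation runs
best_m, best_value = 1, -np.inf
for m in range(1, len(holdout_acc) + 1):
    # draw plausible true accuracies for the m best hold-out candidates
    theta = rng.beta(a[:m], b[:m], size=(n_sim, m))
    # simulate evaluation-study estimates and pick the apparent winner
    est = rng.binomial(n_eval, theta) / n_eval
    winner = est.argmax(axis=1)
    # expected TRUE accuracy of the finally selected model for this m
    value = theta[np.arange(n_sim), winner].mean()
    print(f"m={m}: expected final accuracy ~ {value:.4f}")
    if value > best_value:
        best_m, best_value = m, value
print(f"optimal number of models to evaluate: {best_m}")
```

A fuller version would also fold in the multiplicity-adjusted critical value from the test sketch above, since evaluating more models raises the bar that each must clear; the trade-off between that penalty and the chance of carrying forward the truly best model is what the paper's planning algorithm optimizes.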