Artificial Intelligence Using Open Source BI-RADS Data Exemplifying Potential Future Use
| Published in | Journal of the American College of Radiology, Vol. 16, no. 1, pp. 64-72 |
|---|---|
| Main Author | |
| Format | Journal Article |
| Language | English |
| Published | United States: Elsevier Inc, 01.01.2019 |
| Subjects | |
| ISSN | 1546-1440, 1558-349X |
| DOI | 10.1016/j.jacr.2018.09.040 |
| Summary | With much hype about artificial intelligence (AI) rendering radiologists redundant, a simple radiologist-augmented AI workflow is evaluated; the premise is that incorporating a radiologist’s opinion into an AI algorithm would yield better accuracy than training the algorithm on imaging parameters alone. Open-source BI-RADS data sets were evaluated to test whether adding the radiologist’s opinion (in the form of the BI-RADS classification) to image parameters improved the accuracy of histology prediction by three machine learning algorithms compared with the same algorithms using image parameters alone. BI-RADS data sets were obtained from the University of California, Irvine Machine Learning Repository (data set 1) and the Digital Database for Screening Mammography repository (data set 2); three machine learning algorithms were trained using 10-fold cross-validation. Two sets of models were trained: M1, using lesion shape, margin, density, and patient age for data set 1 and image texture parameters for data set 2, and M2, using the same image parameters plus the BI-RADS classification provided by radiologists. The area under the receiver operating characteristic curve (AUC) and the Gini coefficient of M1 and M2 were compared on the validation data set. The models using the radiologist-provided BI-RADS classification performed significantly better than those that did not (P < .0001). AI and radiologists working together can achieve better results, aiding case-based decision making; further evaluation of the metrics involved in predictor handling by AI algorithms will provide newer insights into imaging. |
|---|---|
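The article itself does not include code. As a minimal sketch of the M1-versus-M2 comparison described in the summary, the Python snippet below trains one classifier on image parameters alone and another that also receives the radiologist's BI-RADS category, scoring both with 10-fold cross-validated AUC and the Gini coefficient derived from it (Gini = 2 × AUC - 1). The CSV file name, column names, and the choice of a random forest are illustrative assumptions; the paper's three algorithms and exact preprocessing may differ.

```python
# Minimal sketch (not the authors' code): compare a model trained on image
# parameters alone (M1) with one that also receives the radiologist's
# BI-RADS category (M2), using 10-fold cross-validation.
# File name and column names are hypothetical placeholders for the
# UCI mammographic mass data (data set 1 in the article).
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

df = pd.read_csv("mammographic_masses.csv").dropna()  # hypothetical file

X_m1 = df[["age", "shape", "margin", "density"]]             # image parameters + age (M1)
X_m2 = df[["age", "shape", "margin", "density", "birads"]]   # + radiologist's BI-RADS (M2)
y = df["severity"]                                           # 0 = benign, 1 = malignant

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)

for name, X in [("M1 (image parameters only)", X_m1),
                ("M2 (image parameters + BI-RADS)", X_m2)]:
    auc = cross_val_score(RandomForestClassifier(random_state=0), X, y,
                          cv=cv, scoring="roc_auc").mean()
    gini = 2 * auc - 1  # Gini coefficient derived from AUC
    print(f"{name}: AUC = {auc:.3f}, Gini = {gini:.3f}")
```

Scoring both feature sets on identical folds isolates the incremental value of the radiologist's assessment, which is the comparison the study reports as significant.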