Automatic assessment of mammographic density using a deep transfer learning method

Bibliographic Details
Published in: Journal of Medical Imaging (Bellingham, Wash.), Vol. 10, no. 2, p. 024502
Main Authors: Squires, Steven; Harkness, Elaine; Evans, Dafydd Gareth; Astley, Susan M.
Format: Journal Article
Language: English
Published: United States: Society of Photo-Optical Instrumentation Engineers (SPIE), 01.03.2023
ISSN: 2329-4302, 2329-4310
DOI: 10.1117/1.JMI.10.2.024502

Summary: Mammographic breast density is one of the strongest risk factors for breast cancer. Density assessed by radiologists using visual analogue scales has been shown to provide better risk predictions than other methods. Our purpose is to build automated models using deep learning, trained on radiologist scores, to make accurate and consistent predictions. We used a dataset of almost 160,000 mammograms, each with two independent density scores made by expert medical practitioners. We used two pretrained deep networks and adapted them to produce feature vectors, which were then used for both linear and nonlinear regression to make density predictions. We also simulated an "optimal method," which allowed us to compare the quality of our results with a simulated upper bound on performance. Our deep learning method produced estimates with a root mean squared error (RMSE) of . The model estimates of cancer risk perform at a similar level to human experts, within uncertainty bounds. We compared different model variants and demonstrated the high consistency of the model predictions. Our modeled "optimal method" produced image predictions with an RMSE of between 7.98 and 8.90 for craniocaudal images. We demonstrated a deep learning framework based on a transfer learning approach to make density estimates from radiologists' visual scores. Our approach requires modest computational resources and has the potential to be trained with limited quantities of data.
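The pipeline described in the summary — feature vectors from a pretrained network, regressed onto radiologist visual-analogue density scores, evaluated by RMSE against held-out scores and against inter-reader variation — can be sketched as follows. This is a minimal illustration, not the authors' implementation: random vectors stand in for the (unnamed here) pretrained-network features, scores are synthetic, and plain least squares stands in for the paper's linear regression stage.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for feature vectors extracted by a pretrained deep network
# (one 64-dim vector per mammogram; purely synthetic here).
n_images, n_features = 1000, 64
features = rng.normal(size=(n_images, n_features))

# Synthetic "true" density signal, plus two independent noisy reader
# scores on a 0-100 visual analogue scale (mirroring the dataset's two
# independent expert scores per image).
w_true = rng.normal(size=n_features)
latent = features @ w_true * 1.5 + 50.0
reader1 = np.clip(latent + rng.normal(scale=4.0, size=n_images), 0, 100)
reader2 = np.clip(latent + rng.normal(scale=4.0, size=n_images), 0, 100)

# Linear regression (with intercept) from features to reader 1's scores,
# fitted on the first 800 images and evaluated on the remaining 200.
n_train = 800
X = np.hstack([features, np.ones((n_images, 1))])
w, *_ = np.linalg.lstsq(X[:n_train], reader1[:n_train], rcond=None)
pred = np.clip(X[n_train:] @ w, 0, 100)

# RMSE of model predictions on held-out scores, and RMSE between the two
# readers as a rough picture of inter-observer variation.
rmse = np.sqrt(np.mean((pred - reader1[n_train:]) ** 2))
inter_rmse = np.sqrt(np.mean((reader1 - reader2) ** 2))
print(f"model RMSE: {rmse:.2f}, inter-reader RMSE: {inter_rmse:.2f}")
```

In this synthetic setup the model RMSE lands near the reader noise level, illustrating the abstract's point that performance is best judged relative to the variability between human scorers rather than against an exact target.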