Incorporating Radiologist Knowledge Into MRI Quality Metrics for Machine Learning Using Rank‐Based Ratings
| Published in | Journal of Magnetic Resonance Imaging, Vol. 61, no. 6, pp. 2572-2584 |
|---|---|
| Main Authors | , , , , , , , , , |
| Format | Journal Article |
| Language | English |
| Published | United States: Wiley Subscription Services, Inc (John Wiley & Sons, Inc), 01.06.2025 |
| ISSN | 1053-1807, 1522-2586 |
| DOI | 10.1002/jmri.29672 |
Summary:

Background: Deep learning (DL) often requires an image quality metric; however, widely used metrics are not designed for medical images.
Purpose: To develop an image quality metric specific to MRI using radiologists' image rankings and DL models.
Study Type: Retrospective.
Population: A total of 19,344 rankings on 2916 unique image pairs from the NYU fastMRI Initiative neuro database were used to train the neural-network-based image quality metrics, with an 80%/20% training/validation split and fivefold cross-validation.
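The abstract does not state how the folds were organized; the minimal sketch below shows one way a fivefold split also yields the stated 80%/20% training/validation proportion per fold. The index array is a placeholder, not the study's data, and grouping pairs by patient to avoid leakage would be a further, unstated consideration.

```python
import numpy as np
from sklearn.model_selection import KFold

# Illustrative stand-in for the 2916 ranked image pairs (indices only).
pair_indices = np.arange(2916)

# Each of the five folds trains on 80% of the pairs and validates on the
# remaining 20%, matching the split described above.
kfold = KFold(n_splits=5, shuffle=True, random_state=0)
for fold, (train_idx, val_idx) in enumerate(kfold.split(pair_indices)):
    print(f"fold {fold}: {len(train_idx)} training pairs, {len(val_idx)} validation pairs")
```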
Field Strength/Sequence: 1.5 T and 3 T; T1, T1 postcontrast, T2, and FLuid Attenuated Inversion Recovery (FLAIR).
Assessment: Synthetically corrupted image pairs were ranked by radiologists (N = 7), with a subset also scoring images on a Likert scale (N = 2). DL models were trained to match the rankings using two architectures (EfficientNet and IQ-Net), with and without reference image subtraction, and were compared to rankings based on mean squared error (MSE) and structural similarity (SSIM). The image-quality-assessing DL models were then evaluated as alternatives to MSE and SSIM as optimization targets for DL denoising and reconstruction.
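The abstract does not specify the loss used to fit the networks to the pairwise rankings. A standard choice for learning a scalar score from pairwise preferences is a Bradley-Terry/RankNet-style cross-entropy on the score difference; the sketch below assumes that formulation, and `iq_net`, the reference-subtraction input, and the data layout are illustrative rather than the paper's implementation.

```python
import torch
import torch.nn.functional as F

def pairwise_ranking_loss(score_a: torch.Tensor,
                          score_b: torch.Tensor,
                          preference: torch.Tensor) -> torch.Tensor:
    """RankNet/Bradley-Terry style loss for learning a scalar quality score.

    score_a, score_b : predicted quality scores for images A and B, shape (batch,)
    preference       : 1.0 where radiologists ranked A above B, else 0.0
    """
    logits = score_a - score_b  # score difference models the preference probability
    return F.binary_cross_entropy_with_logits(logits, preference)

def train_step(iq_net, optimizer, img_a, img_b, reference, preference):
    """One hypothetical training step; iq_net maps an image to a single scalar score."""
    optimizer.zero_grad()
    # "Reference image subtraction" variant: score the error images instead.
    score_a = iq_net(img_a - reference).squeeze(-1)
    score_b = iq_net(img_b - reference).squeeze(-1)
    loss = pairwise_ranking_loss(score_a, score_b, preference)
    loss.backward()
    optimizer.step()
    return loss.item()
```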
Statistical Tests: Radiologists' agreement was assessed by a percentage metric and quadratic weighted Cohen's kappa. Ranking accuracies were compared using repeated-measures analysis of variance. Reconstruction models trained with the IQ-Net score, MSE, and SSIM were compared by paired t-test. P < 0.05 was considered significant.
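Quadratic weighted Cohen's kappa and the paired t-test are available in standard Python libraries; the ratings and per-case scores below are illustrative, not the study's data.

```python
from sklearn.metrics import cohen_kappa_score
from scipy.stats import ttest_rel

# Illustrative Likert-style scores from two readers (not the study's data).
reader_1 = [1, 2, 2, 3, 4, 4, 5, 3, 2, 1]
reader_2 = [1, 2, 3, 3, 4, 5, 5, 2, 2, 1]

# Quadratic weighting penalizes large disagreements more heavily than near-misses.
kappa = cohen_kappa_score(reader_1, reader_2, weights="quadratic")
print(f"quadratic weighted kappa: {kappa:.3f}")

# Paired t-test comparing per-case quality scores of two reconstruction models.
model_a = [0.91, 0.88, 0.93, 0.90, 0.87]
model_b = [0.89, 0.87, 0.94, 0.86, 0.85]
t_stat, p_value = ttest_rel(model_a, model_b)
print(f"paired t-test: t = {t_stat:.2f}, p = {p_value:.3f}")
```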
Results: Compared to direct Likert scoring, ranking produced a higher level of agreement between radiologists (70.4% vs. 25%). Image ranking was subjective, with a high level of intraobserver agreement and lower interobserver agreement. IQ-Net and EfficientNet accurately predicted rankings with a reference image. However, EfficientNet produced images with artifacts and high MSE when used in denoising tasks, while IQ-Net-optimized networks performed well for both denoising and reconstruction tasks.
Data Conclusion: Image quality networks can be trained from image rankings and used to optimize DL tasks.
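As a sketch of how a learned quality network might replace MSE or SSIM as an optimization target, the snippet below freezes a quality model and uses its negated score as the training loss for a denoiser. Here `iq_net`, `denoiser`, the reference-subtraction input, and the higher-is-better sign convention are assumptions, not details taken from the paper.

```python
import torch

def iq_guided_loss(iq_net: torch.nn.Module,
                   prediction: torch.Tensor,
                   reference: torch.Tensor) -> torch.Tensor:
    """Negative predicted quality of the output, used as the training objective.

    Assumes a higher iq_net output means better perceived quality; the paper's
    actual inputs and sign convention may differ.
    """
    score = iq_net(prediction - reference)  # reference-subtraction variant
    return -score.mean()                    # minimizing this maximizes predicted quality

def train_denoiser_step(denoiser, iq_net, optimizer, noisy, clean):
    """One hypothetical optimization step for a denoising network."""
    iq_net.eval()                            # the quality metric stays fixed
    for p in iq_net.parameters():
        p.requires_grad_(False)
    optimizer.zero_grad()
    denoised = denoiser(noisy)
    loss = iq_guided_loss(iq_net, denoised, clean)
    loss.backward()                          # gradients still flow through the frozen metric
    optimizer.step()
    return loss.item()
```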
Level of Evidence: 3. Technical Efficacy: Stage 1.