Deep learning for chest radiograph diagnosis: A retrospective comparison of the CheXNeXt algorithm to practicing radiologists

Bibliographic Details
Published in PLoS Medicine, Vol. 15, No. 11, p. e1002686
Main Authors Rajpurkar, Pranav; Irvin, Jeremy; Ball, Robyn L.; Zhu, Kaylie; Yang, Brandon; Mehta, Hershel; Duan, Tony; Ding, Daisy; Bagul, Aarti; Langlotz, Curtis P.; Patel, Bhavik N.; Yeom, Kristen W.; Shpanskaya, Katie; Blankenberg, Francis G.; Seekins, Jayne; Amrhein, Timothy J.; Mong, David A.; Halabi, Safwan S.; Zucker, Evan J.; Ng, Andrew Y.; Lungren, Matthew P.
Format Journal Article
Language English
Published Public Library of Science (PLoS), United States, 20.11.2018
ISSN 1549-1277
EISSN 1549-1676
DOI 10.1371/journal.pmed.1002686

Abstract

Background: Chest radiograph interpretation is critical for the detection of thoracic diseases, including tuberculosis and lung cancer, which affect millions of people worldwide each year. This time-consuming task typically requires expert radiologists to read the images, leading to fatigue-based diagnostic error and a lack of diagnostic expertise in areas of the world where radiologists are not available. Recently, deep learning approaches have been able to achieve expert-level performance on medical image interpretation tasks, powered by large network architectures and fueled by the emergence of large labeled datasets. The purpose of this study is to investigate the performance of a deep learning algorithm in detecting pathologies in chest radiographs compared with practicing radiologists.

Methods and findings: We developed CheXNeXt, a convolutional neural network that concurrently detects the presence of 14 different pathologies, including pneumonia, pleural effusion, pulmonary masses, and nodules, in frontal-view chest radiographs. CheXNeXt was trained and internally validated on the ChestX-ray8 dataset, with a held-out validation set consisting of 420 images, sampled to contain at least 50 cases of each of the original pathology labels. On this validation set, the majority vote of a panel of 3 board-certified cardiothoracic specialist radiologists served as the reference standard. We compared CheXNeXt's discriminative performance on the validation set to the performance of 9 radiologists using the area under the receiver operating characteristic curve (AUC). The radiologists included 6 board-certified radiologists (average experience 12 years, range 4–28 years) and 3 senior radiology residents, from 3 academic institutions. We found that CheXNeXt achieved radiologist-level performance on 11 pathologies and did not achieve radiologist-level performance on 3 pathologies. The radiologists achieved statistically significantly higher AUCs on cardiomegaly, emphysema, and hiatal hernia, with AUCs of 0.888 (95% confidence interval [CI] 0.863–0.910), 0.911 (95% CI 0.866–0.947), and 0.985 (95% CI 0.974–0.991), respectively, whereas CheXNeXt's AUCs were 0.831 (95% CI 0.790–0.870), 0.704 (95% CI 0.567–0.833), and 0.851 (95% CI 0.785–0.909), respectively. CheXNeXt performed better than the radiologists in detecting atelectasis, with an AUC of 0.862 (95% CI 0.825–0.895), statistically significantly higher than the radiologists' AUC of 0.808 (95% CI 0.777–0.838); there were no statistically significant differences in AUC for the other 10 pathologies. The average time to interpret the 420 images in the validation set was substantially longer for the radiologists (240 minutes) than for CheXNeXt (1.5 minutes). The main limitations of our study are that neither CheXNeXt nor the radiologists were permitted to use patient history or review prior examinations, and that evaluation was limited to a dataset from a single institution.

Conclusions: In this study, we developed and validated a deep learning algorithm that classified clinically important abnormalities in chest radiographs at a performance level comparable to that of practicing radiologists. Once tested prospectively in clinical settings, the algorithm could have the potential to expand patient access to chest radiograph diagnostics.
In their study, Pranav Rajpurkar and colleagues test a deep learning algorithm that classifies clinically important abnormalities in chest radiographs.
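The abstract describes CheXNeXt as a convolutional neural network that concurrently detects 14 pathologies, i.e., a multi-label image classifier with one binary output per pathology. The sketch below illustrates that setup in PyTorch; it is not the authors' released code, and the DenseNet-121 backbone, input size, and optimizer settings are assumptions (DenseNet-121 follows the closely related CheXNet work):

    import torch
    import torch.nn as nn
    from torchvision import models

    NUM_PATHOLOGIES = 14  # pneumonia, pleural effusion, masses, nodules, etc.

    class ChestXrayClassifier(nn.Module):
        """Multi-label classifier: one independent binary output per pathology."""
        def __init__(self, num_labels: int = NUM_PATHOLOGIES):
            super().__init__()
            # Backbone is an assumption: DenseNet-121, as in the related CheXNet work.
            self.backbone = models.densenet121(weights=None)
            in_features = self.backbone.classifier.in_features
            # Swap the 1000-way ImageNet head for a 14-way multi-label head.
            self.backbone.classifier = nn.Linear(in_features, num_labels)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.backbone(x)  # raw logits; apply sigmoid at inference time

    model = ChestXrayClassifier()
    criterion = nn.BCEWithLogitsLoss()  # 14 independent binary detection tasks
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # illustrative settings

    # One illustrative training step on a dummy batch of frontal radiographs
    # (grayscale films replicated to 3 channels to match the backbone's input).
    images = torch.randn(8, 3, 224, 224)
    labels = torch.randint(0, 2, (8, NUM_PATHOLOGIES)).float()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()

BCEWithLogitsLoss treats each pathology as an independent binary task, which is what lets a single network flag several co-occurring findings on the same radiograph.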
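The evaluation summarized above rests on two ingredients: a reference standard formed by majority vote of a 3-radiologist panel, and per-pathology AUCs reported with 95% confidence intervals. Below is a minimal sketch of that recipe, assuming a simple percentile bootstrap over images; the helper names are hypothetical and the paper's exact statistical procedure may differ:

    import numpy as np
    from sklearn.metrics import roc_auc_score

    def reference_standard(panel_labels: np.ndarray) -> np.ndarray:
        """Majority vote across a 3-radiologist panel.
        panel_labels: (n_images, 3) binary array for one pathology."""
        return (panel_labels.sum(axis=1) >= 2).astype(int)

    def auc_with_ci(y_true, y_score, n_boot=2000, alpha=0.05, seed=0):
        """Point AUC plus a percentile-bootstrap (1 - alpha) confidence interval."""
        rng = np.random.default_rng(seed)
        point = roc_auc_score(y_true, y_score)
        n = len(y_true)
        stats = []
        for _ in range(n_boot):
            idx = rng.integers(0, n, n)          # resample images with replacement
            if len(np.unique(y_true[idx])) < 2:  # skip resamples with one class only
                continue
            stats.append(roc_auc_score(y_true[idx], y_score[idx]))
        low, high = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
        return point, (low, high)

    # Illustrative use for one pathology with synthetic data (420 images, as in
    # the validation set); real panel labels and model probabilities would go here.
    panel = np.random.default_rng(1).integers(0, 2, (420, 3))
    y_true = reference_standard(panel)
    y_score = np.random.default_rng(2).random(420)
    print(auc_with_ci(y_true, y_score))

Running the same bootstrap on the radiologists' pooled predictions would yield the paired intervals behind the significance comparisons quoted above.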
Audience Academic
AuthorAffiliation 1 Department of Computer Science, Stanford University, Stanford, California, United States of America
2 Department of Medicine, Quantitative Sciences Unit, Stanford University, Stanford, California, United States of America
3 Department of Radiology, Stanford University, Stanford, California, United States of America
4 Department of Radiology, Duke University, Durham, North Carolina, United States of America
5 Department of Radiology, University of Colorado, Denver, Colorado, United States of America
Author ORCIDs Rajpurkar, Pranav 0000-0002-8030-3727
Irvin, Jeremy 0000-0002-0395-4403
Langlotz, Curtis P. 0000-0002-8972-8051
Yeom, Kristen W. 0000-0001-9860-3368
Shpanskaya, Katie 0000-0003-2741-4046
Amrhein, Timothy J. 0000-0002-9354-9486
Halabi, Safwan S. 0000-0003-1317-984X
BackLink https://www.ncbi.nlm.nih.gov/pubmed/30457988 (view this record in MEDLINE/PubMed)
crossref_primary_10_1111_1754_9485_13282
crossref_primary_10_1016_j_jneumeth_2021_109098
crossref_primary_10_1115_1_4062808
crossref_primary_10_1259_bjr_20210979
crossref_primary_10_1109_JIOT_2021_3126471
crossref_primary_10_1148_radiol_212631
crossref_primary_10_1148_radiol_2021210902
crossref_primary_10_3390_diagnostics14050500
crossref_primary_10_1016_j_healun_2021_02_016
crossref_primary_10_31083_j_rcm2312402
crossref_primary_10_1007_s10140_021_01954_x
crossref_primary_10_1007_s13139_023_00821_6
crossref_primary_10_1148_ryai_2021200172
crossref_primary_10_1117_1_JMI_8_6_064501
ContentType Journal Article
Copyright COPYRIGHT 2018 Public Library of Science
2018 Rajpurkar et al. This is an open access article distributed under the terms of the Creative Commons Attribution License: http://creativecommons.org/licenses/by/4.0/ (the “License”), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
2018 Rajpurkar et al
DOI 10.1371/journal.pmed.1002686
DatabaseName CrossRef
Medline
MEDLINE
MEDLINE (Ovid)
PubMed
Opposing Viewpoints in Context
Gale In Context: Canada
Gale In Context: Science
ProQuest Central (Corporate)
Neurosciences Abstracts
ProQuest Central Health & Medical Collection (via ProQuest)
ProQuest Central (purchase pre-March 2016)
Medical Database (Alumni Edition)
Hospital Premium Collection
Hospital Premium Collection (Alumni Edition)
ProQuest Central (Alumni) (purchase pre-March 2016)
ProQuest Central (Alumni)
ProQuest Central UK/Ireland
ProQuest Central Essentials
ProQuest Central
ProQuest One Community College
Health Research Premium Collection
Health Research Premium Collection (Alumni)
ProQuest Health & Medical Complete (Alumni)
ProQuest Health & Medical Collection
ProQuest Medical Database
ProQuest Central Premium
ProQuest One Academic (New)
ProQuest Publicly Available Content Database
ProQuest Health & Medical Research Collection
ProQuest One Academic Middle East (New)
ProQuest One Health & Nursing
ProQuest One Academic Eastern Edition
ProQuest One Academic
ProQuest One Academic UKI Edition
ProQuest Central China
MEDLINE - Academic
PubMed Central (Full Participant titles)
DOAJ Directory of Open Access Journals
PLoS Medicine
Discipline Medicine
Computer Science
DocumentTitleAlternate Deep learning for chest radiograph diagnosis
EISSN 1549-1676
ExternalDocumentID 2252256310
oai_doaj_org_article_b1116af6ef4f471a87301326afba4bfd
PMC6245676
A564080858
30457988
10_1371_journal_pmed_1002686
Genre Validation Study
Comparative Study
Research Support, Non-U.S. Gov't
Journal Article
GeographicLocations United States--US
California
GrantInformation_xml – fundername: NIBIB NIH HHS
  grantid: R01 EB000898
ISSN 1549-1676
1549-1277
IsDoiOpenAccess true
IsOpenAccess true
IsPeerReviewed true
IsScholarly true
Issue 11
Language English
License This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Creative Commons Attribution License
Notes These authors share first authorship on, and contributed equally to, this work.
I have read the journal's policy and the authors of this manuscript have the following competing interests: CPL holds shares in whiterabbit.ai and Nines.ai, is on the Advisory Board of Nuance Communications and on the Board of Directors for the Radiological Society of North America, and has other research support from Philips, GE Healthcare, and Philips Healthcare. MPL holds shares in and serves on the Advisory Board for Nines.ai. None of these organizations have a financial interest in the results of this study.
ORCID 0000-0003-2741-4046
0000-0002-0395-4403
0000-0003-1317-984X
0000-0001-9860-3368
0000-0002-8972-8051
0000-0002-9354-9486
0000-0002-8030-3727
OpenAccessLink http://journals.scholarsportal.info/openUrl.xqy?doi=10.1371/journal.pmed.1002686
PMID 30457988
PQID 2252256310
PQPubID 1436338
PublicationCentury 2000
PublicationDate 20181120
PublicationDateYYYYMMDD 2018-11-20
PublicationDecade 2010
PublicationPlace United States
PublicationTitle PLoS medicine
PublicationTitleAlternate PLoS Med
PublicationYear 2018
Publisher Public Library of Science
Public Library of Science (PLoS)
Snippet Chest radiograph interpretation is critical for the detection of thoracic diseases, including tuberculosis and lung cancer, which affect millions of people...
In their study, Pranav Rajpurkar and colleagues test a deep learning algorithm that classifies clinically important abnormalities in chest radiographs.
SourceID plos
doaj
pubmedcentral
proquest
gale
pubmed
crossref
SourceType Open Website
Open Access Repository
Aggregation Database
Index Database
Enrichment Source
StartPage e1002686
SubjectTerms Algorithms
Analysis
Artificial neural networks
Atelectasis
Authorship
Chest
Chest x-rays
Clinical Competence
Comparative analysis
Computer and Information Sciences
Computer science
Data mining
Datasets
Decision making
Deep Learning
Diabetic retinopathy
Diagnosis, Computer-Assisted - methods
Effusion
Emphysema
Hernia
Hiatal hernia
Humans
Learning
Lung cancer
Lung diseases
Lung nodules
Machine learning
Medical errors
Medical imaging
Medical imaging equipment
Medicine and Health Sciences
Methods
Network architectures
Neural networks
Nodules
Patients
People and Places
Physical Sciences
Pleural effusion
Pneumonia
Pneumonia - diagnostic imaging
Practice
Predictive Value of Tests
Radiographic Image Interpretation, Computer-Assisted - methods
Radiography
Radiography, Thoracic - methods
Radiologists
Radiology
Reproducibility of Results
Research and Analysis Methods
Retrospective Studies
Software
Statistical analysis
Supervision
Systematic review
Thorax
Tuberculosis
Visualization
Title Deep learning for chest radiograph diagnosis: A retrospective comparison of the CheXNeXt algorithm to practicing radiologists
URI https://www.ncbi.nlm.nih.gov/pubmed/30457988
https://www.proquest.com/docview/2252256310
https://www.proquest.com/docview/2136550434
https://pubmed.ncbi.nlm.nih.gov/PMC6245676
https://doaj.org/article/b1116af6ef4f471a87301326afba4bfd
http://dx.doi.org/10.1371/journal.pmed.1002686
Volume 15
hasFullText 1
inHoldings 1