Automated deep learning design for medical image classification by health-care professionals with no coding experience: a feasibility study
| Published in | The Lancet Digital Health, Vol. 1, No. 5, pp. e232-e242 |
|---|---|
| Main Authors | Faes, Livia; Wagner, Siegfried K; Fu, Dun Jack; Liu, Xiaoxuan; Korot, Edward; Ledsam, Joseph R; Back, Trevor; Chopra, Reena; Pontikos, Nikolas; Kern, Christoph; Moraes, Gabriella; Schmid, Martin K; Sim, Dawn; Balaskas, Konstantinos; Bachmann, Lucas M; Denniston, Alastair K; Keane, Pearse A |
| Format | Journal Article |
| Language | English |
| Published | England: Elsevier Ltd, 01.09.2019 |
| Subjects | |
| ISSN | 2589-7500 |
| DOI | 10.1016/S2589-7500(19)30108-6 |
| Abstract | Background: Deep learning has the potential to transform health care; however, substantial expertise is required to train such models. We sought to evaluate the utility of automated deep learning software for developing medical image diagnostic classifiers by health-care professionals with no coding—and no deep learning—expertise.
Methods: We used five publicly available open-source datasets: retinal fundus images (MESSIDOR); optical coherence tomography (OCT) images (Guangzhou Medical University and Shiley Eye Institute, version 3); images of skin lesions (Human Against Machine [HAM] 10000); and both paediatric and adult chest x-ray (CXR) images (Guangzhou Medical University and Shiley Eye Institute, version 3, and the National Institutes of Health [NIH] dataset, respectively). Each dataset was fed separately into a neural architecture search framework, hosted through Google Cloud AutoML, that automatically developed a deep learning architecture to classify common diseases. Sensitivity (recall), specificity, and positive predictive value (precision) were used to evaluate the diagnostic properties of the models, and discriminative performance was assessed with the area under the precision-recall curve (AUPRC). For the deep learning model developed on a subset of the HAM10000 dataset, we did an external validation using the Edinburgh Dermofit Library dataset.
Findings: Diagnostic properties and discriminative performance from internal validations were high in the binary classification tasks (sensitivity 73·3–97·0%; specificity 67–100%; AUPRC 0·87–1·00). In the multiple classification tasks, sensitivity ranged from 38% to 100% and specificity from 67% to 100%. AUPRC ranged from 0·57 to 1·00 across the five automated deep learning models. In the external validation using the Edinburgh Dermofit Library dataset, the automated deep learning model showed an AUPRC of 0·47, with a sensitivity of 49% and a positive predictive value of 52%.
Interpretation: All models, except the automated deep learning model trained on the multilabel classification task of the NIH CXR14 dataset, showed discriminative performance and diagnostic properties comparable to state-of-the-art deep learning algorithms. Performance in the external validation study was low. The quality of the open-access datasets (including insufficient information about patient flow and demographics) and the absence of measures of precision, such as confidence intervals, were the major limitations of this study. The availability of automated deep learning platforms provides an opportunity for the medical community to enhance its understanding of model development and evaluation. Although deriving classification models without a deep understanding of the underlying mathematical, statistical, and programming principles is attractive, performance comparable to expertly designed models is limited to the more elementary classification tasks. Furthermore, care should be taken to adhere to ethical principles when using these automated models, to avoid discrimination and harm. Future studies should compare several application programming interfaces on thoroughly curated datasets.
Funding: National Institute for Health Research and Moorfields Eye Charity. |
|---|---|
| Author | Faes, Livia; Wagner, Siegfried K; Balaskas, Konstantinos; Moraes, Gabriella; Denniston, Alastair K; Keane, Pearse A; Fu, Dun Jack; Korot, Edward; Ledsam, Joseph R; Back, Trevor; Chopra, Reena; Liu, Xiaoxuan; Pontikos, Nikolas; Kern, Christoph; Sim, Dawn; Schmid, Martin K; Bachmann, Lucas M |
| Author_xml | Affiliations (NIHR BRC = National Institute of Health Research Biomedical Research Center, Moorfields Eye Hospital National Health Service Foundation Trust, and University College London Institute of Ophthalmology, London, UK; MRD = Medical Retina Department, Moorfields Eye Hospital National Health Service Foundation Trust, London, UK):
1. Livia Faes, Department of Ophthalmology, Cantonal Hospital Lucerne, Lucerne, Switzerland
2. Siegfried K Wagner, NIHR BRC
3. Dun Jack Fu, MRD
4. Xiaoxuan Liu, NIHR BRC
5. Edward Korot, MRD
6. Joseph R Ledsam, DeepMind, London, UK
7. Trevor Back, DeepMind, London, UK
8. Reena Chopra, NIHR BRC
9. Nikolas Pontikos, NIHR BRC
10. Christoph Kern, MRD
11. Gabriella Moraes, MRD
12. Martin K Schmid, Department of Ophthalmology, Cantonal Hospital Lucerne, Lucerne, Switzerland
13. Dawn Sim, NIHR BRC
14. Konstantinos Balaskas, NIHR BRC
15. Lucas M Bachmann, Medignition, Zurich, Switzerland
16. Alastair K Denniston, NIHR BRC
17. Pearse A Keane (corresponding author, pearse.keane1@nhs.net), NIHR BRC |
| BackLink | https://www.ncbi.nlm.nih.gov/pubmed/33323271 (view this record in MEDLINE/PubMed) |
| ContentType | Journal Article |
| Copyright | 2019 The Author(s). Published by Elsevier Ltd. This is an Open Access article under the CC BY 4.0 license. |
| DOI | 10.1016/S2589-7500(19)30108-6 |
| Discipline | Public Health |
| EISSN | 2589-7500 |
| EndPage | e242 |
| Genre | Research Support, Non-U.S. Gov't Journal Article |
| GrantInformation | National Institute for Health Research and Moorfields Eye Charity. |
| GrantInformation_xml | – fundername: Department of Health grantid: NIHR-CS-2014-14-023 – fundername: Medical Research Council grantid: MC_PC_19005 – fundername: Department of Health grantid: CS-2014-14-023 |
| ISSN | 2589-7500 |
| IsDoiOpenAccess | true |
| IsOpenAccess | true |
| IsPeerReviewed | true |
| IsScholarly | true |
| Issue | 5 |
| Language | English |
| License | This is an open access article under the CC BY license. |
| OpenAccessLink | https://doaj.org/article/bf7d42d7b5cc43a4b5ccbf543a6ce8c8 |
| PMID | 33323271 |
| PublicationDate | 2019-09-01 |
| PublicationPlace | England |
| PublicationTitle | The Lancet. Digital health |
| PublicationTitleAlternate | Lancet Digit Health |
| PublicationYear | 2019 |
| Publisher | Elsevier Ltd Elsevier |
| Snippet | Deep learning has the potential to transform health care; however, substantial expertise is required to train such models. We sought to evaluate the utility of... |
| StartPage | e232 |
| SubjectTerms | Adult Algorithms Data Interpretation, Statistical Deep Learning Feasibility Studies Fundus Oculi Humans Informatics Internal Medicine Public Health Skin Neoplasms - diagnosis Software Tomography, Optical Coherence - statistics & numerical data |
| Title | Automated deep learning design for medical image classification by health-care professionals with no coding experience: a feasibility study |
| UnpaywallVersion | publishedVersion |
| Volume | 1 |