Implementing Large Language Models in Health Care: Clinician-Focused Review With Interactive Guideline
| Published in | Journal of Medical Internet Research Vol. 27; no. 8; p. e71916 |
|---|---|
| Main Authors | Li, HongYi; Fu, Jun-Fen; Python, Andre |
| Format | Journal Article |
| Language | English |
| Published | Canada: Journal of Medical Internet Research / JMIR Publications, 11.07.2025 |
| ISSN | 1439-4456 (print); 1438-8871 (electronic) |
| DOI | 10.2196/71916 |
| Abstract | Background: Large language models (LLMs) can generate outputs understandable by humans, such as answers to medical questions and radiology reports. With the rapid development of LLMs, clinicians face a growing challenge in determining the most suitable algorithms to support their work.
Objective: We aimed to provide clinicians and other health care practitioners with systematic guidance in selecting an LLM that is relevant and appropriate to their needs and to facilitate the integration process of LLMs in health care.
Methods: We conducted a literature search of full-text publications in English on clinical applications of LLMs published between January 1, 2022, and March 31, 2025, on PubMed, ScienceDirect, Scopus, and IEEE Xplore. We excluded papers from journals below a set citation threshold, as well as papers that did not focus on LLMs, were not research based, or did not involve clinical applications. We also conducted a literature search on arXiv within the same period and included papers on the clinical applications of innovative multimodal LLMs. This led to a total of 270 studies.
Results: We collected 330 LLMs and recorded their application frequency in clinical tasks and their frequency of best performance in context. On the basis of a 5-stage clinical workflow, we found that stages 2, 3, and 4 are key stages, involving numerous clinical subtasks and LLMs. However, the diversity of LLMs that may perform optimally in each context remains limited. GPT-3.5 and GPT-4 were the most versatile models in the 5-stage clinical workflow, applied to 52% (29/56) and 71% (40/56) of the clinical subtasks, respectively, and they performed best in 29% (16/56) and 54% (30/56) of the clinical subtasks, respectively. General-purpose LLMs may not perform well in specialized areas, as they often require lightweight prompt engineering methods or fine-tuning on specific datasets to improve performance. Most LLMs with multimodal abilities are closed-source models and therefore lack transparency, model customization, and fine-tuning options for specific clinical tasks; they may also pose challenges regarding data protection and privacy, which are common requirements in clinical settings.
Conclusions: In this review, we found that LLMs may help clinicians in a variety of clinical tasks. However, we did not find evidence of generalist clinical LLMs successfully applicable to a wide range of clinical tasks. Therefore, their clinical deployment remains challenging. On the basis of this review, we propose an interactive online guideline for clinicians to select suitable LLMs by clinical task. Written from a clinical perspective and free of unnecessary technical jargon, this guideline may be used as a reference to successfully apply LLMs in clinical settings. |
|---|---|
| Audience | Academic |
| Author | Li, HongYi; Fu, Jun-Fen; Python, Andre |
| AuthorAffiliation | 1 Center for Data Science, Zhejiang University, Hangzhou, China; 2 School of Mathematical Sciences, Zhejiang University, Hangzhou, China; 3 School of Medicine, Children’s Hospital of Zhejiang University, Hangzhou, China; 4 National Clinical Research Center for Child Health, Hangzhou, China; 5 National Regional Center for Children’s Health, Hangzhou, China; 6 School of Medicine, Zhejiang University, Hangzhou, China; 7 Centre for Human Genetics, Nuffield Department of Medicine, University of Oxford, Oxford, United Kingdom |
| Author_xml | 1. Li, HongYi (ORCID 0009-0000-0471-8624); 2. Fu, Jun-Fen (ORCID 0000-0001-6405-1251); 3. Python, Andre (ORCID 0000-0001-8094-7226) |
| BackLink | https://www.ncbi.nlm.nih.gov/pubmed/40644686 (View this record in MEDLINE/PubMed) |
| ContentType | Journal Article |
| Copyright | HongYi Li, Jun-Fen Fu, Andre Python. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 11.07.2025. |
| DOI | 10.2196/71916 |
| DatabaseName | CrossRef; MEDLINE; MEDLINE (Ovid); PubMed; MEDLINE - Academic; PubMed Central (Full Participant titles); Unpaywall for CDI: Periodical Content; Unpaywall; DOAJ Directory of Open Access Journals |
| DeliveryMethod | fulltext_linktorsrc |
| Discipline | Medicine; Library & Information Science |
| EISSN | 1438-8871 |
| ExternalDocumentID | oai_doaj_org_article_bff387370b90488399a54c25e8513759 10.2196/71916 PMC12299950 A847416721 40644686 10_2196_71916 |
| Genre | Journal Article Review |
| GeographicLocations | China |
| ISSN | 1438-8871 1439-4456 |
| IsDoiOpenAccess | true |
| IsOpenAccess | true |
| IsPeerReviewed | true |
| IsScholarly | true |
| Issue | 8 |
| Keywords | clinical; digital health; LLM; large language model; review; AI; artificial intelligence |
| Language | English |
| License | HongYi Li, Jun-Fen Fu, Andre Python. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 11.07.2025. This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research (ISSN 1438-8871), is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included. cc-by |
| LinkModel | DirectLink |
| ORCID | 0009-0000-0471-8624 0000-0001-8094-7226 0000-0001-6405-1251 |
| OpenAccessLink | https://doaj.org/article/bff387370b90488399a54c25e8513759 |
| PMID | 40644686 |
| PQID | 3229500265 |
| PQPubID | 23479 |
| PublicationCentury | 2000 |
| PublicationDate | 2025-07-11 |
| PublicationDecade | 2020 |
| PublicationPlace | Toronto, Canada |
| PublicationTitle | Journal of Medical Internet Research |
| PublicationTitleAlternate | J Med Internet Res |
| PublicationYear | 2025 |
| Publisher | Journal of Medical Internet Research JMIR Publications |
| SSID | ssj0020491 |
| Score | 2.4691615 |
| SecondaryResourceType | review_article |
| Snippet | Large language models (LLMs) can generate outputs understandable by humans, such as answers to medical questions and radiology reports. With the rapid... Background Large language models (LLMs) can generate outputs understandable by humans, such as answers to medical questions and radiology reports. With the... BackgroundLarge language models (LLMs) can generate outputs understandable by humans, such as answers to medical questions and radiology reports. With the... |
| SourceID | doaj unpaywall pubmedcentral proquest gale pubmed crossref |
| SourceType | Open Website Open Access Repository Aggregation Database Index Database |
| StartPage | e71916 |
| SubjectTerms | Algorithms China Delivery of Health Care Humans Language Large Language Models Medical care Medical research Medicine, Experimental Review Technology application |
| SummonAdditionalLinks | – databaseName: Unpaywall dbid: UNPAY link: http://utb.summon.serialssolutions.com/2.0.0/link/0/eLvHCXMwhV3db9MwED-NThqTJj4Gg8A6jITgKSNfTmLeCqJMiE08UDGeLMd2oKJKJ9po2v567uy0agaIlyiKncQfd_b97LufAV6gBa8zY4tQ11GJACXiYSUiG-aFNjw1KuGKgpNPz_KTSfbxnJ9vwXAVC7Oxf4-6lL8uEE_kt2A752hqD2B7cvZ59M1FDKGiooLE_l6EGVoCO7DXe6831zhK_j8H3o2Z56ZX5O22uVBXl2o225hyxnf96uDCMRWSp8nP43ZZHevrGzyO_6zNPbjTGZts5KXjPmzZZh-GXagCe8m6WCTqG9Yp-T7snHbb7Q-gdtzBzqGo-c4-kdM4Xv0CJ6NT1GYLNm2YD2ViFMr0hnmiURS6cDzX7cIa5rcf2Ncp5nELkMqNsexDSxRb-KOHMBm___LuJOwOZgh1lqfLkGsdJZXOeW1VXEZFQgG72mSW6ONjURqljBapUNqIjLacTZlG-NREOkGdz9MDGDTzxj4GZlWhq7rkvBAGkSLCG65VhrDF8kSYUgdwtOpCeeH5NyTiFmpN6VozgLfUsetEost2D7DdZad9sqrrtCzSIkI5xBELjTLFMyyLRXszLbgI4BWJhSSlxr7XqotNwDISPZYc4RyOliui5QAOezlRGXUv-flKsCQlkQdbY-ftQqZ0bjohXh7AIy9o6zKjUYWovMS6lD0R7FWqn9JMfzgu8Bg_K_DDATxbS-vfG-rJf3M8hd2ETjcm2tD4EAbLX60dosm1rI46xfsNWcMlVw priority: 102 providerName: Unpaywall |
| Title | Implementing Large Language Models in Health Care: Clinician-Focused Review With Interactive Guideline |
| URI | https://www.ncbi.nlm.nih.gov/pubmed/40644686 https://www.proquest.com/docview/3229500265 https://pubmed.ncbi.nlm.nih.gov/PMC12299950 https://doi.org/10.2196/71916 https://doaj.org/article/bff387370b90488399a54c25e8513759 |
| UnpaywallVersion | publishedVersion |
| Volume | 27 |
| hasFullText | 1 |
| inHoldings | 1 |
| isFullTextHit | |
| isPrint | |
| journalDatabaseRights | – providerCode: PRVAFT databaseName: Open Access Digital Library customDbUrl: eissn: 1438-8871 dateEnd: 99991231 omitProxy: true ssIdentifier: ssj0020491 issn: 1439-4456 databaseCode: KQ8 dateStart: 19990101 isFulltext: true titleUrlDefault: http://grweb.coalliance.org/oadl/oadl.html providerName: Colorado Alliance of Research Libraries – providerCode: PRVAON databaseName: DOAJ Directory of Open Access Journals customDbUrl: eissn: 1438-8871 dateEnd: 99991231 omitProxy: true ssIdentifier: ssj0020491 issn: 1439-4456 databaseCode: DOA dateStart: 19990101 isFulltext: true titleUrlDefault: https://www.doaj.org/ providerName: Directory of Open Access Journals – providerCode: PRVEBS databaseName: EBSCOhost Academic Search Ultimate customDbUrl: https://search.ebscohost.com/login.aspx?authtype=ip,shib&custid=s3936755&profile=ehost&defaultdb=asn eissn: 1438-8871 dateEnd: 99991231 omitProxy: true ssIdentifier: ssj0020491 issn: 1439-4456 databaseCode: ABDBF dateStart: 20050101 isFulltext: true titleUrlDefault: https://search.ebscohost.com/direct.asp?db=asn providerName: EBSCOhost – providerCode: PRVBFR databaseName: Free Medical Journals customDbUrl: eissn: 1438-8871 dateEnd: 99991231 omitProxy: true ssIdentifier: ssj0020491 issn: 1439-4456 databaseCode: DIK dateStart: 19990101 isFulltext: true titleUrlDefault: http://www.freemedicaljournals.com providerName: Flying Publisher – providerCode: PRVFQY databaseName: GFMER Free Medical Journals customDbUrl: eissn: 1438-8871 dateEnd: 99991231 omitProxy: true ssIdentifier: ssj0020491 issn: 1439-4456 databaseCode: GX1 dateStart: 19990101 isFulltext: true titleUrlDefault: http://www.gfmer.ch/Medical_journals/Free_medical.php providerName: Geneva Foundation for Medical Education and Research – providerCode: PRVAQN databaseName: PubMed Central customDbUrl: eissn: 1438-8871 dateEnd: 99991231 omitProxy: true ssIdentifier: ssj0020491 issn: 1439-4456 databaseCode: RPM dateStart: 19990101 isFulltext: true titleUrlDefault: https://www.ncbi.nlm.nih.gov/pmc/ providerName: National Library of Medicine – providerCode: PRVPQU databaseName: Health & Medical Collection customDbUrl: eissn: 1438-8871 dateEnd: 99991231 omitProxy: true ssIdentifier: ssj0020491 issn: 1439-4456 databaseCode: 7X7 dateStart: 20010101 isFulltext: true titleUrlDefault: https://search.proquest.com/healthcomplete providerName: ProQuest – providerCode: PRVPQU databaseName: ProQuest Central customDbUrl: http://www.proquest.com/pqcentral?accountid=15518 eissn: 1438-8871 dateEnd: 99991231 omitProxy: true ssIdentifier: ssj0020491 issn: 1439-4456 databaseCode: BENPR dateStart: 20010101 isFulltext: true titleUrlDefault: https://www.proquest.com/central providerName: ProQuest – providerCode: PRVPQU databaseName: ProQuest Library Science Database customDbUrl: eissn: 1438-8871 dateEnd: 99991231 omitProxy: false ssIdentifier: ssj0020491 issn: 1439-4456 databaseCode: M1O dateStart: 20010101 isFulltext: true titleUrlDefault: https://search.proquest.com/libraryscience providerName: ProQuest |
| link | http://utb.summon.serialssolutions.com/2.0.0/link/0/eLvHCXMwrV3db9MwED_BkAYSmmB8hW3FSAieojkfjp29ddPKhGiZEBXlKXJsBypV6UQbIf577py0agCJF17yYEeO4zvn7hff_Q7gFXrwJrVOhqbiCgEKF2GZcxdm0liRWB0LTcnJ40l2NU3fzcRsp9QXxYS19MDtwp2WVZUomUiOQ6CyoT3VIjWxcOgqJFL41D2u8g2Y6qAW-r3RPtynQGdUsVOJqCTrWR5P0P_nZ3jHDv0eI3m3qW_0zx96sdgxQKMHcNB5jmzYzvgh3HL1IZx0eQfsNesSi2ihWbdjD2F_3J2dP4LKEwH76KD6K3tPEeB4bf9WMiqJtlixec3avCRGeUlnrGUNRQ0KR0vTrJxl7VkC-zzHe_zfRO0_mOxtQ3xZ-KDHMB1dfrq4CrsqC6FJs2QdCmN4XJpMVE5HisuYsm-NTR1xwUe5slpbkye5NjZP6fzYqoRjq-UohQTduSewVy9r9wyY09KUlRJC5hZhH2IVYXSKsnAizq0yAQw2EihuWjKNAkEIiajwIgrgnOSy7STua9-AGlF0GlH8SyMCeENSLWiHouiM7hINcI7EdVUM0SCjG4rQN4Dj3p24s0yv--VGLwrqonC02i2bVZFQEXSCryKAp62ebOeMHhJCbIXvonoa1Hupfk89_-aJvSMcNseBA3ixVba_L9Tz_7FQR3AvpmrGRBMaHcPe-nvjTtDFWpcDuC1ncgB3zi8n1x8Hfm_hdRx9wLbp5Hr45Re_Rict |
| linkProvider | Directory of Open Access Journals |
| linkToUnpaywall | http://utb.summon.serialssolutions.com/2.0.0/link/0/eLvHCXMwhV3db9MwED-NThqTJj4Gg8A6jITgKSNfTmLeCqJMiE08UDGeLMd2oKJKJ9po2v567uy0agaIlyiKncQfd_b97LufAV6gBa8zY4tQ11GJACXiYSUiG-aFNjw1KuGKgpNPz_KTSfbxnJ9vwXAVC7Oxf4-6lL8uEE_kt2A752hqD2B7cvZ59M1FDKGiooLE_l6EGVoCO7DXe6831zhK_j8H3o2Z56ZX5O22uVBXl2o225hyxnf96uDCMRWSp8nP43ZZHevrGzyO_6zNPbjTGZts5KXjPmzZZh-GXagCe8m6WCTqG9Yp-T7snHbb7Q-gdtzBzqGo-c4-kdM4Xv0CJ6NT1GYLNm2YD2ViFMr0hnmiURS6cDzX7cIa5rcf2Ncp5nELkMqNsexDSxRb-KOHMBm___LuJOwOZgh1lqfLkGsdJZXOeW1VXEZFQgG72mSW6ONjURqljBapUNqIjLacTZlG-NREOkGdz9MDGDTzxj4GZlWhq7rkvBAGkSLCG65VhrDF8kSYUgdwtOpCeeH5NyTiFmpN6VozgLfUsetEost2D7DdZad9sqrrtCzSIkI5xBELjTLFMyyLRXszLbgI4BWJhSSlxr7XqotNwDISPZYc4RyOliui5QAOezlRGXUv-flKsCQlkQdbY-ftQqZ0bjohXh7AIy9o6zKjUYWovMS6lD0R7FWqn9JMfzgu8Bg_K_DDATxbS-vfG-rJf3M8hd2ETjcm2tD4EAbLX60dosm1rI46xfsNWcMlVw |
| openUrl | ctx_ver=Z39.88-2004&ctx_enc=info%3Aofi%2Fenc%3AUTF-8&rfr_id=info%3Asid%2Fsummon.serialssolutions.com&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.genre=article&rft.atitle=Implementing+Large+Language+Models+in+Health+Care%3A+Clinician-Focused+Review+With+Interactive+Guideline&rft.jtitle=Journal+of+medical+Internet+research&rft.au=HongYi+Li&rft.au=Jun-Fen+Fu&rft.au=Andre+Python&rft.date=2025-07-11&rft.pub=JMIR+Publications&rft.eissn=1438-8871&rft.volume=27&rft.spage=e71916&rft_id=info:doi/10.2196%2F71916&rft.externalDBID=DOA&rft.externalDocID=oai_doaj_org_article_bff387370b90488399a54c25e8513759 |
| thumbnail_l | http://covers-cdn.summon.serialssolutions.com/index.aspx?isbn=/lc.gif&issn=1438-8871&client=summon |
| thumbnail_m | http://covers-cdn.summon.serialssolutions.com/index.aspx?isbn=/mc.gif&issn=1438-8871&client=summon |
| thumbnail_s | http://covers-cdn.summon.serialssolutions.com/index.aspx?isbn=/sc.gif&issn=1438-8871&client=summon |