The Clinicians’ Guide to Large Language Models: A General Perspective With a Focus on Hallucinations

Bibliographic Details
Published in Interactive Journal of Medical Research, Vol. 14, p. e59823
Main Authors Roustan, Dimitri; Bastardot, François
Format Journal Article
Language English
Published Canada: JMIR Publications, 28.01.2025
ISSN 1929-073X
DOI 10.2196/59823

Abstract Large language models (LLMs) are artificial intelligence tools that have the prospect of profoundly changing how we practice all aspects of medicine. Considering the incredible potential of LLMs in medicine and the interest of many health care stakeholders for implementation into routine practice, it is therefore essential that clinicians be aware of the basic risks associated with the use of these models. Namely, a significant risk associated with the use of LLMs is their potential to create hallucinations. Hallucinations (false information) generated by LLMs arise from a multitude of causes, including both factors related to the training dataset as well as their auto-regressive nature. The implications for clinical practice range from the generation of inaccurate diagnostic and therapeutic information to the reinforcement of flawed diagnostic reasoning pathways, as well as a lack of reliability if not used properly. To reduce this risk, we developed a general technical framework for approaching LLMs in general clinical practice, as well as for implementation on a larger institutional scale.
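The abstract attributes hallucinations partly to the auto-regressive nature of LLMs: text is generated one token at a time, each step conditioned only on previously generated tokens rather than on any external source of truth. A minimal sketch (a hypothetical bigram table standing in for an LLM's learned next-token distribution; not the authors' framework) illustrates how such a process yields fluent but ungrounded output:

```python
import random

# Hypothetical next-word table standing in for a learned
# next-token distribution over clinical-sounding text.
BIGRAMS = {
    "the":      ["patient", "dose"],
    "patient":  ["received", "reports"],
    "received": ["the", "500mg"],
    "dose":     ["is", "was"],
    "is":       ["500mg", "unknown"],
    "was":      ["adjusted"],
    "reports":  ["pain"],
}

def generate(start, n_tokens, seed=0):
    """Auto-regressive sampling: each step conditions only on the
    previous token, never on any external fact or record."""
    random.seed(seed)
    out = [start]
    for _ in range(n_tokens):
        candidates = BIGRAMS.get(out[-1])
        if not candidates:
            break  # no continuation learned for this token
        out.append(random.choice(candidates))
    return " ".join(out)

print(generate("the", 5))
```

The output reads grammatically, but any dosage it emits (e.g. "500mg") is sampled because it is statistically plausible, not because it was retrieved or verified, which is the mechanism behind a hallucinated, plausible-sounding clinical detail.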
Author Roustan, Dimitri
Bastardot, François
AuthorAffiliation 2 Medical Directorate Lausanne University Hospital Lausanne Switzerland
1 Emergency Medicine Department Cliniques Universitaires Saint-Luc Brussels Belgium
Author_xml – sequence: 1
  givenname: Dimitri
  orcidid: 0009-0008-2650-4035
  surname: Roustan
  fullname: Roustan, Dimitri
– sequence: 2
  givenname: François
  orcidid: 0000-0003-4060-0353
  surname: Bastardot
  fullname: Bastardot, François
Cites_doi 10.1038/s41586-024-07421-0
10.48550/ARXIV.2305.18153
10.1148/radiol.230582
10.48550/arXiv.1706.03762
10.1056/NEJMsr2214184
10.1016/S2589-7500(23)00083-3
10.1093/nsr/nwae403
10.1080/08820538.2023.2209166
10.1001/jamanetworkopen.2023.25000
10.48550/arXiv.2005.11401
10.1016/j.psychres.2023.115334
10.18653/v1/2024.emnlp-main.418
10.1001/jamainternmed.2023.1838
10.18653/v1/2022.naacl-main.387
10.48550/ARXIV.2306.06085
10.1016/S2589-7500(23)00048-1
10.1016/S2589-7500(23)00021-3
10.1145/3571730
10.1371/journal.pdig.0000198
10.1016/j.wneu.2023.08.088
10.1016/j.amjoto.2023.103980
10.48550/arXiv.2310.00754
10.18653/v1/2023.findings-emnlp.68
ContentType Journal Article
Copyright Dimitri Roustan, François Bastardot. Originally published in the Interactive Journal of Medical Research (https://www.i-jmr.org/), 28.01.2025.
DOI 10.2196/59823
DatabaseName CrossRef
PubMed
MEDLINE - Academic
PubMed Central (Full Participant titles)
DOAJ Directory of Open Access Journals
Discipline Medicine
EISSN 1929-073X
ExternalDocumentID oai_doaj_org_article_4f912dd3aa204c96aa172b73e1b010ae
PMC11815294
39874574
10_2196_59823
Genre Journal Article
ISSN 1929-073X
IsDoiOpenAccess true
IsOpenAccess true
IsPeerReviewed true
IsScholarly true
Keywords clinical informatics
decision
electronic data system
AI
decision support
LLM
large language model
false information
artificial intelligence
medical informatics
decision support techniques
artificial intelligence tool
decision-making
computer assisted
technical framework
hallucinations
Language English
License Dimitri Roustan, François Bastardot. Originally published in the Interactive Journal of Medical Research (https://www.i-jmr.org/), 28.01.2025.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Interactive Journal of Medical Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.i-jmr.org/, as well as this copyright and license information must be included.
ORCID 0000-0003-4060-0353
0009-0008-2650-4035
OpenAccessLink http://journals.scholarsportal.info/openUrl.xqy?doi=10.2196/59823
PMID 39874574
PQID 3160935929
PQPubID 23479
PublicationDate 2025-01-28
PublicationPlace Toronto, Canada
PublicationTitle Interactive journal of medical research
PublicationTitleAlternate Interact J Med Res
PublicationYear 2025
Publisher JMIR Publications
StartPage e59823
SubjectTerms Viewpoint
Title The Clinicians’ Guide to Large Language Models: A General Perspective With a Focus on Hallucinations
URI https://www.ncbi.nlm.nih.gov/pubmed/39874574
https://www.proquest.com/docview/3160935929
https://pubmed.ncbi.nlm.nih.gov/PMC11815294
https://doaj.org/article/4f912dd3aa204c96aa172b73e1b010ae
Volume 14