Improving Radiology Report Generation Quality and Diversity through Reinforcement Learning and Text Augmentation

Bibliographic Details
Published in Bioengineering (Basel) Vol. 11; no. 4; p. 351
Main Authors Parres, Daniel; Albiol, Alberto; Paredes, Roberto
Format Journal Article
Language English
Published Switzerland MDPI AG 01.04.2024
ISSN 2306-5354
DOI 10.3390/bioengineering11040351


Abstract Deep learning is revolutionizing radiology report generation (RRG) with the adoption of vision encoder–decoder (VED) frameworks, which transform radiographs into detailed medical reports. Traditional methods, however, often generate reports of limited diversity and struggle with generalization. Our research introduces reinforcement learning and text augmentation to tackle these issues, significantly improving report quality and variability. By employing RadGraph as a reward metric and innovating in text augmentation, we surpass existing benchmarks like BLEU4, ROUGE-L, F1CheXbert, and RadGraph, setting new standards for report accuracy and diversity on MIMIC-CXR and Open-i datasets. Our VED model achieves F1-scores of 66.2 for CheXbert and 37.8 for RadGraph on the MIMIC-CXR dataset, and 54.7 and 45.6, respectively, on Open-i. These outcomes represent a significant breakthrough in the RRG field. The findings and implementation of the proposed approach, aimed at enhancing diagnostic precision and radiological interpretations in clinical settings, are publicly available on GitHub to encourage further advancements in the field.
Audience Academic
Author Parres, Daniel (ORCID 0000-0002-2078-0329)
Albiol, Alberto (ORCID 0000-0002-1970-3289)
Paredes, Roberto (ORCID 0000-0002-5192-0021)
ContentType Journal Article
Copyright COPYRIGHT 2024 MDPI AG
2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
DatabaseName CrossRef
PubMed
ProQuest SciTech Collection
ProQuest Technology Collection
ProQuest Natural Science Journals
Materials Science & Engineering Collection
ProQuest Central (Alumni)
ProQuest Central UK/Ireland
ProQuest Central Essentials
Biological Science Database (Proquest)
ProQuest Central
Technology Collection
Natural Science Collection
ProQuest One
ProQuest Central
ProQuest Central Student
SciTech Premium Collection
ProQuest Engineering Collection
Biological Sciences
Biological Science Database
Engineering Database (ProQuest)
ProQuest Central Premium
ProQuest One Academic (New)
ProQuest Publicly Available Content
ProQuest One Academic Middle East (New)
ProQuest One Academic Eastern Edition
ProQuest One Applied & Life Sciences
ProQuest One Academic
ProQuest One Academic UKI Edition
ProQuest Central China
Engineering Collection
MEDLINE - Academic
DOAJ (Directory of Open Access Journals)
Discipline Engineering
Architecture
EISSN 2306-5354
Genre Journal Article
GeographicLocations Spain
GrantInformation_xml – fundername: Generalitat Valenciana
  grantid: Predoctoral grant CIACIF/2022/289
IsDoiOpenAccess true
IsOpenAccess true
IsPeerReviewed true
IsScholarly true
Issue 4
Keywords text augmentation
chest X-rays
deep learning
radiology report generation
medical image
machine learning
vision transformer
text generation
reinforcement learning
Language English
License https://creativecommons.org/licenses/by/4.0
PMID 38671773
PublicationDate 2024-04-01
PublicationPlace Basel, Switzerland
PublicationTitle Bioengineering (Basel)
PublicationYear 2024
Publisher MDPI AG
StartPage 351
SubjectTerms Architecture
Automation
Computational linguistics
Datasets
Deep learning
Knowledge representation
Language processing
machine learning
Natural language interfaces
Natural language processing
Neural networks
Radiology
radiology report generation
Radiology, Medical
Reinforcement
reinforcement learning
Reinforcement learning (Machine learning)
Semantics
Technology application
text augmentation
vision transformer
X-rays