Integrating Eye Tracking With Grouped Fusion Networks for Semantic Segmentation on Mammogram Images
| Published in | IEEE Transactions on Medical Imaging, Vol. 44, No. 2, pp. 868-879 |
|---|---|
| Main Authors | Xie, Jiaming; Zhang, Qing; Cui, Zhiming; Ma, Chong; Zhou, Yan; Wang, Wenping; Shen, Dinggang |
| Format | Journal Article |
| Language | English |
| Published | United States: IEEE, 01.02.2025 |
| Subjects | semantic segmentation; eye tracking; mammogram image; medical image segmentation |
| ISSN | 0278-0062 (print); 1558-254X (electronic) |
| DOI | 10.1109/TMI.2024.3468404 |
| Abstract | Medical image segmentation has seen great progress in recent years, largely due to the development of deep neural networks. However, unlike in computer vision, high-quality clinical data is relatively scarce, and the annotation process is often a burden for clinicians. As a result, the scarcity of medical data limits the performance of existing medical image segmentation models. In this paper, we propose a novel framework that integrates eye tracking information from experienced radiologists during the screening process to improve the performance of deep neural networks with limited data. Our approach, a grouped hierarchical network, guides the network to learn from its faults by using gaze information as weak supervision. We demonstrate the effectiveness of our framework on mammogram images, particularly for handling segmentation classes with large scale differences. We evaluate the impact of gaze information on medical image segmentation tasks and show that our method achieves better segmentation performance compared to state-of-the-art models. A robustness study is conducted to investigate the influence of distraction or inaccuracies in gaze collection. We also develop a convenient system for collecting gaze data without interrupting the normal clinical workflow. Our work offers novel insights into the potential benefits of integrating gaze information into medical image segmentation tasks. |
|---|---|
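The abstract describes using radiologists' gaze as weak supervision for segmentation but does not state the loss formulation. A minimal sketch of one way such supervision could work: upweight the per-pixel loss where the fixation density is high, so mistakes in regions the expert attended to cost more. The function name `gaze_weighted_ce`, the `alpha` parameter, and the [0, 1]-normalized heatmap are illustrative assumptions, not the paper's published method.

```python
import numpy as np

def gaze_weighted_ce(pred_probs, target, gaze_heatmap, alpha=1.0, eps=1e-8):
    """Per-pixel cross-entropy, upweighted where the radiologist's gaze dwelt.

    pred_probs:   (H, W, C) softmax probabilities from the segmentation net
    target:       (H, W) integer class labels
    gaze_heatmap: (H, W) fixation density normalized to [0, 1] (assumed input)
    alpha:        strength of the gaze prior (0 recovers plain cross-entropy)
    """
    # probability assigned to the true class at each pixel
    p_true = np.take_along_axis(pred_probs, target[..., None], axis=-1)[..., 0]
    ce = -np.log(p_true + eps)            # (H, W) per-pixel cross-entropy
    weights = 1.0 + alpha * gaze_heatmap  # attended pixels count more
    return float((weights * ce).sum() / weights.sum())
```

With `alpha = 0` the weights are uniform and the result is plain mean cross-entropy, so a gaze prior of this form can be annealed in or out without changing the rest of the training loop.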
| Author | Xie, Jiaming; Zhang, Qing; Cui, Zhiming; Ma, Chong; Zhou, Yan; Wang, Wenping; Shen, Dinggang |
| Author_xml | 1. Xie, Jiaming (sky7hate@connect.hku.hk, ORCID 0009-0002-5671-5753), School of Biomedical Engineering, ShanghaiTech University, Shanghai, China; 2. Zhang, Qing (rockeyzq8888@126.com), Department of Radiology, Renji Hospital, Shanghai, China; 3. Cui, Zhiming (cuizhm@shanghaitech.edu.cn), School of Biomedical Engineering, ShanghaiTech University, Shanghai, China; 4. Ma, Chong (mc-npu@mail.nwpu.edu.cn, ORCID 0000-0002-5068-8814), School of Automation, Northwestern Polytechnical University, Xi'an, China; 5. Zhou, Yan (yaner1475@163.com), Department of Radiology, Renji Hospital, Shanghai, China; 6. Wang, Wenping (wenping@tamu.edu, ORCID 0000-0002-2284-3952), Department of Computer Science, The University of Hong Kong, Kowloon Tong, Hong Kong; 7. Shen, Dinggang (Dinggang.Shen@gmail.com, ORCID 0000-0002-7934-5698), School of Biomedical Engineering and the State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, Shanghai, China |
| CODEN | ITMID4 |
| ContentType | Journal Article |
| DOI | 10.1109/TMI.2024.3468404 |
| Discipline | Medicine; Engineering |
| EISSN | 1558-254X |
| EndPage | 879 |
| ExternalDocumentID | 39331544 (PMID); 10.1109/TMI.2024.3468404 (DOI); 10697394 (IEEE) |
| Genre | orig-research Research Support, Non-U.S. Gov't Journal Article |
| GrantInformation_xml | Key R&D Program of Guangdong Province, China (2023B0303040001, 2021B0101420006); Shanghai Municipal Central Guided Local Science and Technology Development Fund (YDZX20233100001001); National Natural Science Foundation of China (U23A20295, 62131015, 82394432); China Ministry of Science and Technology STI 2030-Major Projects (2022ZD0209000, 2022ZD0213100) |
| Issue | 2 |
| License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html https://doi.org/10.15223/policy-029 https://doi.org/10.15223/policy-037 |
| ORCID | 0000-0002-7934-5698 0000-0002-5068-8814 0009-0002-5671-5753 0000-0002-2284-3952 |
| PMID | 39331544 |
| PageCount | 12 |
| PublicationCentury | 2000 |
| PublicationDate | 2025-02-01 |
| PublicationPlace | United States |
| PublicationTitle | IEEE transactions on medical imaging |
| PublicationTitleAbbrev | TMI |
| PublicationYear | 2025 |
| Publisher | IEEE |
| StartPage | 868 |
| SubjectTerms | Algorithms; Annotations; Biomedical imaging; Breast - diagnostic imaging; Breast Neoplasms - diagnostic imaging; Computer aided diagnosis; Computer vision; Deep Learning; Diseases; eye tracking; Eye-Tracking Technology; Female; Gaze tracking; Humans; Image Processing, Computer-Assisted - methods; mammogram image; Mammography - methods; Medical diagnostic imaging; medical image segmentation; Neural Networks, Computer; Semantic segmentation; Semantics; Solid modeling; Transformers; Visualization |
| Title | Integrating Eye Tracking With Grouped Fusion Networks for Semantic Segmentation on Mammogram Images |
| URI | https://ieeexplore.ieee.org/document/10697394 https://www.ncbi.nlm.nih.gov/pubmed/39331544 https://www.proquest.com/docview/3110728509 |
| Volume | 44 |