MGRW-Transformer: Multigranularity Random Walk Transformer Model for Interpretable Learning

Bibliographic Details
Published in IEEE Transactions on Neural Networks and Learning Systems, Vol. 36, No. 1, pp. 1104-1118
Main Authors Ding, Weiping; Geng, Yu; Huang, Jiashuang; Ju, Hengrong; Wang, Haipeng; Lin, Chin-Teng
Format Journal Article
Language English
Published United States, IEEE, 01.01.2025
Subjects
Online Access Get full text
ISSN 2162-237X
EISSN 2162-2388
DOI 10.1109/TNNLS.2023.3326283


Abstract Deep-learning models have been widely used in image recognition tasks due to their strong feature-learning ability. However, most of the current deep-learning models are "black box" systems that lack a semantic explanation of how they reached their conclusions. This makes it difficult to apply these methods to complex medical image recognition tasks. The vision transformer (ViT) model is the most commonly used deep-learning model with a self-attention mechanism that shows the region of influence as compared to traditional convolutional networks. Thus, ViT offers greater interpretability. However, medical images often contain lesions of variable size in different locations, which makes it difficult for a deep-learning model with a self-attention module to reach correct and explainable conclusions. We propose a multigranularity random walk transformer (MGRW-Transformer) model guided by an attention mechanism to find the regions that influence the recognition task. Our method divides the image into multiple subimage blocks and transfers them to the ViT module for classification. Simultaneously, the attention matrix output from the multiattention layer is fused with the multigranularity random walk module. Within the multigranularity random walk module, the segmented image blocks are used as nodes to construct an undirected graph using the attention node as a starting node and guiding the coarse-grained random walk. We appropriately divide the coarse blocks into finer ones to manage the computational cost and combine the results based on the importance of the discovered features. The result is that the model offers a semantic interpretation of the input image, a visualization of the interpretation, and insight into how the decision was reached. Experimental results show that our method improves classification performance with medical images while presenting an understandable interpretation for use by medical professionals.
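
The abstract describes an attention-guided random walk over image-patch nodes: the segmented blocks form an undirected graph, the most-attended patch seeds the walk, and coarse blocks are then refined into finer ones. A minimal sketch of that idea follows, in Python with NumPy; the function and variable names, the 16-patch toy grid, the step budget, and the coarse-to-fine note are all illustrative assumptions, not the authors' implementation.

import numpy as np

# Hypothetical sketch of an attention-guided random walk over ViT patch nodes.
# "attention_guided_walk" and its parameters are assumptions for illustration.
def attention_guided_walk(attn, start, steps=100, rng=None):
    """Estimate patch importance by walking a graph whose edge weights
    come from a (symmetrized) attention matrix.

    attn  -- (N, N) attention scores between N image patches
    start -- index of the most-attended patch, used as the seed node
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    n = attn.shape[0]
    w = (attn + attn.T) / 2.0             # symmetrize -> undirected graph
    np.fill_diagonal(w, 0.0)              # no self-loops
    p = w / w.sum(axis=1, keepdims=True)  # rows become transition probabilities

    visits = np.zeros(n)
    node = start
    for _ in range(steps):
        node = rng.choice(n, p=p[node])   # attention-weighted step to a neighbor
        visits[node] += 1
    return visits / steps                 # visit frequency ~ patch importance

# Toy usage on a random 16-patch "attention matrix".
attn = np.abs(np.random.default_rng(1).normal(size=(16, 16)))
seed = int(attn.sum(axis=0).argmax())     # most-attended patch starts the walk
importance = attention_guided_walk(attn, seed)
# Coarse-to-fine step (per the abstract): patches with high visit frequency
# would be re-split into finer blocks and the walk repeated on the sub-grid.
print(importance.round(3))

Symmetrizing the attention matrix is one plausible way to realize the "undirected graph" the abstract mentions; the paper itself may fuse the multihead attention output differently.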
Author Geng, Yu
Ding, Weiping
Huang, Jiashuang
Wang, Haipeng
Ju, Hengrong
Lin, Chin-Teng
Author_xml – sequence: 1
  givenname: Weiping
  orcidid: 0000-0002-3180-7347
  surname: Ding
  fullname: Ding, Weiping
  email: dwp9988@163.com
  organization: School of Computer Science and Technology, Nantong University, Nantong, China
– sequence: 2
  givenname: Yu
  surname: Geng
  fullname: Geng, Yu
  email: tian19981999@163.com
  organization: School of Computer Science and Technology, Nantong University, Nantong, China
– sequence: 3
  givenname: Jiashuang
  orcidid: 0000-0002-6204-9569
  surname: Huang
  fullname: Huang, Jiashuang
  email: hjshdym@163.com
  organization: School of Computer Science and Technology, Nantong University, Nantong, China
– sequence: 4
  givenname: Hengrong
  orcidid: 0000-0001-9894-9844
  surname: Ju
  fullname: Ju, Hengrong
  email: juhengrong@ntu.edu.cn
  organization: School of Computer Science and Technology, Nantong University, Nantong, China
– sequence: 5
  givenname: Haipeng
  surname: Wang
  fullname: Wang, Haipeng
  email: whpjy79@163.com
  organization: School of Computer Science and Technology, Nantong University, Nantong, China
– sequence: 6
  givenname: Chin-Teng
  orcidid: 0000-0001-8371-8197
  surname: Lin
  fullname: Lin, Chin-Teng
  email: chin-teng.lin@uts.edu.au
  organization: Centre for Artificial Intelligence, FEIT, University of Technology Sydney, Ultimo, NSW, Australia
BackLink https://www.ncbi.nlm.nih.gov/pubmed/37938954 (View this record in MEDLINE/PubMed)
CODEN ITNNAL
ContentType Journal Article
DOI 10.1109/TNNLS.2023.3326283
DatabaseName IEEE All-Society Periodicals Package (ASPP) 2005–Present
IEEE All-Society Periodicals Package (ASPP) 1998–Present
IEEE Electronic Library (IEL)
CrossRef
PubMed
MEDLINE - Academic
Discipline Computer Science
EISSN 2162-2388
EndPage 1118
ExternalDocumentID 37938954
10_1109_TNNLS_2023_3326283
10313013
Genre orig-research
Journal Article
GrantInformation_xml – fundername: Natural Science Foundation of Jiangsu Province
  grantid: BK20231337
  funderid: 10.13039/501100004608
– fundername: National Natural Science Foundation of China
  grantid: 61976120; 62006128; 62102199
  funderid: 10.13039/501100001809
– fundername: China Postdoctoral Science Foundation
  grantid: 2022M711716
  funderid: 10.13039/501100002858
– fundername: Natural Science Key Foundation of Jiangsu Education Department
  grantid: 21KJA510004
ISSN 2162-237X
Issue 1
Language English
License https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
https://doi.org/10.15223/policy-029
https://doi.org/10.15223/policy-037
ORCID 0000-0001-9894-9844
0000-0002-6204-9569
0000-0001-8371-8197
0000-0002-3180-7347
PMID 37938954
PageCount 15
PublicationDate January 2025 (2025-01-01)
PublicationPlace United States
PublicationTitle IEEE Transactions on Neural Networks and Learning Systems
PublicationTitleAbbrev TNNLS
PublicationTitleAlternate IEEE Trans Neural Netw Learn Syst
PublicationYear 2025
Publisher IEEE
StartPage 1104
SubjectTerms Classification algorithms
Computational modeling
Graph random walk
interpretable method
Lesions
Medical diagnostic imaging
multigranularity formal analysis
Prediction algorithms
self-attention mechanism
Task analysis
Transformers
vision transformer (ViT)
Title MGRW-Transformer: Multigranularity Random Walk Transformer Model for Interpretable Learning
URI https://ieeexplore.ieee.org/document/10313013
https://www.ncbi.nlm.nih.gov/pubmed/37938954
https://www.proquest.com/docview/2888032411
Volume 36