SpikeNAS-Bench: Benchmarking NAS Algorithms for Spiking Neural Network Architecture

Bibliographic Details
Published in: IEEE Transactions on Artificial Intelligence, Vol. 6, No. 6, pp. 1614–1625
Main Authors: Sun, Gengchen; Liu, Zhengkun; Gan, Lin; Su, Hang; Li, Ting; Zhao, Wenfeng; Sun, Biao
Format: Journal Article
Language: English
Published: IEEE, 01.06.2025
ISSN: 2691-4581
DOI: 10.1109/TAI.2025.3534136


Abstract In recent years, neural architecture search (NAS) has marked significant advancements, yet its efficacy is limited by its dependence on substantial computational resources. To mitigate this, NAS benchmarks have emerged, offering datasets that enumerate all potential network architectures and their performances within a predefined search space. Nonetheless, these benchmarks predominantly focus on convolutional architectures, which are criticized for their limited interpretability and suboptimal hardware efficiency. Recognizing the untapped potential of spiking neural networks (SNNs), often hailed as the third generation of neural networks for their biological realism and computational thrift, this study introduces SpikeNAS-Bench. As a pioneering benchmark for SNNs, SpikeNAS-Bench utilizes a cell-based search space, integrating leaky integrate-and-fire (LIF) neurons with variable thresholds as candidate operations. It encompasses 15,625 candidate architectures, rigorously evaluated on the CIFAR10, CIFAR100, and Tiny-ImageNet datasets. This article delves into the architectural nuances of SpikeNAS-Bench, leveraging various criteria to underscore the benchmark's utility and presenting insights that could steer future NAS algorithm designs. Moreover, we assess the benchmark's consistency through three distinct proxy types: zero-cost-based, early-stop-based, and predictor-based proxies. Additionally, the article benchmarks seven contemporary NAS algorithms to attest to SpikeNAS-Bench's broad applicability. We provide training logs and diagnostic data for all candidate architectures, and will release all code and datasets post-acceptance, aiming to catalyze further exploration and innovation within the SNN domain.
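The abstract's two core technical ingredients can be sketched in plain Python. The search space size of 15,625 equals 5^6, consistent with a NAS-Bench-201-style cell of six edges, each choosing one of five candidate operations; the candidate operations are built around leaky integrate-and-fire (LIF) neurons with variable thresholds. The operation labels, membrane time constant, and reset rule below are illustrative assumptions, not the paper's exact definitions:

```python
from itertools import product

# Hypothetical operation labels -- the paper's exact candidate-op set is not
# reproduced here. Six edges, five choices per edge: 5**6 = 15,625 cells.
OPS = ["op_a", "op_b", "op_c", "op_d", "op_e"]
NUM_EDGES = 6

# Enumerate every architecture as a tuple of per-edge operation indices.
search_space = list(product(range(len(OPS)), repeat=NUM_EDGES))
assert len(search_space) == 5 ** 6 == 15625  # matches the benchmark's size


def lif_step(v, x, threshold, tau=2.0, v_reset=0.0):
    """One discrete-time leaky integrate-and-fire update (textbook form).

    `threshold` plays the role of the per-neuron variable firing threshold;
    `tau` controls the membrane leak. Defaults are illustrative only.
    """
    v = v + (x - v) / tau                   # leaky integration toward input x
    spike = 1.0 if v >= threshold else 0.0  # fire when the threshold is crossed
    v = v_reset if spike else v             # hard reset after firing
    return spike, v
```

Treating each architecture as a 6-tuple of operation indices is what makes an exhaustive, tabular benchmark feasible: every tuple can be trained once and its metrics stored for cheap lookup by NAS algorithms.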
Author_xml – sequence: 1
  givenname: Gengchen
  orcidid: 0009-0009-9209-6460
  surname: Sun
  fullname: Sun, Gengchen
  email: sun_gengchen@163.com
  organization: School of Electrical and Information Engineering, Tianjin University, Tianjin, China
– sequence: 2
  givenname: Zhengkun
  orcidid: 0000-0002-6481-835X
  surname: Liu
  fullname: Liu, Zhengkun
  email: liuzhengkun@tju.edu.cn, wudawei124578@126.com
  organization: School of Electrical and Information Engineering, Tianjin University, Tianjin, China
– sequence: 3
  givenname: Lin
  orcidid: 0009-0002-8390-8251
  surname: Gan
  fullname: Gan, Lin
  email: ganlin@nwpu.edu.cn
  organization: School of Automation, Northwestern Polytechnical University, Xi'an, China
– sequence: 4
  givenname: Hang
  orcidid: 0000-0002-6877-6783
  surname: Su
  fullname: Su, Hang
  email: hang.su@polimi.it
  organization: Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
– sequence: 5
  givenname: Ting
  orcidid: 0000-0001-5145-3024
  surname: Li
  fullname: Li, Ting
  email: t.li619@foxmail.com
  organization: Institute of Biomedical Engineering, Chinese Academy of Medical Sciences & Peking Union Medical College, Tianjin, China
– sequence: 6
  givenname: Wenfeng
  orcidid: 0000-0002-2933-750X
  surname: Zhao
  fullname: Zhao, Wenfeng
  email: wzhao@binghamton.edu
  organization: Department of Electrical and Computer Engineering, Binghamton University, State University of New York, Binghamton, NY, USA
– sequence: 7
  givenname: Biao
  orcidid: 0000-0002-4124-9350
  surname: Sun
  fullname: Sun, Biao
  email: sunbiao@tju.edu.cn
  organization: School of Electrical and Information Engineering, Tianjin University, Tianjin, China
CODEN ITAICB
Discipline Computer Science
EISSN 2691-4581
EndPage 1625
Genre orig-research
GrantInformation_xml – fundername: National Natural Science Foundation of China
  grantid: T2322020; 62371329
  funderid: 10.13039/501100001809
IsPeerReviewed true
IsScholarly true
Issue 6
Language English
License https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
https://doi.org/10.15223/policy-029
https://doi.org/10.15223/policy-037
PageCount 12
PublicationCentury 2000
PublicationDate 2025-June
PublicationDateYYYYMMDD 2025-06-01
PublicationDate_xml – month: 06
  year: 2025
  text: 2025-June
PublicationDecade 2020
PublicationTitle IEEE transactions on artificial intelligence
PublicationTitleAbbrev TAI
PublicationYear 2025
Publisher IEEE
Publisher_xml – name: IEEE
StartPage 1614
SubjectTerms Benchmark
Benchmark testing
Biological information theory
Computer architecture
Convolution
Membrane potentials
Microprocessors
neural architecture search (NAS)
neuron threshold
Neurons
spiking neural networks (SNNs)
Spiking neural networks
Sun
Training
Title SpikeNAS-Bench: Benchmarking NAS Algorithms for Spiking Neural Network Architecture
URI https://ieeexplore.ieee.org/document/10855683
Volume 6