Non-Singular Adversarial Robustness of Neural Networks

Bibliographic Details
Main Authors: Tsai, Yu-Lin; Hsu, Chia-Yi; Yu, Chia-Mu; Chen, Pin-Yu
Format: Journal Article
Language: English
Published: 23.02.2021
Subjects: Computer Science - Learning
Online Access: Get full text
DOI: 10.48550/arxiv.2102.11935


Abstract: ICASSP 2021. Adversarial robustness has become an emerging challenge for neural networks owing to their over-sensitivity to small input perturbations. While this issue is critical, we argue that solving it alone fails to provide a comprehensive robustness assessment. Even worse, the conclusions drawn from singular robustness may give a false sense of overall model robustness. Specifically, our findings show that adversarially trained models that are robust to input perturbations are still (or even more) vulnerable to weight perturbations when compared to standard models. In this paper, we formalize the notion of non-singular adversarial robustness for neural networks through the lens of joint perturbations to data inputs as well as model weights. To the best of our knowledge, this study is the first work considering simultaneous input-weight adversarial perturbations. Based on a multi-layer feed-forward neural network model with ReLU activation functions and standard classification loss, we establish an error analysis for quantifying the loss sensitivity subject to $\ell_\infty$-norm bounded perturbations on data inputs and model weights. Based on the error analysis, we propose novel regularization functions for robust training and demonstrate improved non-singular robustness against joint input-weight adversarial perturbations.
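The threat model the abstract describes can be illustrated with a minimal sketch: a single FGSM-style ascent step applied jointly to the inputs and the weights, each bounded in $\ell_\infty$-norm. This is an illustrative toy (a one-neuron model with squared loss and hand-derived gradients), not the paper's algorithm; the names `eps_x` and `eps_w` and the step rule are assumptions for demonstration only.

```python
# Toy sketch of a JOINT input-weight l_inf-bounded perturbation
# (the "non-singular" threat model): one sign-gradient ascent step
# on both x and w. Model: a single linear neuron with squared loss.

def loss(w, x, y):
    # L(w, x) = (w . x - y)^2
    pred = sum(wi * xi for wi, xi in zip(w, x))
    return (pred - y) ** 2

def grads(w, x, y):
    # Analytic gradients: dL/dw_i = 2(pred - y) * x_i,
    #                     dL/dx_i = 2(pred - y) * w_i
    pred = sum(wi * xi for wi, xi in zip(w, x))
    g = 2.0 * (pred - y)
    return [g * xi for xi in x], [g * wi for wi in w]

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def joint_perturb(w, x, y, eps_x=0.05, eps_w=0.05):
    # One ascent step on BOTH x and w; each perturbation has
    # l_inf norm exactly eps_x / eps_w (a single FGSM-style step
    # stays on the boundary of the l_inf ball).
    gw, gx = grads(w, x, y)
    x_adv = [xi + eps_x * sign(gi) for xi, gi in zip(x, gx)]
    w_adv = [wi + eps_w * sign(gi) for wi, gi in zip(w, gw)]
    return w_adv, x_adv

w, x, y = [0.5, -0.3], [1.0, 2.0], 1.0
clean = loss(w, x, y)
w_adv, x_adv = joint_perturb(w, x, y)
attacked = loss(w_adv, x_adv, y)
```

For a small enough step, ascending the sign of the gradient on both variables simultaneously increases the loss more than perturbing either one alone, which is the intuition behind assessing robustness jointly rather than singularly.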
Copyright: http://arxiv.org/licenses/nonexclusive-distrib/1.0
OpenAccessLink: https://arxiv.org/abs/2102.11935
SecondaryResourceType: preprint
SubjectTerms: Computer Science - Learning
Title: Non-Singular Adversarial Robustness of Neural Networks
URI: https://arxiv.org/abs/2102.11935
LinkProvider: Cornell University