Attacks Which Do Not Kill Training Make Adversarial Learning Stronger

Bibliographic Details
Main Authors: Zhang, Jingfeng; Xu, Xilie; Han, Bo; Niu, Gang; Cui, Lizhen; Sugiyama, Masashi; Kankanhalli, Mohan
Format: Journal Article
Language: English
Published: 25.02.2020
Subjects: Computer Science - Learning; Statistics - Machine Learning
Online Access: Get full text
DOI: 10.48550/arxiv.2002.11242


Abstract Adversarial training based on the minimax formulation is necessary for obtaining adversarial robustness of trained models. However, it is conservative or even pessimistic, so it sometimes hurts natural generalization. In this paper, we raise a fundamental question: do we have to trade off natural generalization for adversarial robustness? We argue that adversarial training should employ confident adversarial data for updating the current model. We propose a novel approach of friendly adversarial training (FAT): rather than employing the most adversarial data that maximize the loss, we search for the least adversarial (i.e., friendly adversarial) data that minimize the loss, among the adversarial data that are confidently misclassified. Our formulation is easy to implement by simply stopping the search for the most adversarial data, e.g., PGD (projected gradient descent), early, which we call early-stopped PGD. Theoretically, FAT is justified by an upper bound on the adversarial risk. Empirically, early-stopped PGD allows us to answer the earlier question in the negative: adversarial robustness can indeed be achieved without compromising natural generalization.
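The early stopping described in the abstract can be sketched in a few lines. Below is a minimal, illustrative PyTorch sketch of early-stopped PGD for generating friendly adversarial data; it is not the authors' released code, and the function name `early_stopped_pgd`, the hyperparameter defaults, and the batch-level stopping test are assumptions made for brevity (the paper applies the misclassification criterion per example).

```python
import torch
import torch.nn.functional as F

def early_stopped_pgd(model, x, y, eps=8/255, step_size=2/255, max_steps=10, tau=0):
    """Search for *friendly* adversarial data: run PGD, but stop once the
    iterate has been misclassified for `tau` extra steps, instead of running
    all `max_steps` iterations of the inner maximization.
    (Illustrative sketch; no random start, and the stopping test is applied
    to the whole batch rather than per example.)"""
    x_adv = x.detach().clone()
    budget = tau  # extra steps allowed after the first misclassification
    for _ in range(max_steps):
        x_adv.requires_grad_(True)
        logits = model(x_adv)
        # Early stop: the batch is already misclassified, so searching for an
        # even more adversarial point would only make training more pessimistic.
        if (logits.argmax(dim=1) != y).all():
            if budget == 0:
                break
            budget -= 1
        loss = F.cross_entropy(logits, y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + step_size * grad.sign()                 # gradient ascent step
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)   # project back into the eps-ball
            x_adv = torch.clamp(x_adv, 0.0, 1.0)                    # keep a valid pixel range
    return x_adv.detach()
```

In friendly adversarial training, a routine like this would replace the full-strength inner maximization of standard adversarial training: the model is updated on the output of `early_stopped_pgd(model, x, y)` rather than on data produced by running PGD for the full step budget.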
Copyright http://arxiv.org/licenses/nonexclusive-distrib/1.0
IsDoiOpenAccess true
IsOpenAccess true
IsPeerReviewed false
IsScholarly false
OpenAccessLink https://arxiv.org/abs/2002.11242
SecondaryResourceType preprint
SourceID arxiv
SourceType Open Access Repository
SubjectTerms Computer Science - Learning
Statistics - Machine Learning
URI https://arxiv.org/abs/2002.11242
linkProvider Cornell University