A fast saddle-point dynamical system approach to robust deep learning
| Published in | Neural networks Vol. 139; pp. 33 - 44 |
|---|---|
| Main Authors | Esfandiari, Yasaman; Balu, Aditya; Ebrahimi, Keivan; Vaidya, Umesh; Elia, Nicola; Sarkar, Soumik |
| Format | Journal Article |
| Language | English |
| Published | United States: Elsevier Ltd, 01.07.2021 |
| Subjects | Adversarial training; Robust deep learning; Robust optimization |
| ISSN | 0893-6080, 1879-2782 |
| DOI | 10.1016/j.neunet.2021.02.021 |
| Abstract | Recent focus on robustness to adversarial attacks for deep neural networks has produced a large variety of algorithms for training robust models. Most of the effective algorithms involve solving a min–max optimization problem: training a robust model (min step) under worst-case attacks (max step). However, they often suffer from high computational cost, since every outer minimization iteration runs several inner maximization iterations to find an optimal attack. This makes such algorithms difficult to apply to moderate- and large-scale real-world data sets. To alleviate this, we explore the effectiveness of iterative descent–ascent algorithms, in which the maximization and minimization steps are executed in an alternating fashion to simultaneously obtain the worst-case attack and the corresponding robust model. Specifically, we propose a novel discrete-time dynamical-system-based algorithm that aims to find the saddle point of a min–max optimization problem in the presence of uncertainties. Under the assumptions that the cost function is convex and that uncertainties enter concavely in the robust learning problem, we analytically show that our algorithm converges asymptotically to the robust optimal solution under general adversarial budget constraints induced by the ℓp norm, for 1≤p≤∞. Based on this analysis, we devise a fast robust training algorithm for deep neural networks. Although such training involves highly non-convex robust optimization problems, empirical results show that the algorithm achieves significant robustness compared to other state-of-the-art robust models on benchmark data sets. |
|---|---|
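The alternating descent–ascent scheme summarized in the abstract can be illustrated on a toy convex–concave saddle-point problem. The objective f(x, u) = x² − u² + 2xu below is a hypothetical stand-in, not the paper's loss: it is convex in the model variable x, concave in the perturbation u, and has its unique saddle point at (0, 0). The clipping step plays the role of projection onto the ℓ∞ adversarial budget |u| ≤ ε (the p = ∞ case of the budget constraint).

```python
# Toy convex-concave saddle problem (hypothetical illustration):
#   f(x, u) = x^2 - u^2 + 2*x*u
# convex in x, concave in u, unique saddle point at (0, 0).
def descent_ascent(x, u, eps=0.5, lr=0.1, steps=500):
    for _ in range(steps):
        x = x - lr * (2 * x + 2 * u)   # one descent step on the model variable
        u = u + lr * (-2 * u + 2 * x)  # one ascent step on the perturbation
        u = max(-eps, min(eps, u))     # project onto the l_inf budget |u| <= eps
    return x, u

x, u = descent_ascent(1.0, 0.3)
```

Rather than solving the inner maximization to optimality at each outer step, a single ascent step on u is interleaved with each descent step on x; on this toy problem the iterates spiral into the saddle point at the origin.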
| Author | Esfandiari, Yasaman; Balu, Aditya; Ebrahimi, Keivan; Vaidya, Umesh; Elia, Nicola; Sarkar, Soumik |
| Author_xml | 1. Esfandiari, Yasaman (ORCID 0000-0002-8178-5385), Iowa State University, United States of America; 2. Balu, Aditya, Iowa State University, United States of America; 3. Ebrahimi, Keivan, Iowa State University, United States of America; 4. Vaidya, Umesh, Clemson University, United States of America; 5. Elia, Nicola, University of Minnesota, United States of America; 6. Sarkar, Soumik (ORCID 0000-0002-6775-9199; soumiks@iastate.edu), Iowa State University, United States of America |
| ContentType | Journal Article |
| Copyright | 2021 Elsevier Ltd Copyright © 2021 Elsevier Ltd. All rights reserved. |
| Copyright_xml | – notice: 2021 Elsevier Ltd – notice: Copyright © 2021 Elsevier Ltd. All rights reserved. |
| Discipline | Computer Science |
| EISSN | 1879-2782 |
| EndPage | 44 |
| Genre | Journal Article |
| GrantInformation_xml | – fundername: NSF, USA grantid: CNS#1845969 |
| IsPeerReviewed | true |
| IsScholarly | true |
| Keywords | Adversarial training Robust deep learning Robust optimization |
| License | Copyright © 2021 Elsevier Ltd. All rights reserved. |
| ORCID | 0000-0002-8178-5385 0000-0002-6775-9199 |
| PMID | 33677377 |
| PageCount | 12 |
| PublicationDate | July 2021 |
| PublicationPlace | United States |
| PublicationTitle | Neural networks |
| PublicationTitleAlternate | Neural Netw |
| PublicationYear | 2021 |
| Publisher | Elsevier Ltd |
| Snippet | Recent focus on robustness to adversarial attacks for deep neural networks produced a large variety of algorithms for training robust models. Most of the... |
| StartPage | 33 |
| SubjectTerms | Adversarial training; Robust deep learning; Robust optimization |
| Title | A fast saddle-point dynamical system approach to robust deep learning |
| URI | https://dx.doi.org/10.1016/j.neunet.2021.02.021 https://www.ncbi.nlm.nih.gov/pubmed/33677377 https://www.proquest.com/docview/2498991776 |
| Volume | 139 |
| Citation | Esfandiari, Y., Balu, A., Ebrahimi, K., Vaidya, U., et al. (2021). A fast saddle-point dynamical system approach to robust deep learning. Neural Networks, 139, 33–44. https://doi.org/10.1016/j.neunet.2021.02.021 |
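The abstract describes an alternating descent–ascent scheme: instead of fully solving the inner maximization (the attack) before each outer minimization step (the model update), the two steps are interleaved. A minimal sketch of that idea on a toy convex–concave objective is shown below; the objective f(x, d) = 0.5·x² + x·d − 0.5·d², the step size, and the ℓ∞ projection budget are illustrative assumptions, not the paper's algorithm.

```python
# Toy alternating descent-ascent for a convex-concave saddle-point problem,
# in the spirit of the abstract's min-max formulation. NOT the paper's
# algorithm: the objective, step size, and budget here are assumed for
# illustration only.

def descent_ascent(x0=2.0, d0=0.5, lr=0.1, eps=1.0, steps=500):
    """Alternate one descent step on x and one ascent step on d,
    projecting d onto the l-infinity budget {|d| <= eps}."""
    x, d = x0, d0
    for _ in range(steps):
        # min step: gradient descent on x; df/dx = x + d
        x -= lr * (x + d)
        # max step: gradient ascent on d; df/dd = x - d
        d += lr * (x - d)
        # projection onto the adversarial budget set
        d = max(-eps, min(eps, d))
    return x, d

x_star, d_star = descent_ascent()
# For this strongly convex-concave objective the iterates converge
# to the saddle point (0, 0).
```

Because each inner maximization is truncated to a single ascent step, the per-iteration cost stays close to standard training, which is the efficiency argument the abstract makes against multi-step inner-loop attacks.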