Defence algorithm against adversarial example based on local perturbation DAT-LP

Bibliographic Details
Published in: Nondestructive Testing and Evaluation, Vol. 39, No. 1, pp. 204-220
Main Authors: Tang, Jun; Huang, Yuchen; Mou, Zhi; Wang, Shiyu; Zhang, Yuanyuan; Guo, Bing
Format: Journal Article
Language: English
Published: Abingdon: Taylor & Francis Ltd, 02.01.2024
ISSN: 1058-9759, 1477-2671
DOI: 10.1080/10589759.2023.2249581

Summary: With further research into neural networks, their scope of application has become increasingly extensive. In particular, many neural network models are used in text classification tasks and have achieved excellent results. However, the crucial issue of adversarial examples has dramatically affected the stability and robustness of neural network models, confining their wider adoption, especially in security-sensitive tasks. For the text classification task, the proposed DAT-LP (Defence with Adversarial Training Based on Local Perturbation) algorithm addresses the adversarial example issue: it uses local perturbation to enhance model performance through adversarial training. Furthermore, the SW-CStart (Cold-Start Algorithm Based on Sliding Window) algorithm is designed to realise adversarial training during the model's initialisation stage. DAT-LP is evaluated against three baselines: the baseline models (BiLSTM, TextCNN), Dropout (a regularisation method), and ADT (an adversarial training method). The results show that DAT-LP performs best and demonstrates the strongest generalisation ability.
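To make the idea of "local perturbation" in adversarial training concrete, the sketch below shows one plausible reading under stated assumptions: a gradient-based (FGSM-style) perturbation that is restricted to the few input features with the largest loss gradient, rather than perturbing the whole input. This is an illustration only, not the authors' actual DAT-LP algorithm; the model (a logistic classifier), the budget `eps`, and the locality parameter `k` are all assumptions made for the example.

```python
import numpy as np

def fgsm_local_perturbation(x, y, w, b, eps=0.1, k=2):
    """Perturb only the k features of x with the largest loss gradient.

    Assumed model: binary logistic classifier p = sigmoid(w.x + b),
    binary cross-entropy loss. This is a hypothetical stand-in for the
    paper's text-classification model, not the published method.
    """
    z = x @ w + b
    p = 1.0 / (1.0 + np.exp(-z))
    # Gradient of the cross-entropy loss w.r.t. the input x.
    grad_x = (p - y) * w
    # "Local" restriction: keep only the k largest-magnitude components.
    idx = np.argsort(np.abs(grad_x))[-k:]
    delta = np.zeros_like(x)
    delta[idx] = eps * np.sign(grad_x[idx])  # FGSM-style sign step
    return x + delta

# Toy example: only 2 of the 4 features end up perturbed.
x = np.array([1.0, -2.0, 0.5, 3.0])
w = np.array([0.5, -1.0, 2.0, 0.1])
x_adv = fgsm_local_perturbation(x, y=1.0, w=w, b=0.0, eps=0.1, k=2)
```

In adversarial training, `x_adv` would then be fed back into the same loss alongside the clean `x`, so the model learns to classify both correctly; the locality constraint keeps the perturbation sparse, which loosely mirrors the word- or character-level edits typical of text adversarial examples.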