Adversarial Attacks in Modulation Recognition With Convolutional Neural Networks


Bibliographic Details
Published in: IEEE Transactions on Reliability, Vol. 70, No. 1, pp. 389-401
Main Authors: Lin, Yun; Zhao, Haojun; Ma, Xuefei; Tu, Ya; Wang, Meiyu
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.03.2021
ISSN: 0018-9529, 1558-1721
DOI: 10.1109/TR.2020.3032744


Summary: Deep learning (DL) models are vulnerable to adversarial attacks: by adding a subtle perturbation that is imperceptible to the human eye, an attacker can lead a convolutional neural network (CNN) to erroneous results, greatly reducing the reliability and security of DL tasks. Considering the wide application of modulation recognition in the communication field and the rapid development of DL, this article adds well-designed adversarial perturbations to the input signal to explore the performance of attack methods on modulation recognition, measures the effectiveness of adversarial attacks on signals, and provides an empirical evaluation of the reliability of CNNs. The results indicate that adversarial attacks significantly reduce the accuracy of the target model: when the perturbation factor is 0.001, the accuracy of the model drops by about 50% on average. Among the attacks, iterative methods show greater attack performance than the one-step method. In addition, the consistency of the waveform before and after perturbation is examined, to assess whether the added adversarial perturbations are small enough to be hard to distinguish by human eyes. This article also aims to inspire researchers to further improve the reliability of CNNs against adversarial attacks.
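The abstract contrasts one-step and iterative attacks scaled by a perturbation factor. The record itself contains no code, so the sketch below is only a rough illustration of those two attack families in PyTorch; the function names (fgsm_attack, iterative_attack), the use of cross-entropy loss, the step count, and the eps value are assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps):
    # One-step method (FGSM-style): move the input by eps in the
    # direction of the sign of the loss gradient.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

def iterative_attack(model, x, y, eps, steps=10):
    # Iterative variant (BIM/PGD-style): repeat small signed-gradient
    # steps and project back into the eps-ball around the clean signal.
    x_clean = x.clone().detach()
    x_adv = x_clean.clone()
    alpha = eps / steps
    for _ in range(steps):
        x_adv = x_adv.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        step = alpha * x_adv.grad.sign()
        x_adv = x_clean + torch.clamp(x_adv + step - x_clean, -eps, eps)
    return x_adv.detach()
```

For a modulation-recognition CNN, x would be a batch of raw I/Q samples (for example, shape (batch, 2, 128) as in common RadioML-style setups; that shape is likewise an assumption), and eps would correspond to the perturbation factor discussed in the summary, e.g. 0.001.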