IoT-Based Android Malware Detection Using Graph Neural Network With Adversarial Defense


Bibliographic Details
Published in: IEEE Internet of Things Journal, Vol. 10, No. 10, pp. 8432-8444
Main Authors: Yumlembam, Rahul; Issac, Biju; Jacob, Seibu Mary; Yang, Longzhi
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 15.05.2023
ISSN: 2327-4662
DOI: 10.1109/JIOT.2022.3188583


More Information
Summary: Since the Internet of Things (IoT) is widely adopted using Android applications, detecting malicious Android apps is essential. In recent years, Android graph-based deep learning research has proposed many approaches to extract relationships from the application as a graph and generate graph embeddings. First, we demonstrate the effectiveness of graph-based classification using a graph neural network (GNN)-based classifier to generate API graph embeddings. The graph embedding is used together with "Permission" and "Intent" features to train multiple machine learning and deep learning algorithms to detect Android malware. The classification achieved an accuracy of 98.33% on the CICMaldroid data set and 98.68% on the Drebin data set. However, graph-based deep learning is vulnerable, as an attacker can add fake relationships to avoid detection by the classifier. Second, we propose a generative adversarial network (GAN)-based algorithm named VGAE-MalGAN to attack the graph-based GNN Android malware classifier. The VGAE-MalGAN generator generates adversarial malware API graphs, while the VGAE-MalGAN substitute detector (SD) tries to fit the detector. Experimental analysis shows that VGAE-MalGAN can effectively reduce the detection rate of GNN malware classifiers. Although the model fails to detect adversarial malware, experimental analysis shows that retraining the model with the generated adversarial samples helps to combat adversarial attacks.
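The pipeline the summary describes (a GNN producing an API graph embedding, concatenated with binary "Permission" and "Intent" features before classification) can be illustrated with a minimal sketch. This is a hypothetical NumPy illustration, not the paper's actual implementation: the layer follows the standard GCN propagation rule, and all function names, dimensions, and random inputs are assumptions made for the example.

```python
import numpy as np

def gcn_layer(adj, feats, weight):
    """Standard GCN step: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    a_hat = adj + np.eye(adj.shape[0])               # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ a_hat @ d_inv_sqrt @ feats @ weight, 0.0)

def app_feature_vector(adj, api_feats, weights, perm_intent):
    """GNN embedding of the API graph, concatenated with Permission/Intent bits."""
    h = api_feats
    for w in weights:
        h = gcn_layer(adj, h, w)
    graph_emb = h.mean(axis=0)                       # mean-pool node states
    return np.concatenate([graph_emb, perm_intent])

# Toy API call graph: 6 API nodes, 8-dim node features, 4-dim graph embedding,
# 10 binary Permission/Intent indicators (all sizes illustrative).
rng = np.random.default_rng(0)
n_apis, in_dim, hid_dim, emb_dim, n_perms = 6, 8, 16, 4, 10
adj = rng.integers(0, 2, (n_apis, n_apis))
adj = np.triu(adj, 1)
adj = adj + adj.T                                    # symmetric, no self-loops
weights = [rng.standard_normal((in_dim, hid_dim)),
           rng.standard_normal((hid_dim, emb_dim))]
vec = app_feature_vector(adj, rng.standard_normal((n_apis, in_dim)),
                         weights, rng.integers(0, 2, n_perms))
print(vec.shape)  # (14,) -- 4-dim graph embedding + 10 Permission/Intent bits
```

The resulting vector is what a downstream machine learning or deep learning classifier would consume; the adversarial attack the summary describes perturbs the API graph (adds fake edges) so that this embedding no longer looks malicious to the detector.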