Class-aware progressive self-training for learning convolutional networks on graphs

Bibliographic Details
Published in: Expert Systems with Applications, Vol. 238, p. 121805
Main Authors: Chen, Ke; Wu, Weining
Format: Journal Article
Language: English
Published: Elsevier Ltd, 15.03.2024
ISSN: 0957-4174, 1873-6793
DOI: 10.1016/j.eswa.2023.121805

Summary: Learning convolutional networks on graphs has been a popular topic in machine learning on graph-structured data and has achieved state-of-the-art results on various practical tasks. However, most existing works ignore the impact of the per-class distribution, so their performance may be limited by the diversity across categories. In this paper, we propose a novel class-aware progressive self-training (CPS) algorithm for training graph convolutional networks (GCNs). Compared with other self-training algorithms for GCN learning, the proposed CPS algorithm leverages the class distribution to update the original graph structure in each self-training loop: (a) it finds the high-confidence unlabeled nodes of each category and assigns them pseudo labels, enlarging the current set of labeled nodes; and (b) it deletes noisy edges between different classes to sparsify the graph. The optimized graph is then used in the next self-training loop, with the aim of enhancing classification performance. We evaluate the proposed CPS on several datasets commonly used for GCN learning, and the experimental results show that it outperforms other baselines.
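The abstract describes the CPS loop only at a high level. The following minimal Python sketch illustrates one plausible reading of the two per-loop steps; it is not the authors' implementation. The `train_gcn` callable, the confidence threshold, and the per-class quota are hypothetical placeholders, and the paper's actual selection rule and hyperparameters may differ.

```python
import numpy as np

def cps_self_training(adj, features, labels, train_mask, train_gcn,
                      n_loops=5, conf_threshold=0.9, per_class_quota=20):
    """Sketch of a class-aware progressive self-training loop.

    Assumptions (not from the paper): `train_gcn(adj, features, labels, mask)`
    fits a GCN on the currently labeled nodes and returns per-node class
    probabilities of shape (n_nodes, n_classes); `adj` is a dense symmetric
    adjacency matrix; thresholds and quotas are illustrative values.
    """
    labels, mask = labels.copy(), train_mask.copy()
    for _ in range(n_loops):
        probs = train_gcn(adj, features, labels, mask)
        preds = probs.argmax(axis=1)
        conf = probs.max(axis=1)

        # (a) Per class, pseudo-label the most confident unlabeled nodes,
        # enlarging the labeled set in a class-aware way.
        for c in range(probs.shape[1]):
            cand = np.where(~mask & (preds == c) & (conf >= conf_threshold))[0]
            top = cand[np.argsort(-conf[cand])][:per_class_quota]
            labels[top] = c
            mask[top] = True

        # (b) Sparsify: delete edges whose endpoints are predicted to
        # belong to different classes (treated as noisy inter-class edges).
        rows, cols = np.nonzero(adj)
        noisy = preds[rows] != preds[cols]
        adj = adj.copy()
        adj[rows[noisy], cols[noisy]] = 0

    return adj, labels, mask
```

The optimized adjacency matrix and enlarged label set returned by each loop feed the next one, mirroring the progressive structure the abstract describes.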