Semisupervised Manifold Regularization via a Subnetwork-Based Representation Learning Model
| Published in | IEEE Transactions on Cybernetics, Vol. 53, No. 11, pp. 6923-6936 |
|---|---|
| Main Authors | |
| Format | Journal Article |
| Language | English |
| Published | United States: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.11.2023 |
| ISSN | 2168-2267, 2168-2275 |
| DOI | 10.1109/TCYB.2022.3177573 |
| Summary: | Semisupervised classification with a few labeled training samples is a challenging task in the area of data mining. Moore-Penrose inverse (MPI)-based manifold regularization (MR) is a widely used technique for tackling semisupervised classification. However, most existing MPI-based MR algorithms can only generate loosely connected feature encoding, which is generally less effective in data representation and feature learning. To alleviate this deficiency, we introduce a new semisupervised multilayer subnet neural network called SS-MSNN. The key contributions of this article are as follows: 1) a novel MPI-based MR model using the subnetwork structure is introduced. The subnet model is utilized to enrich the latent space representations iteratively; 2) a one-step training process to learn the discriminative encoding is proposed. The proposed SS-MSNN learns parameters by directly optimizing the entire network, accepting input at one end and producing output at the other; and 3) a new semisupervised dataset called HFSWR-RDE is built for this research. Experimental results on multiple domains show that SS-MSNN achieves promising performance compared with other semisupervised learning algorithms, demonstrating fast inference speed and better generalization ability. |
|---|---|
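To make the technique named in the summary concrete, below is a minimal, illustrative sketch of a generic Moore-Penrose inverse (MPI)-based manifold regularization (MR) classifier for semisupervised data. This is not the authors' SS-MSNN: the k-NN graph construction, the symbols H, Y, J, L, and the hyperparameters `lam`/`gam` are assumptions introduced only to show the closed-form, pseudoinverse-based solve with a graph-Laplacian smoothness term.

```python
# Illustrative sketch only: a generic MPI-based manifold-regularized classifier,
# NOT the paper's SS-MSNN. Names and hyperparameters below are assumptions.
import numpy as np
from scipy.spatial.distance import cdist

def knn_laplacian(X, k=5, sigma=1.0):
    """Unnormalized graph Laplacian from a symmetrized k-NN RBF affinity graph."""
    d2 = cdist(X, X, metric="sqeuclidean")
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    idx = np.argsort(d2, axis=1)[:, 1:k + 1]   # k nearest neighbours, excluding self
    mask = np.zeros_like(W, dtype=bool)
    rows = np.repeat(np.arange(X.shape[0]), k)
    mask[rows, idx.ravel()] = True
    W = np.where(mask | mask.T, W, 0.0)        # keep affinities only on the k-NN graph
    return np.diag(W.sum(axis=1)) - W          # L = D - W

def mr_mpi_fit(H, Y, labeled, lam=1e-2, gam=1e-3, k=5):
    """Closed-form MR solve: beta = pinv(H'JH + lam*H'LH + gam*I) @ H'JY."""
    n, d = H.shape
    J = np.diag(labeled.astype(float))         # selects the labeled samples
    L = knn_laplacian(H, k=k)                  # smoothness over labeled + unlabeled
    A = H.T @ J @ H + lam * (H.T @ L @ H) + gam * np.eye(d)
    return np.linalg.pinv(A) @ H.T @ J @ Y     # Moore-Penrose pseudoinverse

# Toy usage: two Gaussian blobs, one labeled point per class, rest unlabeled.
rng = np.random.default_rng(0)
H = np.vstack([rng.normal(-2.0, 1.0, (50, 2)), rng.normal(2.0, 1.0, (50, 2))])
Y = np.zeros((100, 2)); Y[0, 0] = 1.0; Y[50, 1] = 1.0    # one-hot targets
labeled = np.zeros(100, dtype=bool); labeled[[0, 50]] = True
beta = mr_mpi_fit(H, Y, labeled)
pred = (H @ beta).argmax(axis=1)
print("predicted class counts:", np.bincount(pred))
```

In this sketch the labeled-sample mask J restricts the fitting loss to labeled rows, while the Laplacian term pulls predictions of nearby points toward each other, which is the core idea behind MR-style semisupervised classifiers solved in closed form via the pseudoinverse.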