Diversified feature representation via deep auto-encoder ensemble through multiple activation functions
| Published in | Applied intelligence (Dordrecht, Netherlands) Vol. 52; no. 9; pp. 10591-10603 |
|---|---|
| Main Authors | , , , , , , |
| Format | Journal Article |
| Language | English |
| Published | New York: Springer US, 01.07.2022 (Springer Nature B.V) |
| ISSN | 0924-669X, 1573-7497 |
| DOI | 10.1007/s10489-021-03054-2 |
| Summary: | In this paper, we propose a novel Deep Auto-Encoders Ensemble model (DAEE) that assembles multiple deep network models with different activation functions. The hidden features obtained by the proposed model are more robust representations than those of traditional auto-encoder variants, because the model aggregates the diversified feature representations of its multiple activation sub-networks into a single, more robust uniform feature representation. To obtain this uniform representation, we set the weight of each auto-encoder sub-network by optimizing a cost function over all sub-networks in the model. This weighting decreases the influence of sub-networks with improper activations and increases that of sub-networks with appropriate activations, so that the final feature representation retains more predominant and comprehensive feature information. Extensive experiments on benchmark computer vision datasets, including MNIST, COIL-20, CIFAR-10 and SVHN, demonstrate the superiority of the proposed method over state-of-the-art auto-encoder methods such as sparse auto-encoders (SAE), denoising auto-encoders (DAE), stacked denoising auto-encoders (SDAE) and graph regularized auto-encoders (GAE). (An illustrative code sketch of this weighted-ensemble idea follows the record below.) |
|---|---|
| Bibliography: | ObjectType-Article-1 SourceType-Scholarly Journals-1 ObjectType-Feature-2 |
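The abstract describes the mechanism only at a high level: several auto-encoder sub-networks, each with a different activation function, whose hidden codes are fused with weights obtained by optimizing a cost function across the sub-networks. The following is a minimal sketch of that idea, not the authors' implementation; the single-hidden-layer sub-networks, the softmax-weighted fusion, and the joint training loop are assumptions made purely for illustration.

```python
# Minimal sketch (assumed details, not the paper's code): an ensemble of
# auto-encoders, one per activation function, whose hidden codes are fused
# with learnable weights trained jointly with the reconstruction loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SubAutoEncoder(nn.Module):
    """One sub-network: a single-hidden-layer auto-encoder with a fixed activation."""
    def __init__(self, in_dim, hid_dim, activation):
        super().__init__()
        self.enc = nn.Linear(in_dim, hid_dim)
        self.dec = nn.Linear(hid_dim, in_dim)
        self.act = activation

    def forward(self, x):
        h = self.act(self.enc(x))      # this sub-network's hidden feature
        return h, self.dec(h)          # code and reconstruction

class AutoEncoderEnsemble(nn.Module):
    """Fuses the sub-network codes with a softmax over learnable ensemble scores."""
    def __init__(self, in_dim, hid_dim, activations):
        super().__init__()
        self.subnets = nn.ModuleList(
            [SubAutoEncoder(in_dim, hid_dim, a) for a in activations])
        self.scores = nn.Parameter(torch.zeros(len(activations)))

    def forward(self, x):
        codes, recons = zip(*(net(x) for net in self.subnets))
        w = torch.softmax(self.scores, dim=0)                          # weights sum to 1
        fused = sum(wi * c for wi, c in zip(w, codes))                 # uniform feature
        loss = sum(wi * F.mse_loss(r, x) for wi, r in zip(w, recons))  # weighted cost
        return fused, loss

# Toy usage on random data with MNIST-sized inputs (784 pixels, assumed).
model = AutoEncoderEnsemble(784, 128, [torch.sigmoid, torch.tanh, F.relu])
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(64, 784)
for _ in range(5):
    opt.zero_grad()
    _, loss = model(x)
    loss.backward()
    opt.step()
```

In the paper the sub-network weights come from optimizing a dedicated cost function over the ensemble; the learnable softmax scores above merely stand in for that step so the sketch stays self-contained.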