Autoencoders with shared and specific embeddings for multi-omics data integration

Bibliographic Details
Published in: BMC Bioinformatics, Vol. 26, no. 1, pp. 214 - 16
Main Authors: Wang, Chao; O’Connell, Michael J.
Format: Journal Article
Language: English
Published: London, BioMed Central, 19.08.2025 (BioMed Central Ltd; Springer Nature B.V.; BMC)
ISSN: 1471-2105
DOI: 10.1186/s12859-025-06245-7


More Information
Summary:
Background: In cancer research, different levels of high-dimensional data are often collected for the same subjects. Effective integration of these data, accounting for both the shared and the source-specific information in each data source, can help us better understand different types of cancer.
Results: In this study we propose a novel autoencoder (AE) structure with an explicitly defined orthogonal loss between the shared and specific embeddings to integrate different data sources. We compare our model with previously proposed AE structures on simulated data and real cancer data from The Cancer Genome Atlas. Using simulations with different proportions of differentially expressed genes, we compare the performance of the AE methods on subsequent classification tasks. We also compare against a commonly used dimension-reduction method, joint and individual variation explained (JIVE). Our proposed AE models with orthogonal constraints achieve slightly lower reconstruction loss, and all AE models achieve higher classification accuracy than the original features, demonstrating the usefulness of the embeddings extracted by the models.
Conclusions: The proposed models have consistently high classification accuracy on both training and testing sets. In comparison, the recently proposed MOCSS model, which imposes an orthogonality penalty in a post-processing step, has lower classification accuracy, on par with JIVE.
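The orthogonal loss between shared and specific embeddings mentioned in the summary can be sketched numerically. The snippet below is a minimal illustration, not the paper's implementation: the function name `orthogonality_penalty` and the squared-Frobenius-norm form of the penalty are assumptions, and the paper's exact loss may differ in normalization or centering. The idea is to penalize the cross-product of the two embedding matrices, which vanishes exactly when the shared and specific embedding spaces are orthogonal:

```python
import numpy as np

def orthogonality_penalty(z_shared, z_specific):
    # Squared Frobenius norm of the cross-product Z_shared^T Z_specific.
    # Driving this toward zero during training would encourage the
    # specific embedding to carry information not already captured by
    # the shared embedding. (Illustrative form only.)
    cross = z_shared.T @ z_specific          # shape (d_shared, d_specific)
    return float(np.sum(cross ** 2))

rng = np.random.default_rng(0)
n, d = 100, 8
z_shared = rng.normal(size=(n, d))

# An embedding identical to the shared one overlaps completely ...
penalty_overlap = orthogonality_penalty(z_shared, z_shared)

# ... while one projected onto the orthogonal complement of the shared
# column space incurs (numerically) zero penalty.
q, _ = np.linalg.qr(z_shared)                # orthonormal basis of z_shared's columns
z_other = rng.normal(size=(n, d))
z_orth = z_other - q @ (q.T @ z_other)       # remove the shared component
penalty_orth = orthogonality_penalty(z_shared, z_orth)

print(penalty_overlap > penalty_orth)        # True: orthogonalized pair scores lower
```

In a full model this term would simply be added, with a tuning weight, to the reconstruction loss of each autoencoder, which is consistent with the summary's contrast against MOCSS, where orthogonality is instead enforced in a post-processing step.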