SGTB: A graph representation learning model combining transformer and BERT for optimizing gene expression analysis in spatial transcriptomics data

Bibliographic Details
Published in: Computational biology and chemistry, Vol. 118, p. 108482
Main Authors: Liu, Farong; Ren, Sheng; Li, Jie; Lv, Haoyang; Jiang, Fenghui; Yu, Bin
Format: Journal Article
Language: English
Published: England: Elsevier Ltd, 01.10.2025
ISSN: 1476-9271, 1476-928X
DOI: 10.1016/j.compbiolchem.2025.108482

Summary: In recent years, spatial transcriptomics (ST) has emerged as an innovative technology that enables the simultaneous acquisition of gene expression information and its spatial distribution at the single-cell or regional level, providing a more holistic view of tissue organization, cellular interactions, and intercellular dynamics. However, existing methods still face limitations in data representation, making it difficult to fully capture complex spatial dependencies and global features. To address this, this paper proposes SGTB, a spatial multi-scale graph convolutional network based on large language models that integrates graph convolutional networks (GCN), a Transformer, and the BERT language model to optimize the representation of spatial transcriptomics data. The GCN employs a multi-layer architecture to extract features from gene expression matrices; through iterative aggregation of neighborhood information, it captures spatial dependencies among cells and gene co-expression patterns, constructing hierarchical cell embeddings. The model then applies an attention mechanism to weight critical features and uses Transformer layers to model global relationships, refining the learned representations so that they better reflect variations in spatial patterns. Finally, the model incorporates BERT, mapping cell embeddings into textual inputs to exploit its deep semantic representation capabilities for high-dimensional feature extraction; these features are fused with the embeddings generated by the Transformer, further optimizing feature learning for spatial transcriptomics data (see the sketch after the highlights below). This approach has significant application value for improving the accuracy of tasks such as cell type classification and gene regulatory network construction, and it provides a novel computational framework for deep mining of spatial multi-scale biological data.
•Combines GCN, Transformer, and BERT to overcome single-model limitations.
•Dynamic attention captures long-range dependencies.
•BERT's bidirectional encoding captures gene interactions.
•Multi-scale feature fusion enhances spatial analysis.
•Probabilistic decoder models spatial gene variability.
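To make the pipeline in the summary concrete, below is a minimal PyTorch sketch of a GCN → Transformer → BERT-fusion architecture of the kind described. It is an illustration under stated assumptions, not the authors' implementation: the module names (SimpleGCNLayer, SGTBSketch), all dimensions, the use of precomputed BERT embeddings per cell, and the concatenation-based fusion are hypothetical choices, and the paper's attention-weighting scheme and probabilistic decoder are omitted.

```python
# Minimal sketch of an SGTB-style pipeline (illustrative, not the paper's code):
# multi-layer GCN over the gene expression matrix -> Transformer encoder for
# global relationships -> fusion with precomputed BERT-style cell embeddings.
import torch
import torch.nn as nn


class SimpleGCNLayer(nn.Module):
    """One graph-convolution step: X' = ReLU(A_hat @ X @ W)."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj_norm):
        # adj_norm: (n_cells, n_cells) normalized spatial adjacency with
        # self-loops; each matmul aggregates neighborhood information.
        return torch.relu(adj_norm @ self.linear(x))


class SGTBSketch(nn.Module):
    def __init__(self, n_genes, hidden=128, bert_dim=768, n_heads=4, n_layers=2):
        super().__init__()
        # Multi-layer GCN: iterative neighborhood aggregation over the
        # gene expression matrix builds hierarchical cell embeddings.
        self.gcn1 = SimpleGCNLayer(n_genes, hidden)
        self.gcn2 = SimpleGCNLayer(hidden, hidden)
        # Transformer encoder: self-attention models global (long-range)
        # relationships among cells beyond the spatial neighbor graph.
        enc_layer = nn.TransformerEncoderLayer(
            d_model=hidden, nhead=n_heads, batch_first=True
        )
        self.transformer = nn.TransformerEncoder(enc_layer, num_layers=n_layers)
        # Project BERT embeddings into the same space, then fuse by
        # concatenation + linear projection (one plausible fusion choice).
        self.bert_proj = nn.Linear(bert_dim, hidden)
        self.fuse = nn.Linear(2 * hidden, hidden)

    def forward(self, expr, adj_norm, bert_emb):
        # expr: (n_cells, n_genes); adj_norm: (n_cells, n_cells);
        # bert_emb: (n_cells, bert_dim) precomputed text embeddings per cell.
        h = self.gcn2(self.gcn1(expr, adj_norm), adj_norm)
        # Treat the cells of one tissue section as a sequence for attention.
        g = self.transformer(h.unsqueeze(0)).squeeze(0)
        b = self.bert_proj(bert_emb)
        return self.fuse(torch.cat([g, b], dim=-1))  # fused cell embeddings


# Toy usage with random data (identity matrix stands in for a normalized
# spatial-neighbor graph).
n_cells, n_genes = 50, 200
model = SGTBSketch(n_genes)
z = model(torch.randn(n_cells, n_genes), torch.eye(n_cells),
          torch.randn(n_cells, 768))
print(z.shape)  # torch.Size([50, 128])
```

The fused embeddings z would then feed downstream heads such as a cell type classifier or, per the highlights, a probabilistic decoder for spatial gene variability; how the paper actually fuses the two streams and maps cell embeddings into BERT's textual input space may differ from this sketch.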