A transformer-based deep neural network model for SSVEP classification

Bibliographic Details
Published in: Neural Networks, Vol. 164, pp. 521–534
Main Authors: Chen, Jianbo; Zhang, Yangsong; Pan, Yudong; Xu, Peng; Guan, Cuntai
Format: Journal Article
Language: English
Published: United States, Elsevier Ltd, 01.07.2023
ISSN: 0893-6080
EISSN: 1879-2782
DOI: 10.1016/j.neunet.2023.04.045

Summary: Steady-state visual evoked potential (SSVEP) is one of the most commonly used control signals in brain–computer interface (BCI) systems. However, conventional spatial filtering methods for SSVEP classification depend heavily on subject-specific calibration data, so methods that reduce the need for calibration data are urgently needed. In recent years, developing methods that work in the inter-subject scenario has become a promising direction. The Transformer, a popular deep learning architecture, has been applied to EEG classification tasks owing to its excellent performance. In this study, we therefore proposed a Transformer-based deep learning model for SSVEP classification in the inter-subject scenario, termed SSVEPformer, which is the first application of the Transformer to SSVEP classification. Inspired by previous studies, we adopted the complex spectrum features of the SSVEP data as the model input, enabling the model to explore spectral and spatial information simultaneously. Furthermore, to fully utilize the harmonic information, an extended SSVEPformer based on the filter bank technique (FB-SSVEPformer) was proposed to improve the classification performance. Experiments were conducted on two open datasets (Dataset 1: 10 subjects, 12 targets; Dataset 2: 35 subjects, 40 targets). The experimental results show that the proposed models achieve better classification accuracy and information transfer rate than the baseline methods. The proposed models validate the feasibility of Transformer-based deep learning models for SSVEP classification and could serve as potential models to alleviate the calibration procedure in practical applications of SSVEP-based BCI systems.
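The complex spectrum features mentioned in the abstract are commonly obtained by taking the FFT of each EEG channel and concatenating the real and imaginary parts within the stimulation frequency band. The sketch below illustrates that idea only; it is not the authors' code, and the function name, band limits, and array shapes are assumptions for illustration.

```python
import numpy as np

def complex_spectrum_features(eeg, fs, f_lo=8.0, f_hi=64.0):
    """Per-channel complex spectrum features (illustrative sketch).

    eeg:  (channels, samples) time-domain SSVEP segment
    fs:   sampling rate in Hz
    Returns (channels, 2 * n_bins): real parts then imaginary parts
    of the FFT, restricted to the [f_lo, f_hi] Hz band.
    """
    n = eeg.shape[-1]
    spec = np.fft.rfft(eeg, axis=-1)             # (channels, n//2 + 1)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)       # frequency of each bin
    band = (freqs >= f_lo) & (freqs <= f_hi)     # keep stimulation band only
    return np.concatenate([spec.real[:, band], spec.imag[:, band]], axis=-1)

# Example: 8 channels, 1 s of data at 256 Hz -> 57 bins in 8-64 Hz,
# so the feature matrix is (8, 114).
x = np.random.randn(8, 256)
feats = complex_spectrum_features(x, fs=256)
```

In the filter-bank variant (FB-SSVEPformer), one would first band-pass the signal into several sub-bands to capture harmonics and fuse the per-band outputs; that stage is omitted from this sketch.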