Cross-subject dual-domain fusion network with task-related and task-discriminant component analysis enhancing one-shot SSVEP classification
Main Authors | |
---|---|
Format | Journal Article |
Language | English |
Published | 14.11.2023 |
Subjects | |
Online Access | Get full text |
DOI | 10.48550/arxiv.2311.07932 |
Summary: This study addresses the significant challenge of developing efficient decoding algorithms for classifying steady-state visual evoked potentials (SSVEPs) under extreme calibration-data scarcity, where only one calibration trial is available for each stimulus target. To tackle this problem, we introduce a novel cross-subject dual-domain fusion network (CSDuDoFN) incorporating task-related and task-discriminant component analysis (TRCA and TDCA) for one-shot SSVEP classification. The CSDuDoFN framework is designed to comprehensively transfer information from source subjects, while TRCA and TDCA are employed to exploit the single available calibration trial of the target subject. Specifically, we develop a multi-reference least-squares transformation (MLST) to map data from both the source subjects and the target subject into the domain of sine-cosine templates, thereby mitigating inter-individual variability and benefiting transfer learning. Subsequently, the transformed data in the sine-cosine template domain and the original-domain data are used separately to train a convolutional neural network (CNN) model, with their feature maps fused at distinct network layers. To further capitalize on the target subject's calibration trial, source aliasing matrix estimation (SAME) data augmentation is incorporated into the training of the ensemble TRCA (eTRCA) and TDCA models. Ultimately, the outputs of CSDuDoFN, eTRCA, and TDCA are combined for SSVEP classification. The effectiveness of the proposed approach is comprehensively evaluated on three publicly available SSVEP datasets, achieving the best performance on two and competitive performance on the third. This underscores the potential for integrating brain-computer interfaces (BCIs) into daily life.
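The least-squares mapping at the heart of MLST, as described in the summary, can be pictured with a short sketch. The Python snippet below is a minimal, single-reference illustration under stated assumptions: the record does not detail the paper's multi-reference formulation or preprocessing, and the function names, harmonic count, and data shapes are illustrative only.

```python
import numpy as np

def sine_cosine_reference(freq, fs, n_samples, n_harmonics=5):
    """Standard SSVEP sine-cosine reference, shape (2 * n_harmonics, n_samples)."""
    t = np.arange(n_samples) / fs
    rows = []
    for h in range(1, n_harmonics + 1):
        rows.append(np.sin(2 * np.pi * h * freq * t))
        rows.append(np.cos(2 * np.pi * h * freq * t))
    return np.stack(rows)

def lst_transform(X, Y):
    """Map one EEG trial X (channels x samples) toward reference Y
    (2*n_harmonics x samples) by finding P that minimizes ||P @ X - Y||_F."""
    # Solve X.T @ P.T ~= Y.T in the least-squares sense.
    P_T, *_ = np.linalg.lstsq(X.T, Y.T, rcond=None)
    return P_T.T @ X  # trial expressed in the sine-cosine template domain

# Toy usage: random data standing in for a single 1 s calibration trial.
fs, n_channels, n_samples = 250, 9, 250
X = np.random.randn(n_channels, n_samples)
Y = sine_cosine_reference(freq=10.0, fs=fs, n_samples=n_samples)
X_tilde = lst_transform(X, Y)
print(X_tilde.shape)  # (10, 250) with 5 harmonics
```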
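The SAME-style augmentation mentioned in the summary can likewise be sketched as estimating an aliasing matrix that maps the sine-cosine reference to the calibration trial and then generating noisy reconstructions. This is an assumed, simplified reading of that step; the noise model, scaling, and number of synthetic trials used in the paper are not given in the record.

```python
import numpy as np

def same_like_augment(X_trial, Y, n_aug=4, noise_scale=0.05, seed=None):
    """X_trial: (channels x samples) calibration trial; Y: (2*n_harmonics x samples)
    sine-cosine reference. Returns (n_aug, channels, samples) synthetic trials."""
    rng = np.random.default_rng(seed)
    # Least-squares estimate of an aliasing matrix A such that X_trial ~= A @ Y.
    A_T, *_ = np.linalg.lstsq(Y.T, X_trial.T, rcond=None)
    X_hat = A_T.T @ Y                    # noise-free reconstruction of the trial
    sigma = noise_scale * np.std(X_hat)  # assumed noise level
    return np.stack([X_hat + rng.normal(0.0, sigma, X_hat.shape)
                     for _ in range(n_aug)])

# Applied to the X and Y from the previous sketch, this would yield a few
# extra trials per target to enlarge the eTRCA/TDCA training set.
```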
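Finally, combining the outputs of CSDuDoFN, eTRCA, and TDCA could, for example, take the form of a normalized score-level fusion like the sketch below. The actual fusion rule (weights, normalization) is an assumption, since the record does not specify it.

```python
import numpy as np

def fuse_scores(*score_vectors):
    """Each argument is one classifier's per-target score vector (higher = better).
    Scores are z-normalized so CNN softmax outputs and correlation coefficients
    are comparable, then summed; the predicted target is the argmax."""
    fused = np.zeros(len(score_vectors[0]), dtype=float)
    for s in score_vectors:
        s = np.asarray(s, dtype=float)
        fused += (s - s.mean()) / (s.std() + 1e-12)
    return int(np.argmax(fused))

# Hypothetical scores over four stimulus targets from CSDuDoFN, eTRCA, and TDCA.
print(fuse_scores([0.10, 0.60, 0.20, 0.10],
                  [0.05, 0.30, 0.10, 0.02],
                  [0.20, 0.50, 0.40, 0.10]))  # -> 1
```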