EEG-based imagined words classification using Hilbert transform and deep networks

Bibliographic Details
Published in: Multimedia Tools and Applications, Vol. 83, No. 1, pp. 2725-2748
Main Authors: Agarwal, Prabhakar; Kumar, Sandeep
Format: Journal Article
Language: English
Published: New York: Springer US, 01.01.2024 (Springer Nature B.V.)
ISSN: 1380-7501, 1573-7721
DOI: 10.1007/s11042-023-15664-8


More Information
Summary: Completely paralyzed and quadriplegic patients cannot communicate with others. However, their imagined thoughts can be used to drive assistive devices through brain-computer interfacing (BCI), whose success relies on high classification accuracy. In this paper, we perform an experiment on the classification of imagined words, which can provide an alternative neural path of speech communication for such patients. A 32-channel industry-standard physiological signal system is used to record imagined electroencephalogram (EEG) signals for five words (sos, stop, medicine, washroom, comehere) from 13 subjects. We use the Hilbert transform to calculate time and joint time-frequency features from the imagined EEG signals. These features are extracted individually from the electrodes corresponding to nine brain regions, and each region is further analyzed in seven EEG frequency bands. The imagined-speech features from each of the 63 combinations of brain region and frequency band are classified by the proposed deep architectures: long short-term memory (LSTM), gated recurrent unit (GRU), and convolutional neural network (CNN). Some combinations are also classified by six traditional machine learning classifiers for performance comparison. In a five-class classification framework, we achieve an average accuracy of 71.75% and a maximum accuracy of 94.29%. CNN gives the highest accuracy, while LSTM gives the shortest network prediction time. Our results show that the alpha band classifies imagined speech better than the other frequency bands. We implement a subject-independent BCI, and the results surpass the state-of-the-art methods in the literature.
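The feature-extraction pipeline the abstract describes (band-limit the EEG, then derive Hilbert-transform features) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the band edges, filter order, sampling rate, and the choice of envelope/instantaneous-frequency statistics as features are all assumptions, shown here for the alpha band (8-13 Hz) that the paper reports as most discriminative.

```python
import numpy as np
from scipy.signal import hilbert, butter, sosfiltfilt

def alpha_band_hilbert_features(eeg, fs=250.0):
    """Band-pass one EEG channel to the alpha band (8-13 Hz, assumed edges)
    and compute Hilbert-based features: the analytic signal's envelope
    (instantaneous amplitude) and its instantaneous frequency."""
    # 4th-order Butterworth band-pass, applied forward-backward (zero phase)
    sos = butter(4, [8.0, 13.0], btype="bandpass", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, eeg)

    # Analytic signal via the Hilbert transform
    analytic = hilbert(filtered)
    envelope = np.abs(analytic)                      # instantaneous amplitude
    phase = np.unwrap(np.angle(analytic))            # instantaneous phase
    inst_freq = np.diff(phase) * fs / (2.0 * np.pi)  # instantaneous frequency (Hz)

    # Summary statistics as hypothetical per-channel features
    return {
        "mean_envelope": float(envelope.mean()),
        "std_envelope": float(envelope.std()),
        "mean_inst_freq": float(inst_freq.mean()),
    }

# Example: 2 s of synthetic 10 Hz "alpha" activity plus noise
rng = np.random.default_rng(0)
fs = 250.0
t = np.arange(0, 2.0, 1.0 / fs)
signal = np.sin(2 * np.pi * 10.0 * t) + 0.1 * rng.standard_normal(t.size)
feats = alpha_band_hilbert_features(signal, fs=fs)
```

For a clean 10 Hz input, the mean instantaneous frequency recovered this way sits near 10 Hz, which is how the analytic signal exposes joint time-frequency structure without an explicit spectrogram. In the paper's setup, such features would be computed per electrode group and per frequency band before being fed to the classifiers.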