A novel technique for identifying attentional selection in a dichotic environment


Bibliographic Details
Published in: Annual IEEE India Conference, pp. 1-5
Main Authors: Shree, Priya; Swami, Piyush; Suresh, Varsha; Gandhi, Tapan Kumar
Format: Conference Proceeding
Language: English
Published: IEEE, 01.12.2016
ISSN: 2325-9418
DOI: 10.1109/INDICON.2016.7838885


More Information
Summary: Healthy humans have an innate ability to concentrate on a voice of their choice even in noisy surroundings, yet the process by which the brain segregates and selects a particular sound is still not fully understood. Recent studies have successfully demonstrated reconstruction of stimulus speech envelopes through mathematical modeling. To determine the attentional focus of a listener in multi-speaker settings, existing models rely on the correlation between the reconstructed speech signals and the electroencephalogram (EEG) signals acquired while listening to the actual speech. However, models of this type require substantial time to reconstruct the stimulus and classify the direction of attention. The present study proposes a novel solution to the "cocktail party problem" using a machine learning approach. In this work, classification features, viz. standard deviation, mean absolute value, mean absolute deviation, and root-mean-square value, were extracted from EEG data. The extracted features were fed into an artificial neural network (ANN) model with a randomized sub-sampling procedure. The final outcome showed ceiling-level performance in predicting the attentional focus within subjects. These findings attest to the robustness of the developed model for auditory stream segregation.
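The four features named in the abstract are standard time-domain statistics, so the feature-extraction step can be sketched as follows. This is a minimal illustration only: the window length, sampling rate, channel handling, and function name are assumptions, not details taken from the paper.

```python
import numpy as np

def extract_features(eeg_window):
    """Compute the four features listed in the abstract for one EEG
    channel window (1-D array): standard deviation, mean absolute
    value, mean absolute deviation, and root-mean-square value."""
    x = np.asarray(eeg_window, dtype=float)
    return np.array([
        np.std(x),                      # standard deviation
        np.mean(np.abs(x)),             # mean absolute value
        np.mean(np.abs(x - x.mean())),  # mean absolute deviation
        np.sqrt(np.mean(x ** 2)),       # root-mean-square value
    ])

# Example on a synthetic 1-second window at an assumed 256 Hz rate
rng = np.random.default_rng(0)
window = rng.standard_normal(256)
features = extract_features(window)
print(features.shape)  # (4,)
```

In the paper's pipeline, a feature vector like this (per channel, per window) would be fed to the ANN classifier; the sub-sampling and network configuration are not reproduced here.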