Speech Emotion Recognition Using Gammatone Cepstral Coefficients and Deep Learning Features

Bibliographic Details
Published in: 2023 IEEE International Conference on Machine Learning and Applied Network Technologies (ICMLANT), pp. 1-4
Main Author: Sharan, Roneel V.
Format: Conference Proceeding
Language: English
Published: IEEE, 14.12.2023
DOI: 10.1109/ICMLANT59547.2023.10372986

More Information
Summary: Speech emotion recognition finds various applications, such as enhancing human-computer interaction and aiding remote mental health monitoring. This work proposes a method for speech emotion recognition using a combination of handcrafted and deep learning features. In particular, it studies the use of gammatone cepstral coefficients, which are computed using gammatone filters that model the human auditory filters, and deep learning feature embeddings extracted from a pretrained network for audio analysis. A multilayer perceptron is employed for classification on the combined feature set, where feature selection is performed using one-way analysis of variance. The proposed method is evaluated on a dataset of 535 speech recordings containing 7 types of emotions from 10 subjects. An average accuracy of 0.7631 is achieved in classifying the emotions in leave-one-subject-out cross-validation. Analysis of the results shows that gammatone cepstral coefficients improve classification accuracy over conventional mel-frequency cepstral coefficients, and that accuracy improves further when they are combined with deep learning features.
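The summary describes the classification stage of the method: features are combined, reduced with a one-way ANOVA test, and classified with a multilayer perceptron under leave-one-subject-out cross-validation. The following is a minimal sketch of such a pipeline in scikit-learn, assuming the gammatone cepstral coefficient statistics and the pretrained-network embeddings have already been extracted and concatenated into a feature matrix X, with emotion labels y and speaker identifiers groups. The helper names (build_pipeline, evaluate), the hidden-layer size, and the number of selected features are illustrative placeholders, not the configuration reported in the paper.

# Sketch of the described pipeline (scikit-learn), assuming handcrafted
# GTCC statistics and deep embeddings are already extracted and
# concatenated per recording into X; y holds the 7 emotion labels and
# groups holds the speaker IDs for leave-one-subject-out CV.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

def build_pipeline(k_features: int = 200) -> Pipeline:
    """One-way ANOVA feature selection followed by an MLP classifier."""
    return Pipeline([
        ("scale", StandardScaler()),                      # normalise the combined feature set
        ("anova", SelectKBest(f_classif, k=k_features)),  # keep features with the highest ANOVA F-scores
        ("mlp", MLPClassifier(hidden_layer_sizes=(128,),  # placeholder architecture
                              max_iter=500,
                              random_state=0)),
    ])

def evaluate(X: np.ndarray, y: np.ndarray, groups: np.ndarray) -> float:
    """Average accuracy over leave-one-subject-out folds."""
    scores = cross_val_score(build_pipeline(), X, y,
                             groups=groups,
                             cv=LeaveOneGroupOut(),
                             scoring="accuracy")
    return float(scores.mean())

With 10 subjects, LeaveOneGroupOut yields 10 folds, each holding out all recordings of one speaker, which matches the evaluation protocol stated in the summary.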