Automated Loudness Growth Prediction From EEG Signals Using Autoencoder and Multi-Target Regression

Bibliographic Details
Published in: IEEE Access, Vol. 13, pp. 106561-106572
Main Authors: Rama Harshita, D., Tiwari, Nitya, Padole, Himanshu, Nataraj, K. S.
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2025
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2025.3580833


More Information
Summary: Accurately assessing loudness perception is crucial for optimizing hearing aid fittings, especially for individuals who are unable to perform subjective tests. This study presents an automated method for estimating frequency-specific loudness growth curves using tone-burst auditory brainstem responses (ABRs), which are subsets of EEG (electroencephalography) signals. Unlike traditional methods that rely on manually engineered features, the proposed method uses a convolutional autoencoder to learn latent representations of ABR signals, reducing dimensionality while preserving critical auditory information. The extracted features are mapped to psychoacoustic loudness growth estimates using a multi-target regression model based on a convolutional neural network. An ablation study was conducted to analyze the impact of different autoencoder configurations on feature extraction performance. The results demonstrate strong predictive consistency, with high Pearson correlation coefficients (PCC ≥ 0.9) and low mean square errors (MSE ≤ 0.0011) across different stimulus frequencies and subjects.
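
The summary describes a two-stage pipeline: a convolutional autoencoder compresses tone-burst ABR epochs into a low-dimensional latent representation, and a CNN-based multi-target regressor maps that representation to points on a frequency-specific loudness growth curve, with performance reported as Pearson correlation and mean square error. The sketch below illustrates how such a pipeline could be wired together; it is not the authors' implementation, and the framework (PyTorch), epoch length, latent size, layer configuration, and number of curve points are all assumptions not stated in this record.

# Minimal sketch (illustrative only, not the published method): a 1-D
# convolutional autoencoder compresses ABR epochs into latent features,
# and a small CNN-based head regresses multiple loudness-growth points.
# SIG_LEN, LATENT_DIM, and N_TARGETS are assumed values.
import torch
import torch.nn as nn

SIG_LEN = 256       # assumed ABR epoch length (samples)
LATENT_DIM = 32     # assumed latent dimensionality
N_TARGETS = 8       # assumed number of loudness-growth points per curve


class ConvAutoencoder(nn.Module):
    """1-D convolutional autoencoder for ABR epochs."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * (SIG_LEN // 4), LATENT_DIM),
        )
        self.decoder = nn.Sequential(
            nn.Linear(LATENT_DIM, 32 * (SIG_LEN // 4)),
            nn.Unflatten(1, (32, SIG_LEN // 4)),
            nn.ConvTranspose1d(32, 16, kernel_size=5, stride=2,
                               padding=2, output_padding=1), nn.ReLU(),
            nn.ConvTranspose1d(16, 1, kernel_size=7, stride=2,
                               padding=3, output_padding=1),
        )

    def forward(self, x):
        z = self.encoder(x)           # latent ABR features
        return self.decoder(z), z     # reconstruction and features


class LoudnessRegressor(nn.Module):
    """CNN-based multi-target regression: latent features -> loudness curve."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(8 * LATENT_DIM, N_TARGETS),
        )

    def forward(self, z):
        return self.net(z.unsqueeze(1))   # treat latent vector as a 1-D signal


def pearson_cc(pred, target):
    """Pearson correlation coefficient between two flattened tensors."""
    p, t = pred - pred.mean(), target - target.mean()
    return (p * t).sum() / (p.norm() * t.norm() + 1e-12)


if __name__ == "__main__":
    ae, reg = ConvAutoencoder(), LoudnessRegressor()
    x = torch.randn(4, 1, SIG_LEN)        # dummy batch of ABR epochs
    recon, z = ae(x)
    curves = reg(z)
    target = torch.rand(4, N_TARGETS)     # dummy normalized loudness targets
    mse = nn.functional.mse_loss(curves, target)
    pcc = pearson_cc(curves.flatten(), target.flatten())
    print(recon.shape, curves.shape, mse.item(), pcc.item())

In such a setup the autoencoder would first be trained to reconstruct the ABR epochs; the encoder's latent vectors are then fed to the regressor, which is trained against psychoacoustically measured loudness targets, mirroring the feature-extraction and multi-target regression stages described in the summary.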