Baby Cry Classification Using Structure-Tuned Artificial Neural Networks with Data Augmentation and MFCC Features
| Published in | Applied sciences Vol. 15; no. 5; p. 2648 |
|---|---|
| Main Authors | , |
| Format | Journal Article |
| Language | English |
| Published | Basel: MDPI AG, 01.03.2025 |
| Subjects | |
| ISSN | 2076-3417 |
| DOI | 10.3390/app15052648 |
| Summary: | Babies express their needs, such as hunger, discomfort, or sleeplessness, by crying. However, interpreting these cries correctly can be challenging for parents. This can delay meeting the baby’s needs, increase parents’ stress levels, and negatively affect the baby’s development. In this paper, an integrated system for the classification of baby sounds is proposed. The proposed method includes data augmentation, feature extraction, hyperparameter tuning, and model training steps. In the first step, various data augmentation techniques were applied to increase the training data’s diversity and strengthen the model’s generalization capacity. In the second step, the MFCC (Mel-Frequency Cepstral Coefficients) method was used to extract meaningful and distinctive features from the sound data. MFCC represents sound signals based on the frequencies the human ear perceives and provides a strong basis for classification. The obtained features were classified with an artificial neural network (ANN) model with optimized hyperparameters. The hyperparameter optimization of the model was performed using the grid search algorithm, and the most appropriate parameters were determined. The training, validation, and test data sets were split at ratios of 75%, 10%, and 15%, respectively. The model’s performance was tested on mixed sounds. The test results were analyzed, and the proposed method showed the highest performance, with a 90% accuracy rate. In a comparison study with an artificial neural network (ANN) on the Donate a Cry data set, the F1 score was reported as 46.99% and the test accuracy as 85.93%. In this paper, additional techniques such as data augmentation, hyperparameter tuning, and MFCC feature extraction allowed the model accuracy to reach 90%. The proposed method offers an effective solution for classifying baby sounds and brings a new approach to this field. A minimal code sketch of this pipeline follows the table. |
|---|---|
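
The abstract outlines a pipeline of data augmentation, MFCC feature extraction, and an ANN tuned by grid search over a 75%/10%/15% split. Below is a minimal sketch of such a pipeline, assuming librosa for audio handling and scikit-learn for the classifier and grid search; the paper’s actual augmentation settings, network architecture, file names, and hyperparameter grid are not given on this page, so every concrete value here is illustrative.

```python
# Hedged sketch of the pipeline described in the abstract.
# Assumptions: librosa for audio I/O and MFCCs, scikit-learn's MLPClassifier
# as the ANN, and GridSearchCV for hyperparameter tuning. All parameter
# values and file names are placeholders, not the paper's settings.
import numpy as np
import librosa
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.neural_network import MLPClassifier

def augment(y, sr):
    """Simple augmented variants: additive noise, pitch shift, time stretch."""
    return [
        y + 0.005 * np.random.randn(len(y)),               # white noise
        librosa.effects.pitch_shift(y, sr=sr, n_steps=2),  # pitch shift
        librosa.effects.time_stretch(y, rate=1.1),         # time stretch
    ]

def mfcc_features(y, sr, n_mfcc=40):
    """Average MFCCs over time to obtain a fixed-length feature vector."""
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).mean(axis=1)

def build_dataset(files, labels, sr=22050):
    """Load each recording, augment it, and extract MFCC features."""
    X, t = [], []
    for path, label in zip(files, labels):
        y, _ = librosa.load(path, sr=sr)
        for clip in [y] + augment(y, sr):
            X.append(mfcc_features(clip, sr))
            t.append(label)
    return np.array(X), np.array(t)

# Hypothetical paths and labels; replace with the actual cry recordings
# and their classes (e.g. hunger, discomfort, sleeplessness).
files = ["cry_001.wav", "cry_002.wav"]
labels = ["hunger", "discomfort"]
X, t = build_dataset(files, labels)

# 75% train, 10% validation, 15% test, mirroring the split in the abstract.
X_train, X_rest, t_train, t_rest = train_test_split(X, t, test_size=0.25, stratify=t)
X_val, X_test, t_val, t_test = train_test_split(X_rest, t_rest, test_size=0.6)

# Grid search over a few illustrative ANN hyperparameters.
grid = GridSearchCV(
    MLPClassifier(max_iter=500),
    param_grid={
        "hidden_layer_sizes": [(64,), (128, 64)],
        "alpha": [1e-4, 1e-3],
        "learning_rate_init": [1e-3, 1e-2],
    },
    cv=3,
)
grid.fit(X_train, t_train)
print("validation accuracy:", grid.best_estimator_.score(X_val, t_val))
print("test accuracy:", grid.best_estimator_.score(X_test, t_test))
```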