EmoMatchSpanishDB: study of speech emotion recognition machine learning models in a new Spanish elicited database
In this paper we present a new speech emotion dataset in Spanish. The database was created using an elicited approach and is composed of fifty non-actors expressing Ekman’s six basic emotions of anger, disgust, fear, happiness, sadness, and surprise, plus a neutral tone. This article describes how...
Published in | Multimedia Tools and Applications Vol. 83; no. 5; pp. 13093-13112 |
---|---|
Main Authors | Garcia-Cuesta, Esteban; Salvador, Antonio Barba; Páez, Diego Gachet |
Format | Journal Article |
Language | English |
Published | New York: Springer US / Springer Nature B.V., 01.02.2024 |
Subjects | Audio data; Comparative studies; Crowdsourcing; Datasets; Emotion recognition; Emotions; Machine learning; Speech recognition |
ISSN | 1380-7501 (print); 1573-7721 (electronic) |
DOI | 10.1007/s11042-023-15959-w |
Abstract | In this paper we present a new speech emotion dataset in Spanish. The database was created using an elicited approach and is composed of fifty non-actors expressing Ekman’s six basic emotions of anger, disgust, fear, happiness, sadness, and surprise, plus a neutral tone. This article describes how the database was created, from the recording step to the crowdsourcing perception test. Crowdsourcing made it possible to statistically validate the emotion of each collected audio sample and to filter out noisy samples. Hence we obtained two datasets, EmoSpanishDB and EmoMatchSpanishDB. The first includes those recorded audios that reached consensus during the crowdsourcing process; the second selects from EmoSpanishDB only those audios whose perceived emotion also matches the originally elicited one. Last, we present a baseline comparative study of different state-of-the-art machine learning techniques, in terms of accuracy, precision, and recall, on both datasets. The results obtained for EmoMatchSpanishDB improve on those obtained for EmoSpanishDB and, therefore, we recommend following this methodology for the creation of emotional databases. |
---|---|
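The abstract describes two concrete steps: majority consensus over crowdsourced votes decides each sample's perceived emotion (samples without consensus are discarded as noisy), and the EmoMatch subset additionally requires the perceived emotion to match the elicited one, before baselines are compared on accuracy, precision, and recall. The Python sketch below illustrates that filtering and reporting logic; it is not the authors' code, and the field names, the agreement threshold, and the toy data are all hypothetical, with scikit-learn's standard metric functions standing in for the paper's actual evaluation.

```python
# Illustrative sketch only; not the authors' pipeline. Field names,
# the agreement threshold, and all data below are hypothetical.
from collections import Counter
from sklearn.metrics import accuracy_score, precision_score, recall_score

EMOTIONS = ["anger", "disgust", "fear", "happiness",
            "sadness", "surprise", "neutral"]

def consensus_label(votes, min_share=0.5):
    """Return the majority emotion if its share of votes exceeds the
    (assumed) agreement threshold; None marks the sample as noisy."""
    label, count = Counter(votes).most_common(1)[0]
    return label if count / len(votes) > min_share else None

# Toy records: each elicited recording carries its annotators' votes.
samples = [
    {"wav": "spk01_anger.wav", "elicited": "anger",
     "votes": ["anger", "anger", "anger", "disgust"]},
    {"wav": "spk02_fear.wav", "elicited": "fear",
     "votes": ["surprise", "surprise", "surprise", "fear"]},
    {"wav": "spk03_happy.wav", "elicited": "happiness",
     "votes": ["happiness", "neutral", "fear", "sadness"]},  # no majority
]

emo_spanish_db, emo_match_spanish_db = [], []
for s in samples:
    perceived = consensus_label(s["votes"])
    if perceived is None:
        continue  # filtered out: annotators did not agree
    emo_spanish_db.append((s["wav"], perceived))  # consensus reached
    if perceived == s["elicited"]:  # perceived emotion matches elicitation
        emo_match_spanish_db.append((s["wav"], perceived))

# Baseline reporting as named in the abstract: accuracy, precision, recall.
# y_true / y_pred stand in for a classifier's output on a held-out split.
y_true = ["anger", "fear", "neutral", "anger"]
y_pred = ["anger", "surprise", "neutral", "anger"]
print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred, labels=EMOTIONS,
                                    average="macro", zero_division=0))
print("recall   :", recall_score(y_true, y_pred, labels=EMOTIONS,
                                 average="macro", zero_division=0))
```

On these toy records the second sample reaches consensus on "surprise" and therefore enters EmoSpanishDB but not EmoMatchSpanishDB, which mirrors the relationship between the two datasets described above.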
Author | Garcia-Cuesta, Esteban; Salvador, Antonio Barba; Páez, Diego Gachet |
Author_xml | – sequence: 1 givenname: Esteban surname: Garcia-Cuesta fullname: Garcia-Cuesta, Esteban email: esteban.garcia@fi.upm.es organization: Departamento de Inteligencia Artificial, Universidad Politécnica de Madrid – sequence: 2 givenname: Antonio Barba surname: Salvador fullname: Salvador, Antonio Barba organization: Departamento de Ciencia y Computación, Universidad Europea de Madrid – sequence: 3 givenname: Diego Gachet surname: Páez fullname: Páez, Diego Gachet organization: Departamento de Automática, Universidad Francisco de Vitoria |
ContentType | Journal Article |
Copyright | The Author(s) 2023. This work is published under http://creativecommons.org/licenses/by/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License. |
DOI | 10.1007/s11042-023-15959-w |
Discipline | Engineering; Computer Science |
EISSN | 1573-7721 |
EndPage | 13112 |
GrantInformation_xml | – fundername: Universidad Politécnica de Madrid – fundername: Universidad Europea de Madrid grantid: 2019/UEM60 |
ISSN | 1380-7501 (print); 1573-7721 (electronic) |
IsDoiOpenAccess | true |
IsOpenAccess | true |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 5 |
Keywords | Affective analysis; Machine learning; Speech emotion recognition; EmoMatchSpanishDB; Language resources |
Language | English |
OpenAccessLink | https://doi.org/10.1007/s11042-023-15959-w |
PageCount | 20 |
PublicationDate | 2024-02-01 |
PublicationPlace | New York |
PublicationSubtitle | An International Journal |
PublicationTitle | Multimedia tools and applications |
PublicationTitleAbbrev | Multimed Tools Appl |
PublicationYear | 2024 |
Publisher | Springer US Springer Nature B.V |
StartPage | 13093 |
SubjectTerms | Audio data; Comparative studies; Computer Communication Networks; Computer Science; Crowdsourcing; Data Structures and Information Theory; Datasets; Emotion recognition; Emotions; Machine learning; Multimedia Information Systems; Special Purpose and Application-Based Systems; Speech recognition; State-of-the-art reviews |
URI | https://link.springer.com/article/10.1007/s11042-023-15959-w; https://www.proquest.com/docview/2918767267 |
Volume | 83 |