Face Recognition, Musical Appraisal, and Emotional Crossmodal Bias

Bibliographic Details
Published in: Frontiers in Behavioral Neuroscience, Vol. 11; p. 144
Main Authors: Invitto, Sara, Calcagnì, Antonio, Mignozzi, Arianna, Scardino, Rosanna, Piraino, Giulia, Turchi, Daniele, De Feudis, Irio, Brunetti, Antonio, Bevilacqua, Vitoantonio, de Tommaso, Marina
Format: Journal Article
Language: English
Published: Switzerland: Frontiers Research Foundation, 02.08.2017
Frontiers Media S.A.
ISSN: 1662-5153
DOI: 10.3389/fnbeh.2017.00144


More Information
Summary: Recent research on the crossmodal integration of visual and auditory perception suggests that evaluations of emotional information in one sensory modality may tend toward the emotional value generated in another sensory modality. This implies that the emotions elicited by musical stimuli can influence the perception of emotional stimuli presented in other sensory modalities through a top-down process. The aim of this work was to investigate how crossmodal perceptual processing influences emotional face recognition, and how any music-induced modulation of this processing depends on the subject's musical competence. Specifically, we investigated how emotional face recognition could be modulated by listening to music, and how this modulation varies with the subjective emotional salience of the music and the listener's musical competence. The sample consisted of 24 participants: 12 professional musicians and 12 university students (non-musicians). Participants performed an emotional go/no-go task whilst listening to music by Albéniz, Chopin, or Mozart. The target stimuli were emotionally neutral facial expressions. We examined the N170 event-related potential (ERP) and behavioral responses (i.e., motor reaction time to target recognition and musical emotional judgment). A linear mixed-effects model and a decision-tree learning technique were applied to N170 amplitudes and latencies. The main findings were that musicians' behavioral responses and N170 components were more affected by the emotional value of the music administered during the emotional go/no-go task, and that this bias was also apparent in responses to the non-target emotional faces. This suggests that emotional information arriving through multiple sensory channels activates a crossmodal integration process that depends on the stimuli's emotional salience and the listener's appraisal.
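
As a rough illustration of the statistical approach named in the abstract, the sketch below fits a linear mixed-effects model and a decision tree to hypothetical N170 amplitude data. The data file, column names, and factor coding are assumptions made for illustration; they are not materials from the paper.

# A minimal, hypothetical sketch (Python, statsmodels/scikit-learn): the data
# file and column names are assumptions, not taken from the paper.
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.tree import DecisionTreeRegressor

# Assumed long-format table: one row per trial, with the N170 amplitude in
# microvolts, the music condition, the listener group (musician vs.
# non-musician), and a participant identifier for the random effect.
df = pd.read_csv("n170_trials.csv")  # columns: amplitude, music, group, subject

# Linear mixed-effects model: fixed effects of music condition and musical
# competence plus their interaction, with a random intercept per participant.
lmm = smf.mixedlm("amplitude ~ music * group", data=df, groups=df["subject"]).fit()
print(lmm.summary())

# Decision-tree learner on the same predictors (one-hot encoded), echoing the
# abstract's decision-tree analysis of N170 amplitudes.
X = pd.get_dummies(df[["music", "group"]])
tree = DecisionTreeRegressor(max_depth=3).fit(X, df["amplitude"])
print(dict(zip(X.columns, tree.feature_importances_)))

The same formula could be refit with N170 latency as the response variable; both analyses here are schematic stand-ins for the authors' actual pipeline.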
Bibliography:
Edited by: Giuseppe Placidi, University of L'Aquila, Italy
Reviewed by: Michela Balconi, Università Cattolica del Sacro Cuore, Italy; Anna Esposito, Università degli Studi della Campania “Luigi Vanvitelli” Caserta, Italy