The Avatar’s Gist: How to Transfer Affective Components From Dynamic Walking to Static Body Postures
| Published in | Frontiers in Neuroscience, Vol. 16, p. 842433 |
|---|---|
| Main Authors | |
| Format | Journal Article |
| Language | English |
| Published | Switzerland: Frontiers Research Foundation / Frontiers Media S.A., 15.06.2022 |
| ISSN | 1662-4548; 1662-453X |
| DOI | 10.3389/fnins.2022.842433 |
Summary: Dynamic virtual representations of the human being can communicate a broad range of affective states through body movements, making them effective tools for studying emotion perception. However, the ability to model static body postures that preserve affective information remains fundamental in a broad spectrum of experimental settings exploring time-locked cognitive processes. We propose a novel automatic method for creating virtual affective body postures starting from kinematics data. Exploiting body features related to postural cues and movement velocity, we transferred the affective components of dynamic walking to static body postures of male and female virtual avatars. The results of two online experiments showed that participants coherently judged different valence and arousal levels in the avatars' body postures, highlighting the reliability of the proposed methodology. In addition, esthetic and postural cues made women more emotionally expressive than men. Overall, we provide a valid methodology for creating affective body postures of virtual avatars, which can be used within different virtual scenarios to better understand how we perceive the affective states of others.
Bibliography: Edited by: Anıl Ufuk Batmaz, Kadir Has University, Turkey. Reviewed by: Christos Mousas, Purdue University, United States; Dominik M. Endres, University of Marburg, Germany; Christian Graff, Université Grenoble Alpes, France. This article was submitted to Perception Science, a section of the journal Frontiers in Neuroscience.