Gender Recognition from 3D Shape Parameters
| Published in | Image Analysis and Processing. ICIAP 2022 Workshops, pp. 203-214 |
|---|---|
| Main Authors | , , |
| Format | Book Chapter |
| Language | English |
| Published | Cham: Springer International Publishing, 2022 |
| Series | Lecture Notes in Computer Science |
| Subjects | |
| ISBN | 3031133234; 9783031133237 |
| ISSN | 0302-9743; 1611-3349 |
| DOI | 10.1007/978-3-031-13324-4_18 |
| Summary: | Gender recognition from images is generally approached by extracting the salient visual features of the observed subject, either focusing on the facial appearance or by analyzing the full body. In real-world scenarios, image-based gender recognition approaches tend to fail, providing unreliable results. Face-based methods are compromised by environmental conditions, occlusions (glasses, masks, hair), and poor resolution. Using a full-body perspective leads to other downsides: clothing and hairstyle may not be discriminative enough for classification, and background clutter can be problematic. We propose a novel approach to body-shape-based gender classification. Our contribution consists in introducing the Skinned Multi-Person Linear model (SMPL) as the 3D human mesh representation. The proposed solution is robust to poor image resolution, and the number of features used for classification is small, making the recognition task computationally affordable, especially in the classification stage, where less complex learning architectures can be easily trained. The obtained shape parameters are fed to an SVM classifier, trained and tested on three different datasets: (i) FVG, containing videos of walking subjects; (ii) AMASS, collected by converting MOCAP data of people performing different activities into realistic 3D human meshes; and (iii) SURREAL, characterized by synthetic human body models. Additionally, we demonstrate that our approach leads to reliable results even when the parametric 3D mesh is extracted from a single image. Given the lack of benchmarks in this area, we also trained and tested a pre-trained ResNet50 on the FVG dataset, comparing our model-based method with an image-based approach. |
|---|---|
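To make the pipeline described in the summary concrete, the sketch below trains an SVM on SMPL shape parameters. This is a minimal sketch, not the authors' implementation: the 10-dimensional beta vectors and gender labels here are synthetic stand-ins for features that, in the paper, would come from fitting SMPL to FVG, AMASS, or SURREAL data, and scikit-learn's SVC is assumed as the classifier.

```python
# Minimal sketch: SMPL shape parameters (betas) as features for a
# binary gender SVM. Requires numpy and scikit-learn.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

# Hypothetical stand-in for SMPL fitting: in practice each 10-D beta
# vector would be recovered by fitting SMPL to a mesh or a single image.
rng = np.random.default_rng(0)
betas = rng.normal(size=(500, 10))      # one 10-D shape vector per subject
labels = rng.integers(0, 2, size=500)   # toy binary gender labels

X_train, X_test, y_train, y_test = train_test_split(
    betas, labels, test_size=0.2, random_state=0)

# The feature vector is tiny, so a simple kernel SVM trains cheaply,
# which is the computational advantage the summary points out.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```

With real SMPL betas in place of the synthetic arrays, the same pipeline applies unchanged; only the data-loading step differs per dataset.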