Towards Geometric-Photometric Joint Alignment for Facial Mesh Registration
| Main Authors | |
|---|---|
| Format | Journal Article |
| Language | English |
| Published | 04.03.2024 |
| DOI | 10.48550/arxiv.2403.02629 |
| Summary: | This paper presents a Geometric-Photometric Joint Alignment (GPJA) method, which aligns discrete human expressions at pixel-level accuracy by combining geometric and photometric information. Common practices for registering human heads typically involve aligning landmarks with facial template meshes using geometry processing approaches, but often overlook dense pixel-level photometric consistency. This oversight leads to inconsistent texture parametrization across different expressions, hindering the creation of the topologically consistent head meshes widely used in movies and games. GPJA overcomes this limitation by leveraging differentiable rendering to align vertices with target expressions, achieving joint alignment in both geometry and photometric appearance automatically, without requiring semantic annotation or pre-aligned meshes for training. It features a holistic rendering alignment mechanism and a multiscale regularized optimization for robust convergence on large deformations. The method uses derivatives at vertex positions for supervision and employs a gradient-based algorithm that guarantees smoothness and avoids topological artifacts during geometry evolution. Experimental results demonstrate faithful alignment under various expressions, surpassing conventional non-rigid ICP-based methods and a state-of-the-art deep learning-based method. In practice, our method generates meshes of the same subject across diverse expressions, all with the same texture parametrization. This consistency benefits face animation, re-parametrization, and other batch operations for face modeling and its applications, with enhanced efficiency. |
|---|---|
| DOI: | 10.48550/arxiv.2403.02629 |
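The abstract's core idea — a differentiable renderer lets a pixel-level photometric loss drive vertex positions directly, with a smoothness regularizer stabilizing the geometry evolution — can be sketched in a toy 2D form. Everything below is illustrative only: the Gaussian-splat "renderer", the uniform loop Laplacian, and all parameter values are assumptions for the sketch, not the paper's actual implementation.

```python
import numpy as np

def splat_render(verts, grid, sigma=0.1):
    """Toy differentiable 'renderer': each 2D vertex splats a Gaussian blob.

    grid: (P, 2) pixel centers; verts: (V, 2). Returns an image of shape (P,).
    """
    d2 = ((grid[:, None, :] - verts[None, :, :]) ** 2).sum(-1)    # (P, V)
    return np.exp(-d2 / (2 * sigma ** 2)).sum(axis=1)

def photo_loss_and_grad(verts, grid, target, sigma=0.1):
    """Photometric L2 loss and its analytic gradient w.r.t. vertex positions."""
    d = grid[:, None, :] - verts[None, :, :]                      # (P, V, 2)
    w = np.exp(-(d ** 2).sum(-1) / (2 * sigma ** 2))              # (P, V)
    resid = w.sum(axis=1) - target                                # (P,)
    loss = 0.5 * (resid ** 2).sum()
    # d(image_p)/d(vert_v) = w_pv * (p - v) / sigma^2
    grad = (resid[:, None, None] * w[:, :, None] * d).sum(axis=0) / sigma ** 2
    return loss, grad

def joint_align(verts, grid, target, steps=300, lr=1e-3, lam=0.5):
    """Gradient descent on photometric loss + Laplacian smoothness (assumed form)."""
    V = len(verts)
    S = np.roll(np.eye(V), 1, axis=0)        # cyclic shift: vertices form a loop
    L = np.eye(V) - 0.5 * (S + S.T)          # uniform loop Laplacian
    for _ in range(steps):
        _, g = photo_loss_and_grad(verts, grid, target)
        g = g + lam * (L.T @ L @ verts)      # smoothness regularizer gradient
        verts = verts - lr * g
    return verts

# Demo: recover a circle of vertices from its rendered image after an offset.
n = 16
xs = (np.arange(n) + 0.5) / n
grid = np.stack(np.meshgrid(xs, xs), axis=-1).reshape(-1, 2)
ang = np.linspace(0, 2 * np.pi, 8, endpoint=False)
tgt_verts = 0.5 + 0.25 * np.stack([np.cos(ang), np.sin(ang)], axis=1)
target = splat_render(tgt_verts, grid)
init = tgt_verts + np.array([0.08, -0.05])   # perturbed starting "mesh"
aligned = joint_align(init.copy(), grid, target)
```

Note the design choice this toy shares with the abstract's description: the supervision signal is a dense image residual differentiated back to vertex positions (no landmarks or pre-aligned meshes), while the Laplacian term keeps the evolving geometry smooth.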