Towards Geometric–Photometric Joint Alignment for facial mesh registration


Bibliographic Details
Published in: Computers & Graphics, Vol. 128, p. 104214
Main Authors: Wang, Xizhi; Wang, Yaxiong; Li, Mengjian
Format: Journal Article
Language: English
Published: Elsevier Ltd, 01.05.2025
ISSN: 0097-8493
DOI: 10.1016/j.cag.2025.104214


Summary: This paper presents a Geometric-Photometric Joint Alignment (GPJA) method, which aligns discrete human expressions at pixel-level accuracy by combining geometric and photometric information. Common practices for registering human heads typically align landmarks with facial template meshes using geometry processing approaches, but often overlook dense pixel-level photometric consistency. This oversight leads to inconsistent texture parametrization across different expressions, hindering the creation of topologically consistent head meshes widely used in movies and games. GPJA overcomes this limitation by leveraging differentiable rendering to align vertices with target expressions, achieving joint alignment in both geometry and photometric appearance automatically, without requiring semantic annotation or pre-aligned meshes for training. It features a holistic rendering alignment mechanism and a multiscale regularized optimization for robust convergence under large deformation. The method uses derivatives at vertex positions for supervision and employs a gradient-based algorithm that guarantees smoothness and avoids topological artifacts during geometry evolution. Experimental results demonstrate faithful alignment under various expressions, surpassing conventional non-rigid ICP-based methods and a state-of-the-art deep learning-based method. In practice, the method generates meshes of the same subject across diverse expressions, all with the same texture parametrization. This consistency benefits face animation, re-parametrization, and other batch operations for face modeling applications, with enhanced efficiency.
Highlights:
•Novel method achieving geometric-photometric joint alignment.
•Holistic rendering alignment and multiscale optimization for robust results.
•Creation of topology-consistent facial models without semantic annotations.
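The abstract's core idea, photometric supervision through a differentiable renderer with gradients flowing back to vertex positions under a smoothness regularizer, can be illustrated with a minimal sketch. The code below is not the authors' GPJA implementation: it is a toy example in PyTorch using a Gaussian point-splat renderer, a crude edge-length regularizer, and random placeholder data; splat_render, edge_smoothness, and the target image are hypothetical stand-ins.

    import torch

    def splat_render(verts2d, colors, H=64, W=64, sigma=1.5):
        """Differentiably splat colored vertices onto an H x W image (toy stand-in for a renderer)."""
        ys = torch.linspace(0, 1, H).view(H, 1, 1)
        xs = torch.linspace(0, 1, W).view(1, W, 1)
        # Squared distance from every pixel to every vertex, turned into soft Gaussian weights.
        d2 = (ys - verts2d[:, 1]) ** 2 + (xs - verts2d[:, 0]) ** 2   # (H, W, V)
        w = torch.softmax(-d2 / (2 * (sigma / H) ** 2), dim=-1)      # soft vertex assignment per pixel
        return w @ colors                                            # (H, W, 3) rendered image

    def edge_smoothness(verts2d, edges):
        """Penalize stretched edges so the vertices deform smoothly (crude regularizer)."""
        diff = verts2d[edges[:, 0]] - verts2d[edges[:, 1]]
        return (diff ** 2).sum()

    # Toy data: random vertex positions with fixed colors, a chain of edges, and a random target image
    # standing in for a photograph of the target expression.
    verts = torch.rand(16, 2, requires_grad=True)
    colors = torch.rand(16, 3)
    edges = torch.stack([torch.arange(15), torch.arange(1, 16)], dim=1)
    target = torch.rand(64, 64, 3)

    opt = torch.optim.Adam([verts], lr=1e-2)
    for step in range(200):
        opt.zero_grad()
        rendered = splat_render(verts, colors)
        loss = ((rendered - target) ** 2).mean() + 1e-3 * edge_smoothness(verts, edges)
        loss.backward()   # photometric gradients flow through the renderer to the vertex positions
        opt.step()

In the actual method, the toy splat renderer would be replaced by a full differentiable rasterizer of the textured head mesh, and the optimization would run over multiple scales with stronger regularization, but the supervision pattern, an image-space loss differentiated with respect to vertex positions, is the same.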