BlendFields: Few-Shot Example-Driven Facial Modeling
Jun 8, 2023
Kacper Kania
Stephan J. Garbin
Andrea Tagliasacchi
Virginia Estellers
Kwang Moo Yi
Julien Valentin
Tomasz Trzciński
Marek Kowalski
Abstract
Generating faithful visualizations of human faces requires capturing both coarse and fine-level details of the face geometry and appearance. Existing methods are either data-driven, requiring an extensive corpus of data that is not publicly available to the research community, or fail to capture fine details because they rely on geometric face models whose mesh discretization and linear deformations are designed to represent only coarse geometry and cannot reproduce fine-grained texture detail. We introduce a method that bridges this gap by drawing inspiration from traditional computer graphics techniques. Unseen expressions are modeled by blending appearance from a sparse set of extreme poses. This blending is performed by measuring local volumetric changes in those expressions and locally reproducing their appearance whenever a similar expression is performed at test time. We show that our method generalizes to unseen expressions, adding fine-grained effects on top of smooth volumetric deformations of a face, and demonstrate how it generalizes beyond faces.
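To make the blending idea in the abstract concrete, below is a minimal, hypothetical sketch of expression-driven blending: weights for a sparse set of captured extreme expressions are computed by comparing local volumetric-change features of a test expression against those of the exemplars, and then used to mix per-exemplar appearance. All names, shapes, and the softmax-over-distances weighting are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def blend_weights(test_strain, exemplar_strains, temperature=0.1):
    """Per-vertex blend weights from local volumetric-change similarity.

    test_strain:      (V,) local volume change of the test expression per vertex.
    exemplar_strains: (K, V) local volume change of each extreme expression.
    Returns:          (K, V) weights summing to 1 over K at every vertex.
    """
    # Distance between the test expression and each exemplar, per vertex.
    dist = np.abs(exemplar_strains - test_strain[None, :])      # (K, V)
    # Softmax over exemplars: similar local deformations get higher weight.
    logits = -dist / temperature
    logits -= logits.max(axis=0, keepdims=True)                 # numerical stability
    w = np.exp(logits)
    return w / w.sum(axis=0, keepdims=True)

def blend_appearance(weights, exemplar_colors):
    """Mix per-exemplar appearance (e.g., fine-detail color corrections).

    weights:         (K, V) blend weights.
    exemplar_colors: (K, V, 3) per-vertex appearance from each extreme expression.
    Returns:         (V, 3) blended appearance for the test expression.
    """
    return (weights[..., None] * exemplar_colors).sum(axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    K, V = 4, 1000                                   # 4 extreme expressions, 1000 vertices
    exemplar_strains = rng.normal(size=(K, V))
    exemplar_colors = rng.uniform(size=(K, V, 3))
    # Test expression close to exemplar 1, so its appearance should dominate there.
    test_strain = exemplar_strains[1] + 0.05 * rng.normal(size=V)
    w = blend_weights(test_strain, exemplar_strains)
    print(blend_appearance(w, exemplar_colors).shape)  # (1000, 3)
```

The key design choice illustrated here is that blending is driven by local deformation similarity rather than by a global expression code, so fine appearance detail is reproduced only in regions that deform like the corresponding extreme expression.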
Publication
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2023