One-shot Facial Expression Reenactment using 3D Morphable Models
Date
2022
Authors
Vei, Roman
Abstract
Recent advances in generative adversarial networks have shown promising results
on the problem of head reenactment, which aims to generate novel images with
altered pose and emotion while preserving the identity of a human head from a
single photo. Current approaches have limitations that make them unsuitable for
real-world applications: most algorithms are computationally expensive, offer no
clear tools for manual image manipulation, require audio, or need multiple input
images to generate novel images.
Our method addresses the single-shot face reenactment problem with an end-to-end
algorithm. The proposed method uses head 3D morphable model (3DMM) parameters to
encode identity, pose, and expression, so the pose and emotion of a person in an
image are changed by manipulating its 3DMM parameters. Our pipeline consists of a
face mesh prediction network and a GAN-based renderer. The predictor is a neural
network with a simple encoder architecture that regresses 3D mesh parameters. The
renderer is a GAN with warping and rendering submodules that renders an image from
a single source image and the 3DMM parameters of the target image.
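
To make the two-stage design concrete, below is a minimal PyTorch-style sketch of
a predictor regressing 3DMM parameters and a renderer conditioned on a source
image and target parameters. All module names, parameter dimensions, and layer
choices (MeshPredictor, Renderer, n_id, n_exp, n_pose, the placeholder warping
and rendering submodules) are illustrative assumptions, not the thesis
implementation.

# Sketch of the predictor + renderer pipeline (assumed names and shapes).
import torch
import torch.nn as nn

class MeshPredictor(nn.Module):
    """Encoder that regresses 3DMM parameters (identity, expression, pose) from an image."""
    def __init__(self, n_id=80, n_exp=64, n_pose=6):
        super().__init__()
        self.backbone = nn.Sequential(              # stand-in for a CNN encoder
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, n_id + n_exp + n_pose)
        self.splits = (n_id, n_exp, n_pose)

    def forward(self, img):
        params = self.head(self.backbone(img))
        identity, expression, pose = torch.split(params, self.splits, dim=1)
        return identity, expression, pose

class Renderer(nn.Module):
    """GAN generator with placeholder warping and rendering submodules: takes a
    single source image plus target 3DMM parameters and returns a reenacted image."""
    def __init__(self, n_params=150):
        super().__init__()
        self.warp = nn.Conv2d(3, 3, 3, padding=1)            # placeholder warping module
        self.render = nn.Sequential(                          # placeholder rendering module
            nn.Conv2d(3 + 1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
        )
        self.param_proj = nn.Linear(n_params, 64 * 64)        # broadcast params as a feature map

    def forward(self, src_img, target_params):
        b = src_img.shape[0]
        warped = self.warp(src_img)
        param_map = self.param_proj(target_params).view(b, 1, 64, 64)
        return self.render(torch.cat([warped, param_map], dim=1))

# Usage: predict 3DMM parameters from a driving image, edit pose/expression,
# then render the source identity under the new parameters.
predictor, renderer = MeshPredictor(), Renderer()
src = torch.rand(1, 3, 64, 64)          # source (identity) image
drv = torch.rand(1, 3, 64, 64)          # driving image
identity, expression, pose = predictor(drv)
pose = pose + torch.tensor([[0.1, 0.0, 0.0, 0.0, 0.0, 0.0]])  # e.g. rotate the head slightly
target_params = torch.cat([identity, expression, pose], dim=1)
out = renderer(src, target_params)       # reenacted image, shape (1, 3, 64, 64)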
This work proposes a novel head reenactment framework that is computationally
efficient and uses 3DMM parameters that are easy to alter, making the proposed
method applicable in real-life scenarios. To our knowledge, it is the first
approach that simultaneously solves both problems, 3DMM parameter prediction and
face reenactment, and benefits from both.
Citation
Vei, Roman. One-shot Facial Expression Reenactment using 3D Morphable Models / Roman Vei; Supervisors: Eugene Khvedchenya, Orest Kupyn; Ukrainian Catholic University, Faculty of Applied Sciences, Department of Computer Sciences. – Lviv 2022. – 47 p.