StyleRig: Rigging StyleGAN for 3D Control over Portrait Images
Abstract
StyleGAN generates photorealistic portrait images of faces with eyes, teeth, hair and context (neck, shoulders, background), but lacks a rig-like control over semantic face parameters that are interpretable in 3D, such as face pose, expressions, and scene illumination. Three-dimensional morphable face models (3DMMs), on the other hand, offer control over these semantic parameters, but lack photorealism when rendered and only model the face interior, not other parts of a portrait image (hair, mouth interior, background). We present the first method to provide a face rig-like control over a pretrained and fixed StyleGAN via a 3DMM. A new rigging network, RigNet, is trained between the 3DMM's semantic parameters and StyleGAN's input. The network is trained in a self-supervised manner, without the need for manual annotations. At test time, our method generates portrait images with the photorealism of StyleGAN and provides explicit control over the 3D semantic parameters of the face.
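To illustrate the idea of a rigging network sitting between 3DMM parameters and a frozen StyleGAN, the following is a minimal sketch of a RigNet-style mapping module. The layer sizes, the additive-offset design, the assumed w+ latent layout, and the parameter dimensions are illustrative assumptions, not the paper's exact architecture.

```python
# Minimal sketch of a RigNet-style mapping network (assumed architecture;
# dimensions and the StyleGAN latent layout are illustrative).
import torch
import torch.nn as nn


class RigNet(nn.Module):
    """Maps a StyleGAN latent code w and 3DMM semantic parameters p
    (e.g. pose, expression, illumination) to an edited latent w'."""

    def __init__(self, latent_dim: int = 512, num_layers: int = 18, param_dim: int = 33):
        super().__init__()
        self.num_layers = num_layers
        # One small per-layer MLP, conditioned on the 3DMM parameters.
        self.encoders = nn.ModuleList(
            [nn.Linear(latent_dim, 32) for _ in range(num_layers)]
        )
        self.decoders = nn.ModuleList(
            [nn.Linear(32 + param_dim, latent_dim) for _ in range(num_layers)]
        )

    def forward(self, w: torch.Tensor, p: torch.Tensor) -> torch.Tensor:
        # w: (batch, num_layers, latent_dim), p: (batch, param_dim)
        deltas = []
        for i in range(self.num_layers):
            h = torch.relu(self.encoders[i](w[:, i]))
            deltas.append(self.decoders[i](torch.cat([h, p], dim=-1)))
        # Predict an additive offset so the identity mapping is easy to learn.
        return w + torch.stack(deltas, dim=1)


# Usage: edit a latent code with target 3DMM parameters, then feed w_edit
# to the pretrained, fixed StyleGAN synthesis network.
rig = RigNet()
w = torch.randn(1, 18, 512)   # StyleGAN w+ latent (assumed layout)
p = torch.randn(1, 33)        # target pose/expression/illumination parameters
w_edit = rig(w, p)
```

In the self-supervised setting described in the abstract, such a network would be trained without manual annotations; the pretrained StyleGAN generator stays fixed and only the mapping module is optimized.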
Downloads
Citation
@inproceedings{tewari2020stylerig,
  title     = {StyleRig: Rigging StyleGAN for 3D Control over Portrait Images},
  author    = {Tewari, Ayush and Elgharib, Mohamed and Bharaj, Gaurav and Bernard, Florian and Seidel, Hans-Peter and P{\'e}rez, Patrick and Zollh{\"o}fer, Michael and Theobalt, Christian},
  booktitle = {{IEEE} Conference on Computer Vision and Pattern Recognition ({CVPR})},
  month     = {June},
  organization = {{IEEE}},
  year      = {2020},
}