InverseFaceNet: Deep Single-Shot Inverse Face Rendering from a Single Image
CVPR 2018
Hyeongwoo Kim¹, Michael Zollhöfer¹,², Ayush Tewari¹, Justus Thies³, Christian Richardt⁴, Christian Theobalt¹
¹MPI Informatics   ²Stanford University   ³Technical University of Munich   ⁴University of Bath
Abstract
We introduce InverseFaceNet, a deep convolutional inverse rendering framework for faces that jointly estimates facial pose, shape, expression, reflectance and illumination from a single input image in a single shot. Estimating all of these parameters from just one image enables advanced editing of a single face photograph, such as appearance editing and relighting. Previous learning-based face reconstruction approaches do not jointly recover all of these dimensions, or are severely limited in visual quality. In contrast, we recover high-quality facial pose, shape, expression, reflectance and illumination using a deep neural network trained on a large, synthetically created dataset. Our approach builds on a novel loss function that measures model-space similarity directly in parameter space and significantly improves reconstruction accuracy. In addition, we propose an analysis-by-synthesis breeding approach that iteratively updates the synthetic training corpus based on the distribution of real-world images, and we demonstrate that this strategy outperforms networks trained purely on synthetic data. Finally, we show high-quality reconstructions and compare our approach to several state-of-the-art methods.
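To make the model-space loss idea concrete, the sketch below compares predicted and ground-truth parameters after mapping the linear-model coefficients back to per-vertex positions and albedo, instead of penalizing raw coefficients with a plain L2. This is a minimal sketch under our own assumptions of a linear 3D morphable model, not the authors' implementation; all tensor names, dictionary keys, dimensions and basis matrices below are hypothetical stand-ins.

import torch

def model_space_loss(pred, target, shape_basis, expr_basis, refl_basis):
    """Parameter-space loss weighted through hypothetical linear 3DMM bases.

    pred/target: dicts of (batch, dim) tensors with keys 'shape',
    'expression', 'reflectance', 'pose', 'illumination'.
    *_basis: (3 * num_vertices, dim) linear-model basis matrices.
    """
    # Geometry: shape and expression coefficients jointly displace vertices,
    # so compare their combined effect in model (vertex) space.
    d_geom = (pred['shape'] - target['shape']) @ shape_basis.T \
           + (pred['expression'] - target['expression']) @ expr_basis.T
    loss = d_geom.pow(2).sum(dim=1).mean()
    # Appearance: reflectance coefficients map linearly to per-vertex albedo.
    d_refl = (pred['reflectance'] - target['reflectance']) @ refl_basis.T
    loss = loss + d_refl.pow(2).sum(dim=1).mean()
    # Pose and illumination have no vertex basis; compare coefficients directly.
    for key in ('pose', 'illumination'):
        loss = loss + (pred[key] - target[key]).pow(2).sum(dim=1).mean()
    return loss

# Toy usage with random data; the dimensions are assumptions (e.g. 27
# illumination values for three bands of RGB spherical harmonics).
if __name__ == '__main__':
    B, N = 4, 100
    bases = [torch.randn(3 * N, k) for k in (80, 64, 80)]
    make = lambda: {'shape': torch.randn(B, 80), 'expression': torch.randn(B, 64),
                    'reflectance': torch.randn(B, 80), 'pose': torch.randn(B, 6),
                    'illumination': torch.randn(B, 27)}
    print(model_space_loss(make(), make(), *bases))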
BibTeX
@inproceedings{kim2018inverse,
title = {InverseFaceNet: Deep Single-Shot Inverse Face Rendering from a Single Image},
author = {Kim, Hyeongwoo and Zollh{\"o}fer, Michael and Tewari, Ayush and Thies, Justus and Richardt, Christian and Theobalt, Christian},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
pages = {},
year = {2018}
}