FML: Face Model Learning from Videos

CVPR 2019 (Oral)


A. Tewari 1   F. Bernard 1   P. Garrido 2   G. Bharaj 2   M. Elgharib 1   H.-P. Seidel 1   P. Pérez 3   M. Zollhöfer 4   C. Theobalt 1
1 MPI Informatics, Saarland Informatics Campus   2 Technicolor   3 Valeo.ai   4 Stanford University




Full Video · More Qualitative Results · Poster · Talk


Update: The quantitative numbers reported in the paper have changed slightly due to an error in the data loading script. Please check the PDF version on this page for the updated numbers. Note that the new numbers do not change any claims regarding comparisons to the state of the art.

Abstract

Monocular image-based 3D reconstruction of faces is a long-standing problem in computer vision. Since image data is a 2D projection of a 3D face, the resulting depth ambiguity makes the problem ill-posed. Most existing methods rely on data-driven priors that are built from limited 3D face scans. In contrast, we propose multi-frame video-based self-supervised training of a deep network that (i) learns a face identity model both in shape and appearance while (ii) jointly learning to reconstruct 3D faces. Our face model is learned using only corpora of in-the-wild video clips collected from the Internet. This virtually endless source of training data enables learning of a highly general 3D face model. To achieve this, we propose a novel multi-frame consistency loss that ensures consistent shape and appearance across multiple frames of a subject's face, thus minimizing depth ambiguity. At test time, we can use an arbitrary number of frames, allowing both monocular and multi-frame reconstruction.
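To give a flavor of the multi-frame consistency idea, the sketch below shows one simple way such a term could be written: identity codes (shape and appearance) predicted independently for several frames of the same subject are pulled toward their per-clip mean, while per-frame expression and pose remain unconstrained. This is a minimal, hypothetical PyTorch sketch for illustration only, not the authors' implementation; all names (multi_frame_consistency_loss, shape_codes, appearance_codes) are assumptions.

import torch

def multi_frame_consistency_loss(shape_codes, appearance_codes):
    # Hypothetical sketch of a multi-frame consistency term.
    # shape_codes, appearance_codes: tensors of shape (F, D) holding the
    # per-frame identity codes predicted for F frames of the same subject.
    # The term penalizes deviation of each frame's code from the clip mean,
    # encouraging a single shared identity across the clip.
    shape_mean = shape_codes.mean(dim=0, keepdim=True)        # (1, D)
    app_mean = appearance_codes.mean(dim=0, keepdim=True)     # (1, D)
    loss_shape = ((shape_codes - shape_mean) ** 2).mean()
    loss_app = ((appearance_codes - app_mean) ** 2).mean()
    return loss_shape + loss_app

# Toy usage: 4 frames of one subject, 64-dimensional identity codes.
if __name__ == "__main__":
    shape = torch.randn(4, 64, requires_grad=True)
    appearance = torch.randn(4, 64, requires_grad=True)
    loss = multi_frame_consistency_loss(shape, appearance)
    loss.backward()
    print(float(loss))

In the actual method this consistency term would be combined with per-frame reconstruction (e.g., photometric and landmark) losses during self-supervised training.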


Paper Supplemental


Bibtex

 
@InProceedings{tewari2019fml,
title={FML: Face Model Learning from Videos},
author={Tewari, Ayush and Bernard, Florian and Garrido, Pablo and Bharaj, Gaurav and Elgharib, Mohamed and Seidel, Hans-Peter and P{\'e}rez, Patrick and Zollh{\"o}fer, Michael and Theobalt, Christian},
booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
pages={10812--10822},
year={2019}
}