VoRF: Volumetric Relightable Faces
1Max Planck Institute for Informatics, Saarland Informatics Campus
2Friedrich-Alexander-Universität Erlangen-Nürnberg
3IST-Austria
4Harvard University
5MIT CSAIL
Best Paper Award Honourable Mention
Abstract
Portrait viewpoint and illumination editing is an important problem with several applications in VR/AR, movies, and photography. Comprehensive knowledge of geometry and illumination is critical for obtaining photorealistic results. Existing methods cannot explicitly model the head in 3D while handling both viewpoint and illumination editing from a single image. In this paper, we propose VoRF, a novel approach that can take even a single portrait image as input and relight the human head under novel illuminations that can be viewed from arbitrary viewpoints. VoRF represents a human head as a continuous volumetric field and learns a prior model of human heads using a coordinate-based MLP with separate latent spaces for identity and illumination. The prior model is learnt in an auto-decoder manner over a diverse class of head shapes and appearances, allowing VoRF to generalize to novel test identities from a single input image. Additionally, VoRF has a reflectance MLP that uses the intermediate features of the prior model to render One-Light-at-a-Time (OLAT) images under novel views. We synthesize novel illuminations by combining these OLAT images with target environment maps. Qualitative and quantitative evaluations demonstrate the effectiveness of VoRF for relighting and novel view synthesis, even when applied to unseen subjects under uncontrolled illumination.
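As a rough illustration of the design the abstract describes, the sketch below (not the authors' implementation) shows a coordinate-based prior MLP conditioned on identity and illumination latents, a reflectance MLP that maps the prior's intermediate features and a light direction to OLAT radiance, and the final relighting step, which is a weighted sum of OLAT images with weights taken from the target environment map. All module names, layer widths, and the assumption that the environment map is pre-sampled at the OLAT light directions are hypothetical.

import torch
import torch.nn as nn

# Assumed latent/feature sizes (not from the paper).
D_ID, D_ILL, D_FEAT = 64, 16, 128

class PriorMLP(nn.Module):
    """Coordinate-based prior: (x, z_id, z_ill) -> density + features."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + D_ID + D_ILL, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 1 + D_FEAT),  # density + intermediate features
        )

    def forward(self, x, z_id, z_ill):
        # x: (N, 3) sample points; z_id: (D_ID,); z_ill: (D_ILL,)
        h = torch.cat([x,
                       z_id.expand(x.shape[0], -1),
                       z_ill.expand(x.shape[0], -1)], dim=-1)
        out = self.net(h)
        return out[..., :1], out[..., 1:]  # sigma, features

class ReflectanceMLP(nn.Module):
    """Maps prior features and a light direction to OLAT radiance."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(D_FEAT + 3, 128), nn.ReLU(),
            nn.Linear(128, 3),  # RGB radiance under that single light
        )

    def forward(self, feats, light_dir):
        # feats: (N, D_FEAT); light_dir: (3,)
        h = torch.cat([feats, light_dir.expand(feats.shape[0], -1)], dim=-1)
        return self.net(h)

def relight(olat_images, env_weights):
    """Relighting is linear in the lights: a novel illumination is a
    weighted sum of OLAT renderings.

    olat_images: (L, H, W, 3) one rendering per light.
    env_weights: (L, 3) RGB of the target environment map sampled at
                 each OLAT light direction (assumed precomputed).
    """
    return torch.einsum('lhwc,lc->hwc', olat_images, env_weights)

In an auto-decoder setup, the latents z_id and z_ill are optimized per subject rather than produced by an encoder, which is what allows fitting a novel test identity from a single image by optimizing its latents against the frozen prior.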
Citation
@inproceedings{prao2022vorf,
  title     = {VoRF: Volumetric Relightable Faces},
  author    = {Rao, Pramod and {B R}, Mallikarjun and Fox, Gereon and Weyrich, Tim and Bickel, Bernd and Seidel, Hans-Peter and Pfister, Hanspeter and Matusik, Wojciech and Tewari, Ayush and Theobalt, Christian and Elgharib, Mohamed},
  booktitle = {British Machine Vision Conference (BMVC)},
  year      = {2022}
}
Acknowledgments
This work was supported by the ERC Consolidator Grant 4DReply (770784).