Abstract

Creating a digital human avatar that is relightable, drivable, and photorealistic is a challenging and important problem in vision and graphics. Humans are highly articulated, creating pose-dependent appearance effects such as self-shadows and wrinkles, and both skin and clothing require complex, spatially varying BRDF models. While recent human relighting approaches can recover plausible material-light decompositions from multi-view video, they do not generalize to novel poses and still suffer from visual artifacts. To address this, we propose Relightable Neural Actor, the first video-based method for learning a photorealistic neural human model that can be relit, allows appearance editing, and can be controlled by arbitrary skeletal poses. Importantly, learning our human avatar requires only a multi-view recording of the human under a known but static lighting condition. To achieve this, we represent the geometry of the actor with a drivable density field that models pose-dependent clothing deformations and provides a mapping between 3D and UV space, where normals, visibility, and materials are encoded. To evaluate our approach in real-world scenarios, we collect a new dataset of four actors recorded under different lighting conditions, indoors and outdoors, providing the first benchmark of its kind for human relighting and demonstrating state-of-the-art relighting results for novel human poses.
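
For illustration, the sketch below shows how such a 3D-to-UV mapping can be used to fetch per-point materials along a camera ray. It is a minimal NumPy sketch under assumed interfaces: to_uv (the correspondence provided by the drivable density field) and the UV textures are hypothetical placeholders, and nearest-neighbor sampling stands in for whatever interpolation the actual method uses.

import numpy as np

def sample_materials(x_posed, to_uv, albedo_tex, normal_tex, roughness_tex):
    # x_posed:  (N, 3) points sampled along camera rays in posed space
    # to_uv:    hypothetical callable mapping posed 3D points to (N, 2)
    #           UV coordinates, standing in for the correspondence given
    #           by the drivable density field
    # *_tex:    UV textures storing albedo (H, W, 3), normals (H, W, 3),
    #           and roughness (H, W)
    uv = to_uv(x_posed)                                    # (N, 2) in [0, 1]
    ij = (uv * (np.array(albedo_tex.shape[:2]) - 1)).astype(int)
    albedo = albedo_tex[ij[:, 0], ij[:, 1]]                # (N, 3)
    normal = normal_tex[ij[:, 0], ij[:, 1]]                # (N, 3)
    roughness = roughness_tex[ij[:, 0], ij[:, 1]]          # (N,)
    return albedo, normal, roughness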

Main Video

Proposed Method with Intrinsic Decomposition
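
To make the role of the intrinsic decomposition concrete, the sketch below shades a single UV texel under a discretized environment map. It assumes a purely Lambertian BRDF and uniformly sampled light directions for simplicity; the actual method uses a more expressive, spatially varying reflectance model, and all names here are illustrative.

import numpy as np

def relight_texel(albedo, normal, visibility, env_dirs, env_radiance):
    # albedo:       (3,) diffuse reflectance stored in UV space
    # normal:       (3,) unit surface normal stored in UV space
    # visibility:   (L,) per-light visibility in [0, 1] (soft self-shadows)
    # env_dirs:     (L, 3) unit directions of the environment samples
    # env_radiance: (L, 3) RGB radiance of each environment sample
    cos_theta = np.clip(env_dirs @ normal, 0.0, None)      # (L,)
    d_omega = 4.0 * np.pi / len(env_dirs)                  # uniform solid angle
    # Discretized Lambertian rendering equation:
    # c = albedo / pi * sum_l L_l * V_l * max(0, n . w_l) * d_omega
    incoming = env_radiance * visibility[:, None] * cos_theta[:, None]
    return albedo / np.pi * incoming.sum(axis=0) * d_omega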

OLAT Rendering
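
One-light-at-a-time (OLAT) renders are a natural test of a relightable model because light transport is linear in the illumination: an image under any environment map is a weighted sum of the per-light OLAT images. A minimal sketch of that composition follows; the array shapes and names are assumptions, not the paper's interface.

import numpy as np

def relight_from_olat(olat_images, env_weights):
    # olat_images: (L, H, W, 3) renders of the actor, one per single light
    # env_weights: (L, 3) RGB intensity of each light in the target environment
    # Linearity of light transport: sum the OLAT images, weighted per channel.
    return np.einsum('lhwc,lc->hwc', olat_images, env_weights)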

Self-shadowing
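
Self-shadowing requires knowing how much of each light a surface point actually receives. One standard way to obtain such visibility from a density field, shown here only as a hedged sketch (the paper encodes visibility in UV space, and its exact procedure may differ), is to accumulate transmittance along a ray marched from the point toward the light:

import numpy as np

def light_visibility(density_fn, x, light_dir, n_steps=64, step=0.02):
    # density_fn: callable mapping (N, 3) points to (N,) densities sigma
    # x:          (3,) query point on the actor's surface
    # light_dir:  (3,) unit direction from x toward the light
    # March from x toward the light, accumulating optical depth.
    ts = step * (np.arange(n_steps) + 0.5)                 # midpoint samples
    pts = x[None, :] + ts[:, None] * light_dir[None, :]    # (n_steps, 3)
    sigma = density_fn(pts)                                # (n_steps,)
    # Transmittance T = exp(-sum sigma * dt); T near 0 means self-shadowed.
    return float(np.exp(-np.sum(sigma) * step))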

Citation

@inproceedings{relightneuralactor2024eccv,
  title     = {Relightable Neural Actor with Intrinsic Decomposition and Pose Control},
  author    = {Luvizon, Diogo and Golyanik, Vladislav and Kortylewski, Adam and Habermann, Marc and Theobalt, Christian},
  booktitle = {European Conference on Computer Vision (ECCV)},
  year      = {2024},
}