Bridging the gap between the synthesized world and the real world to achieve physically accurate relighting, appearance editing, and capture.
Recent advances in generative AI, graphics, and vision have led to remarkable progress in relighting, appearance capture and editing, and inverse rendering—spanning scales from small objects to humans and entire scenes. The next major opportunity lies in bridging the gap between the synthesized world and the real world to achieve physically accurate relighting, appearance editing, and capture.
This workshop brings together researchers across these areas to highlight recent advances, discuss open challenges, and explore future directions. A cross-cutting theme will be the use of generative AI in physical appearance modeling.
Meta: Digital Humans & Capture
Univ. of Washington: Generative Models
Google DeepMind: Generative Relighting
NVIDIA & University of Toronto: Inverse Rendering
Zhejiang University: Material Acquisition
UIUC: Physics-based Rendering
Tentative Half-Day Program
Contact: xzhou@mpi-inf.mpg.de or mhaberma@mpi-inf.mpg.de