DEMEA: Deep Mesh Autoencoders
for Non-rigidly Deforming Objects
Abstract
Mesh autoencoders are commonly used for dimensionality reduction, sampling, and mesh modeling. We propose a general-purpose DEep MEsh Autoencoder (DEMEA) which adds a novel embedded deformation layer to a graph-convolutional mesh autoencoder. The embedded deformation layer (EDL) is a differentiable deformable geometric proxy which explicitly models point displacements of non-rigid deformations in a lower-dimensional space and serves as a local rigidity regularizer. DEMEA decouples the parameterization of the deformation from the final mesh resolution, since the deformation is defined over a lower-dimensional embedded deformation graph. We perform a large-scale study on four different datasets of deformable objects. Reasoning about the local rigidity of meshes using the EDL allows us to achieve higher-quality results for highly deformable objects than directly regressing vertex positions. We demonstrate multiple applications of DEMEA, including non-rigid 3D reconstruction from depth and shading cues, non-rigid surface tracking, and the transfer of deformations across different meshes.
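To make the embedded deformation idea concrete, the sketch below shows the classic embedded deformation model that such a layer builds on: each graph node carries a rotation and translation, and every mesh vertex is deformed as a weighted blend of the per-node rigid transforms. This is a minimal NumPy illustration of the general technique, not the actual DEMEA layer; all function and variable names here are illustrative.

```python
import numpy as np

def embedded_deformation(vertices, nodes, rotations, translations, weights):
    """Deform mesh vertices via an embedded deformation graph.

    Each vertex v_i is a weighted blend of per-node rigid transforms:
        v_i' = sum_j w_ij * (R_j @ (v_i - g_j) + g_j + t_j)

    vertices:     (V, 3) mesh vertex positions
    nodes:        (K, 3) deformation graph node positions g_j
    rotations:    (K, 3, 3) per-node rotation matrices R_j
    translations: (K, 3) per-node translations t_j
    weights:      (V, K) skinning weights, each row summing to 1
    """
    local = vertices[:, None, :] - nodes[None, :, :]       # (V, K, 3)
    rotated = np.einsum('kab,vkb->vka', rotations, local)  # apply each R_j
    per_node = rotated + nodes[None, :, :] + translations[None, :, :]
    return np.einsum('vk,vka->va', weights, per_node)      # blend over nodes

# Tiny example: two nodes with identity rotations; the second node
# translates upward, and each vertex follows its nearest node.
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
nodes = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
R = np.stack([np.eye(3), np.eye(3)])
t = np.array([[0.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
w = np.array([[1.0, 0.0], [0.0, 1.0]])
deformed = embedded_deformation(verts, nodes, R, t, w)
# The first vertex stays put; the second moves up with its node.
```

Because the graph has far fewer nodes than the mesh has vertices, the deformation is parameterized in a lower-dimensional space, and the per-node rigid transforms naturally encourage locally rigid motion.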
Downloads
Additional Videos
- Highlights (1.5 mins)
- Talk (10 mins)
Citation
@inproceedings{Tretschk2020DEMEA,
  author    = {Tretschk, Edgar and Tewari, Ayush and Zollh\"{o}fer, Michael and Golyanik, Vladislav and Theobalt, Christian},
  title     = "{{DEMEA}: Deep Mesh Autoencoders for Non-Rigidly Deforming Objects}",
  booktitle = {European Conference on Computer Vision (ECCV)},
  year      = "2020"
}
Acknowledgments
This work was supported by the ERC Consolidator Grant 4DRepLy (770784), the Max Planck Center for Visual Computing and Communications (MPC-VCC), and an Oculus research grant.
Contact
For questions or clarifications, please get in touch with:
Edgar Tretschk tretschk@mpi-inf.mpg.de
Vladislav Golyanik golyanik@mpi-inf.mpg.de