We introduce Displacements on Voxels Networks (DispVoxNets), a new supervised-learning framework for non-rigid point set alignment that abstracts away from the point set representation and regresses 3D displacement fields on regularly sampled proxy voxel grids. Thanks to recently released collections of deformable objects with known intra-state correspondences, DispVoxNets learn a deformation model and further priors (e.g., weak point topology preservation) for different object categories such as cloth, human bodies and faces. DispVoxNets cope with large deformations, noise and clustered outliers more robustly than the state of the art. At test time, our approach runs orders of magnitude faster than previous techniques. All properties of DispVoxNets are ascertained numerically and qualitatively in extensive experiments and comparisons to several previous methods.
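To illustrate the core idea of a displacement field stored on a proxy voxel grid, the sketch below trilinearly samples a grid of per-voxel 3D displacements at arbitrary point locations and warps the point set accordingly. This is only a minimal NumPy illustration under assumed conventions (function name, grid layout `(X, Y, Z, 3)`, scalar voxel size); in DispVoxNets the displacement grid itself is regressed by a network, which is not reproduced here.

```python
import numpy as np

def warp_points_with_voxel_displacements(points, disp_grid, grid_min, voxel_size):
    """Warp an (N, 3) point set by trilinearly sampling a displacement
    field stored on a regular voxel grid of shape (X, Y, Z, 3).
    Illustrative sketch only; names and layout are assumptions."""
    # Continuous voxel coordinates of each point.
    coords = (points - grid_min) / voxel_size
    i0 = np.floor(coords).astype(int)
    dims = np.array(disp_grid.shape[:3])
    i0 = np.clip(i0, 0, dims - 2)          # keep the 2x2x2 neighborhood in bounds
    frac = coords - i0                      # fractional position inside the cell
    disp = np.zeros_like(points)
    # Accumulate trilinear weights over the 8 corner voxels of each cell.
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((frac[:, 0] if dx else 1.0 - frac[:, 0])
                     * (frac[:, 1] if dy else 1.0 - frac[:, 1])
                     * (frac[:, 2] if dz else 1.0 - frac[:, 2]))
                disp += w[:, None] * disp_grid[i0[:, 0] + dx,
                                               i0[:, 1] + dy,
                                               i0[:, 2] + dz]
    return points + disp
```

Because the trilinear weights sum to one, a constant displacement grid shifts every point by exactly that constant, which is a convenient sanity check for the sampling code.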



BibTeX, 1 KB

@inproceedings{Shimada2019DispVoxNets,
       author = {{Shimada}, Soshi and {Golyanik}, Vladislav and {Tretschk}, Edgar and {Stricker}, Didier and {Theobalt}, Christian},
       title = {DispVoxNets: Non-Rigid Point Set Alignment with Supervised Learning Proxies},
       booktitle = {International Conference on 3D Vision (3DV)},
       year = {2019}
}

This work was supported by the project VIDETE (01IW18002) of the German Federal Ministry of Education and Research (BMBF) and the ERC Consolidator Grant 4DReply (770784).


For questions and clarifications, please get in touch with:
Soshi Shimada sshimada@mpi-inf.mpg.de
Vladislav Golyanik golyanik@mpi-inf.mpg.de
