VolumeDeform:
Real-time Volumetric Non-rigid Reconstruction

ECCV 2016

M. Innmann¹   M. Zollhöfer²   M. Nießner³   C. Theobalt²   M. Stamminger¹
¹ University of Erlangen-Nuremberg   ² Max Planck Institute for Informatics   ³ Stanford University


Abstract

We present a novel approach for reconstructing dynamic geometric shapes with a single hand-held consumer-grade RGB-D sensor at real-time rates. Our method does not require a pre-defined shape template to start from and builds up the scene model from scratch during the scanning process. Geometry and motion are parameterized in a unified manner by a volumetric representation that encodes both a distance field of the surface geometry and the non-rigid space deformation. Motion tracking combines a set of extracted sparse color features with a dense depth-based constraint formulation. This enables accurate tracking and drastically reduces the drift inherent in standard model-to-depth alignment. We cast finding the optimal deformation of space as a non-linear regularized variational optimization problem that enforces local smoothness and proximity to the input constraints. The problem is solved in real time at the camera's capture rate using a data-parallel flip-flop optimization strategy. Our results demonstrate robust tracking even for fast motion and scenes that lack geometric features.
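At its core, the deformation is obtained by minimizing a regularized variational energy. As a rough sketch of the formulation described above (the symbols, term names, and weights below are illustrative placeholders, not the paper's exact notation), the energy combines the sparse and dense alignment terms with a smoothness regularizer:

    E(T) = w_s * E_sparse(T) + w_d * E_dense(T) + w_r * E_reg(T)

Here T denotes the volumetric space deformation, E_sparse penalizes the distance to the matched sparse color features, E_dense measures dense model-to-depth alignment, E_reg enforces local smoothness of the deformation, and w_s, w_d, w_r balance the terms.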


Paper · Supplemental · Video · Poster · Data


Bibtex

 
@inproceedings{innmann2016volume,
title = {{VolumeDeform: Real-time Volumetric Non-rigid Reconstruction}},
author = {Matthias Innmann and Michael Zollh{\"o}fer and Matthias Nie{\ss}ner and Christian Theobalt and Marc Stamminger},
booktitle = {Proceedings of European Conference on Computer Vision ({ECCV})},
numpages = {17},
month = {October},
year = {2016}
}