Download Video: HD (MP4, 71 MB)

This work was published at WACV 2024.


Event cameras report events whenever an individual pixel changes brightness. The discrete and asynchronous nature of events makes recovering pixel brightness signals a challenging task, even if conventional brightness frames are recorded along with the events. Recent works have addressed this task with neural networks, which tend to be biased towards their training distribution. To produce very high output frame rates, all methods must deal with noise in the events. We introduce a new, non-learning-based approach to event-based reconstruction: our model assigns each event an explicit confidence weight to account for the uncertainty arising from noise. We also introduce a novel loss term to balance confidences against each other, and we show that interpolation of brightness signals between events can benefit from Bézier curves. We further demonstrate that allowing brightness changes between exposures can improve reconstruction quality. Our evaluation shows that our method improves the state of the art in the tasks of event-based deblurring and event-based frame interpolation.
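The two ideas named in the abstract, per-event confidence weights and Bézier interpolation between events, can be illustrated with a toy single-pixel sketch. This is not the paper's implementation: all function names, the contrast threshold, and the confidence values below are illustrative assumptions.

```python
import numpy as np

def reconstruct_brightness(polarities, confidences, contrast=0.2):
    """Cumulative log-brightness at each event time (toy model).

    Each event contributes polarity * contrast, scaled by a confidence
    weight in [0, 1] that down-weights events suspected to be noise.
    """
    steps = polarities * contrast * confidences
    return np.cumsum(steps)

def bezier_interp(b0, b1, tau, c0=0.0, c1=0.0):
    """Cubic Bézier curve between brightness samples b0 and b1, tau in [0, 1].

    c0 and c1 offset the inner control points; with both at zero this
    reduces to a smooth blend of b0 and b1 (ease-in/ease-out shape).
    """
    p0, p3 = b0, b1
    p1, p2 = b0 + c0, b1 + c1
    u = 1.0 - tau
    return u**3 * p0 + 3 * u**2 * tau * p1 + 3 * u * tau**2 * p2 + tau**3 * p3

# Four events at one pixel; the second has low confidence (likely noise).
pol = np.array([+1.0, +1.0, -1.0, +1.0])
conf = np.array([1.0, 0.3, 1.0, 0.9])
b = reconstruct_brightness(pol, conf)         # [0.2, 0.26, 0.06, 0.24]
mid = bezier_interp(b[0], b[1], 0.5)          # brightness halfway between events 0 and 1
```

With zero control-point offsets the Bézier curve passes through both endpoints, so the interpolated signal stays consistent with the event-time brightness values while remaining smooth in between.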



@inproceedings{Fox_2024_WACV,
    author    = {Fox, Gereon and Pan, Xingang and Tewari, Ayush and Elgharib, Mohamed and Theobalt, Christian},
    title     = {Unsupervised Event-Based Video Reconstruction},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {January},
    year      = {2024},
    pages     = {4179-4188}
}


We thank Kartik Teotia for helping with the evaluation of related methods, and Viktor Rudnev for supplying some of the sequences used in the evaluation.
