Deblur-NSFF: Neural Scene Flow Fields for Blurry Dynamic Scenes

Bibliographic Details
Published in: Proceedings / IEEE Workshop on Applications of Computer Vision, pp. 3646-3655
Main Authors: Luthra, Achleshwar; Gantha, Shiva Souhith; Song, Xiyun; Yu, Heather; Lin, Zongfang; Peng, Liang
Format: Conference Proceeding
Language: English
Published: IEEE, 03.01.2024
ISSN: 2642-9381
DOI: 10.1109/WACV57701.2024.00362

Summary: In this work, we present a method for novel view and time synthesis of complex dynamic scenes when the input video suffers from blur caused by camera or object motion, or by defocus. Neural Scene Flow Fields (NSFF) has shown remarkable results by training a dynamic NeRF to capture motion in the scene, but it is not robust to unstable camera handling, which can lead to blurred renderings. We propose Deblur-NSFF, a method that learns spatially-varying blur kernels to simulate the blurring process and gradually learns a sharp, time-conditioned NeRF representation. We describe how to optimize our representation for sharp space-time view synthesis. Given blurry input frames, we perform both quantitative and qualitative comparisons with state-of-the-art methods on a modified NVIDIA Dynamic Scene dataset. We also compare our method with Deblur-NeRF, a method designed to handle blur in static scenes. Results demonstrate that our method outperforms prior work.
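The core idea in the summary, modeling a blurry pixel as a learned-weight combination of several sharp renders, can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: `render_sharp` is a hypothetical stand-in for a time-conditioned NeRF, and the kernel size, offsets, and weighting scheme shown here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def render_sharp(offsets, t):
    # Hypothetical stand-in for a sharp, time-conditioned NeRF render:
    # returns one RGB color per 2D sample offset around a pixel.
    shade = 0.5 + 0.1 * np.sin(offsets.sum(axis=-1, keepdims=True) + t)
    return shade * np.ones((len(offsets), 3))

def blurry_pixel(offsets, logits, t):
    """Simulate the blurring process for one pixel.

    offsets: (K, 2) learnable sample positions (the blur kernel's support)
    logits:  (K,)   learnable per-pixel kernel logits (softmax -> weights)
    t:       scene time conditioning the sharp render
    """
    w = np.exp(logits - logits.max())
    w = w / w.sum()                          # kernel weights sum to 1
    colors = render_sharp(offsets, t)        # (K, 3) sharp colors
    return (w[:, None] * colors).sum(axis=0) # composited blurry color

# One pixel's spatially-varying kernel: K samples with learnable placement.
K = 5
offsets = rng.normal(scale=0.5, size=(K, 2))
logits = np.zeros(K)                         # uniform kernel at initialization
c = blurry_pixel(offsets, logits, t=0.0)     # (3,) simulated blurry RGB
```

During training, the simulated blurry color would be compared against the observed blurry frame, so gradients flow into both the kernel parameters and the underlying sharp representation; at test time the kernels are dropped and the sharp NeRF is rendered directly.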