Deblur-NSFF: Neural Scene Flow Fields for Blurry Dynamic Scenes
| Published in | Proceedings / IEEE Workshop on Applications of Computer Vision pp. 3646 - 3655 |
|---|---|
| Main Authors | Luthra, Achleshwar; Gantha, Shiva Souhith; Song, Xiyun; Yu, Heather; Lin, Zongfang; Peng, Liang |
| Format | Conference Proceeding |
| Language | English |
| Published | IEEE, 03.01.2024 |
| Subjects | 3D computer vision; Algorithms; Applications; Cameras; Computer vision; Dynamics; Interpolation; Rendering (computer graphics); Training; Virtual / augmented reality |
| ISSN | 2642-9381 |
| DOI | 10.1109/WACV57701.2024.00362 |
| Abstract | In this work, we present a method for novel view and time synthesis of complex dynamic scenes when the input video suffers from blur caused by camera or object motion, or from out-of-focus blur. Neural Scene Flow Fields (NSFF) has shown remarkable results by training a dynamic NeRF to capture motion in the scene, but the method is not robust to unstable camera handling, which can lead to blurred renderings. We propose Deblur-NSFF, a method that learns spatially-varying blur kernels to simulate the blurring process and gradually learns a sharp time-conditioned NeRF representation. We describe how to optimize our representation for sharp space-time view synthesis. Given blurry input frames, we perform both quantitative and qualitative comparisons with state-of-the-art methods on a modified NVIDIA Dynamic Scene dataset. We also compare against Deblur-NeRF, a method designed to handle blur in static scenes. The results demonstrate that our method outperforms prior work. |
|---|---|
| Authors | Achleshwar Luthra (Carnegie Mellon University, achleshl@andrew.cmu.edu); Shiva Souhith Gantha (Georgia Institute of Technology, sgantha3@gatech.edu); Xiyun Song (Futurewei Technologies, xsong@futurewei.com); Heather Yu (Futurewei Technologies, hyu@futurewei.com); Zongfang Lin (Futurewei Technologies, zlin1@futurewei.com); Liang Peng (Futurewei Technologies, lpeng@futurewei.com) |
| CODEN | IEEPAD |
| Discipline | Applied Sciences |
| EISBN | 9798350318920 |
| EndPage | 3655 |
| Genre | orig-research |
| PageCount | 10 |
| PublicationDate | 2024-01-03 |
| PublicationTitle | Proceedings / IEEE Workshop on Applications of Computer Vision |
| PublicationTitleAbbrev | WACV |
| PublicationYear | 2024 |
| Publisher | IEEE |
| StartPage | 3646 |
| SubjectTerms | 3D computer vision; Algorithms; Applications; Cameras; Computer vision; Dynamics; Interpolation; Rendering (computer graphics); Training; Virtual / augmented reality |
| Title | Deblur-NSFF: Neural Scene Flow Fields for Blurry Dynamic Scenes |
| URI | https://ieeexplore.ieee.org/document/10483942 |
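The abstract's central idea, simulating the blurring process with learned spatially-varying kernels applied to sharp renderings and optimizing against the blurry inputs, can be sketched as follows. This is a minimal illustrative NumPy sketch; the function names, tensor shapes, and softmax parameterization of the kernel are assumptions for exposition, not the authors' implementation:

```python
import numpy as np

def simulate_blur(sharp_renders, kernel_logits):
    """Combine K sharp RGB samples per pixel with a learned,
    spatially-varying kernel (softmax over K) to predict the
    observed blurry pixel.

    sharp_renders: (H, W, K, 3) sharp renderings per pixel
    kernel_logits: (H, W, K) per-pixel kernel weights, pre-softmax
    returns:       (H, W, 3) predicted blurry image
    """
    shifted = kernel_logits - kernel_logits.max(axis=-1, keepdims=True)
    w = np.exp(shifted)
    w = w / w.sum(axis=-1, keepdims=True)              # per-pixel softmax
    return (w[..., None] * sharp_renders).sum(axis=2)  # weighted average

def photometric_loss(pred_blurry, observed_blurry):
    """Mean-squared photometric error; in training this gradient would
    flow into both the kernel weights and the sharp NeRF renderings."""
    return float(np.mean((pred_blurry - observed_blurry) ** 2))
```

With uniform (all-zero) logits the kernel degenerates to a plain average of the K sharp samples; training would sharpen or spread the kernel per pixel to match the observed motion or defocus blur, while the underlying time-conditioned NeRF stays sharp.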