Deep Image Deblurring: A Survey
Published in | International Journal of Computer Vision, Vol. 130, No. 9, pp. 2103-2130
Main Authors | Zhang, Kaihao; Ren, Wenqi; Luo, Wenhan; Lai, Wei-Sheng; Stenger, Björn; Yang, Ming-Hsuan; Li, Hongdong
Format | Journal Article
Language | English
Published | New York: Springer US, 01.09.2022
ISSN | 0920-5691; 1573-1405
DOI | 10.1007/s11263-022-01633-5
Abstract | Image deblurring is a classic problem in low-level computer vision with the aim to recover a sharp image from a blurred input image. Advances in deep learning have led to significant progress in solving this problem, and a large number of deblurring networks have been proposed. This paper presents a comprehensive and timely survey of recently published deep-learning based image deblurring approaches, aiming to serve the community as a useful literature review. We start by discussing common causes of image blur, introduce benchmark datasets and performance metrics, and summarize different problem formulations. Next, we present a taxonomy of methods using convolutional neural networks (CNN) based on architecture, loss function, and application, offering a detailed review and comparison. In addition, we discuss some domain-specific deblurring applications including face images, text, and stereo image pairs. We conclude by discussing key challenges and future research directions.
Audience | Academic |
Author | Zhang, Kaihao (Australian National University); Ren, Wenqi (Sun Yat-sen University); Luo, Wenhan (Sun Yat-sen University); Lai, Wei-Sheng (School of Engineering, University of California at Merced); Stenger, Björn (Rakuten Institute of Technology, Rakuten Group Inc); Yang, Ming-Hsuan (School of Engineering, University of California at Merced; ORCID 0000-0003-4848-2304; mhyang@ucmerced.edu); Li, Hongdong (Australian National University)
Copyright | The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2022
Discipline | Applied Sciences; Computer Science
EISSN | 1573-1405 |
ISSN | 0920-5691 |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 9 |
Keywords | Deep learning; Low-level vision; Image restoration; Image deblurring; Image enhancement
ORCID | 0000-0003-4848-2304 (Yang, Ming-Hsuan)
PageCount | 28 |
PublicationDate | 2022-09-01
PublicationPlace | New York
PublicationTitle | International journal of computer vision |
PublicationTitleAbbrev | Int J Comput Vis |
PublicationYear | 2022 |
Publisher | Springer US; Springer; Springer Nature B.V
– reference: Lu, B., Chen, J.C., & Chellappa, R. (2019). Unsupervised domain-specific deblurring via disentangled representations. In IEEE Conference on Computer Vision and Pattern Recognition. – reference: Shen, Z., Lai, W.S., Xu, T., Kautz, J., & Yang, M.H. (2018). Deep semantic face deblurring. In IEEE Conference on Computer Vision and Pattern Recognition. – reference: Le, V., Brandt, J., Lin, Z., Bourdev, L., & Huang, T.S. (2012). Interactive facial feature localization. In European Conference on Computer Vision. – reference: Niklaus, S., Mai, L., & Liu, F. (2017). Video frame interpolation via adaptive separable convolution. In IEEE International Conference on Computer Vision. – reference: Zhong, Z., Gao, Y., Yinqiang, Z., & Bo, Z. (2020). Efficient spatio-temporal recurrent neural network for video deblurring. In European Conference on Computer Vision. – reference: Gong, D., Yang, J., Liu, L., Zhang, Y., Reid, I., Shen, C., Van Den Hengel, A., & Shi, Q. (2017). From motion blur to motion flow: a deep learning solution for removing heterogeneous motion blur. In IEEE Conference on Computer Vision and Pattern Recognition. – reference: Lai, W.S., Huang, J.B., Hu, Z., Ahuja, N., & Yang, M.H. (2016). A comparative study for single image blind deblurring. In IEEE Conference on Computer Vision and Pattern Recognition. – reference: Chakrabarti, A. (2016). A neural approach to blind motion deblurring. In European Conference on Computer Vision. – reference: WhyteOSivicJZissermanAPonceJNon-uniform deblurring for shaken imagesInternational Journal of Computer Vision2012982168186291235910.1007/s11263-011-0502-7 – reference: Sim, T., Baker, S., & Bsat, M. (2002). The cmu pose, illumination, and expression (PIE) database. In IEEE International Conference on Automatic Face Gesture Recognition. 
– reference: XuXPanJZhangYJYangMHMotion blur kernel estimation via deep learningIEEE Transactions on Image Processing2017271194205372984210.1109/TIP.2017.2753658 – reference: Pan, J., Hu, Z., Su, Z., & Yang, M.H. (2014). Deblurring text images via l0-regularized intensity and gradient prior. In IEEE Conference on Computer Vision and Pattern Recognition. – reference: Kettunen, M., Härkönen, E., & Lehtinen, J. (2019). E-lpips: robust perceptual image similarity via random transformation ensembles. arXiv preprint arXiv:1906.03973 – reference: Rim, J., Lee, H., Won, J., & Cho, S. (2020). Real-world blur dataset for learning and benchmarking deblurring algorithms. In European Conference on Computer Vision. – reference: HummelRAKimiaBZuckerSWDeblurring gaussian blurComputer Vision, Graphics, and Image Processing1987381668010.1016/S0734-189X(87)80153-6 – reference: Ledig, C., Theis, L., Huszár, F., Caballero, J., Cunningham, A., Acosta, A., Aitken, A., Tejani, A., Totz, J., Wang, Z., et al. (2017). Photo-realistic single image super-resolution using a generative adversarial network. In IEEE Conference on Computer Vision and Pattern Recognition. – reference: Li, P., Prieto, L., Mery, D., & Flynn, P. (2018). Face recognition in low quality images: a survey. arXiv preprint arXiv:1805.11519. – reference: Lumentut, J.S., Kim, T.H., Ramamoorthi, R., & Park, I.K. (2019). Fast and full-resolution light field deblurring using a deep neural network. arXiv preprint arXiv:1904.00352 – reference: Kupyn, O., Budzan, V., Mykhailych, M., Mishkin, D., & Matas, J. (2018). Deblurgan: Blind motion deblurring using conditional adversarial networks. In IEEE Conference on Computer Vision and Pattern Recognition. – reference: WangZBovikACSheikhHRSimoncelliEPImage quality assessment: from error visibility to structural similarityIEEE Transactions on Image Processing200413460061210.1109/TIP.2003.819861 – reference: Xu, L., Ren, J.S., Liu, C., & Jia, J. (2014). 
Deep convolutional neural network for image deconvolution. In Advances in Neural Information Processing Systems. – reference: Shen, Z., Wang, W., Lu, X., Shen, J., Ling, H., Xu, T., & Shao, L. (2019). Human-aware motion deblurring. In IEEE International Conference on Computer Vision. – reference: Kang, S.B. (2007). Automatic removal of chromatic aberration from a single image. In IEEE Conference on Computer Vision and Pattern Recognition. – reference: Dong, J., Roth, S., & Schiele, B. (2020). Deep wiener deconvolution: Wiener meets deep learning for image deblurring. Advances in Neural Information Processing Systems. – reference: Sun, L., Cho, S., Wang, J., & Hays, J. (2013). Edge-based blur kernel estimation using patch priors. In IEEE International Conference on Computational Photography. – reference: Ren, W., Pan, J., Cao, X., & Yang, M.H. (2017). Video deblurring via semantic segmentation and pixel-wise non-linear kernel. In IEEE International Conference on Computer Vision. – reference: Suin, M., Purohit, K., & Rajagopalan, A. (2020). Spatially-attentive patch-hierarchical network for adaptive motion deblurring. arXiv preprint arXiv:2004.05343 – reference: Shen, Z., Lai, W.S., Xu, T., Kautz, J., & Yang, M.H. (2020). Exploiting semantics for face image deblurring. International Journal of Computer Vision pp. 1–18. – reference: Kruse, J., Rother, C., & Schmidt, U. (2017). Learning to push the limits of efficient fft-based image deconvolution. In IEEE International Conference on Computer Vision. – reference: Pan, J., Hu, Z., Su, Z., & Yang, M.H. (2014). Deblurring face images with exemplars. In European Conference on Computer Vision. – reference: Lin, S., Zhang, J., Pan, J., Jiang, Z., Zou, D., Wang, Y., Chen, J., & Ren, J. (2020). Learning event-driven video deblurring and interpolation. In European Conference on Computer Vision. – reference: Masia, B., Corrales, A., Presa, L., & Gutierrez, D. (2011). Coded apertures for defocus deblurring. 
In Symposium Iberoamericano de Computacion Grafica. – reference: Gong, D., Zhang, Z., Shi, Q., van den Hengel, A., Shen, C., & Zhang, Y. (2020). Learning deep gradient descent optimization for image deconvolution. IEEE Transactions on Neural Networks and Learning Systems. – reference: Lu, Y. (2017). Out-of-focus blur: Image de-blurring. arXiv preprint arXiv:1710.00620 – reference: Jin, M., Roth, S., & Favaro, P. (2017). Noise-blind image deblurring. In IEEE Conference on Computer Vision and Pattern Recognition. – reference: Johnson, J., Alahi, A., & Fei-Fei, L. (2016). Perceptual losses for real-time style transfer and super-resolution. In European Conference on Computer Vision. – reference: Xu, L., Zheng, S., & Jia, J. (2013). Unnatural l0 sparse representation for natural image deblurring. In IEEE Conference on Computer Vision and Pattern Recognition. – reference: Gast, J., Sellent, A., & Roth, S. (2016). Parametric object motion from blur. In IEEE Conference on Computer Vision and Pattern Recognition. – reference: Purohit, K., & Rajagopalan, A. (2019). Region-adaptive dense network for efficient motion deblurring. arXiv preprint arXiv:1903.11394 – reference: Hirsch, M., Schuler, C.J., Harmeling, S., & Schölkopf, B. (2011). Fast removal of non-uniform camera shake. In IEEE International Conference on Computer Vision. – reference: Madam Nimisha, T., Sunil, K., & Rajagopalan, A. (2018). Unsupervised class-specific deblurring. In European Conference on Computer Vision – reference: Blau, Y., & Michaeli, T. (2018). The perception-distortion tradeoff. In IEEE Conference on Computer Vision and Pattern Recognition. – reference: Brooks, T., & Barron, J.T. (2019). Learning to synthesize motion blur. In IEEE Conference on Computer Vision and Pattern Recognition. – reference: Chen, X., He, X., Yang, J., & Wu, Q. (2011). An effective document image deblurring algorithm. In IEEE Conference on Computer Vision and Pattern Recognition. – reference: Cho, S., Wang, J., & Lee, S. 
(2011). Handling outliers in non-blind image deconvolution. In IEEE International Conference on Computer Vision. – reference: Ren, W., Yang, J., Deng, S., Wipf, D., Cao, X., & Tong, X. (2019). Face video deblurring using 3d facial priors. In IEEE International Conference on Computer Vision. – reference: Shi, J., Xu, L., & Jia, J. (2014). Discriminative blur detection features. In IEEE Conference on Computer Vision and Pattern Recognition. – reference: Fergus, R., Singh, B., Hertzmann, A., Roweis, S.T., & Freeman, W.T. (2006). Removing camera shake from a single photograph. In ACM SIGGRAPH. – reference: Jiang, Z., Zhang, Y., Zou, D., Ren, J., Lv, J., & Liu, Y. (2020). Learning event-based motion deblurring. arXiv preprint arXiv:2004.05794 – reference: MoorthyAKBovikACBlind image quality assessment: From natural scene statistics to perceptual qualityIEEE Transactions on Image Processing2011201233503364285048110.1109/TIP.2011.2147325 – reference: WhyteOSivicJZissermanADeblurring shaken and partially saturated imagesInternational Journal of Computer Vision2014110218520110.1007/s11263-014-0727-3 – reference: Abuolaim, A., & Brown, M.S. (2020). Defocus deblurring using dual-pixel data. In European Conference on Computer Vision. – reference: He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition. – reference: Purohit, K., Shah, A., & Rajagopalan, A. (2019). Bringing alive blurred moments. In IEEE Conference on Computer Vision and Pattern Recognition. – reference: Xia, F., Wang, P., Chen, L.C., & Yuille, A.L. (2016). Zoom better to see clearer: Human and object parsing with hierarchical auto-zoom net. In European Conference on Computer Vision. – reference: Zhang, H., Dai, Y., Li, H., & Koniusz, P. (2019). Deep stacked hierarchical multi-patch network for image deblurring. In IEEE Conference on Computer Vision and Pattern Recognition. 
– reference: Zhang, X., Dong, H., Hu, Z., Lai, W.S., Wang, F., & Yang, M.H. (2018). Gated fusion network for joint image deblurring and super-resolution. arXiv preprint arXiv:1807.10806 – reference: BaeSDurandFDefocus magnificationComputer Graphics Forum200726357157910.1111/j.1467-8659.2007.01080.x – reference: Kupyn, O., Martyniuk, T., Wu, J., & Wang, Z. (2019). Deblurgan-v2: Deblurring (orders-of-magnitude) faster and better. In IEEE International Conference on Computer Vision. – reference: Zhang, K., Luo, W., Zhong, Y., Stenger, B., Ma, L., Liu, W., & Li, H. (2020). Deblurring by realistic blurring. In IEEE Conference on Computer Vision and Pattern Recognition. – reference: Ren, D., Zhang, K., Wang, Q., Hu, Q., & Zuo, W. (2019). Neural blind deconvolution using deep priors. arXiv preprint arXiv:1908.02197 – reference: Xu, X., Sun, D., Pan, J., Zhang, Y., Pfister, H., & Yang, M.H. (2017). Learning to super-resolve blurry face and text images. In IEEE International Conference on Computer Vision. – reference: Köhler, R., Hirsch, M., Mohler, B., Schölkopf, B., & Harmeling, S. (2012). Recording and playback of camera shake: Benchmarking blind deconvolution with a real-world database. In European Conference on Computer Vision. – reference: MittalAMoorthyAKBovikACNo-reference image quality assessment in the spatial domainIEEE Transactions on Image Processing2012211246954708300114510.1109/TIP.2012.2214050 – reference: Nah, S., Baik, S., Hong, S., Moon, G., Son, S., Timofte, R., & Mu Lee, K. (2019). Ntire 2019 challenge on video deblurring and super-resolution: Dataset and study. In IEEE Conference on Computer Vision and Pattern Recognition Workshop. – reference: Simonyan, K., & Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 – reference: Aittala, M., & Durand, F. (2018). Burst image deblurring using permutation invariant convolutional neural networks. In European Conference on Computer Vision. 
– reference: Jolicoeur-Martineau, A. (2018). The relativistic discriminator: a key element missing from standard gan. arXiv preprint arXiv:1807.00734 – reference: Zhong, L., Cho, S., Metaxas, D., Paris, S., & Wang, J. (2013). Handling noise in single image deblurring using directional filters. In IEEE Conference on Computer Vision and Pattern Recognition. – reference: Anwar, S., Hayder, Z., & Porikli, F. (2017). Depth estimation and blur removal from a single out-of-focus image. In British Machine Vision Conference. – reference: Cho, S., & Lee, S. (2009). Fast motion deblurring. In ACM SIGGRAPH Asia. – reference: Kim, T.H., Sajjadi, M.S., Hirsch, M., & Schölkopf, B. (2018). Spatio-temporal transformer network for video restoration. In European Conference on Computer Vision. – reference: Zhu, J.Y., Park, T., Isola, P., & Efros, A.A. (2017). Unpaired image-to-image translation using cycle-consistent adversarial networks. In IEEE International Conference on Computer Vision. – reference: Cho, H., Wang, J., & Lee, S. (2012). Text image deblurring using text-specific properties. In European Conference on Computer Vision. – reference: Kaufman, A., & Fattal, R. (2020). Deblurring using analysis-synthesis networks pair. arXiv preprint arXiv:2004.02956 – reference: Nan, Y., Quan, Y., & Ji, H. (2020). Variational-em-based deep learning for noise-blind image deblurring. In IEEE Conference on Computer Vision and Pattern Recognition. – reference: Sellent, A., Rother, C., & Roth, S. (2016). Stereo video deblurring. In European Conference on Computer Vision. – reference: Chen, H., Gu, J., Gallo, O., Liu, M.Y., Veeraraghavan, A., & Kautz, J. (2018). Reblur2deblur: Deblurring videos via self-supervised learning. In IEEE International Conference on Computational Photography. – reference: Liu, Z., Luo, P., Wang, X., & Tang, X. (2015). Deep learning face attributes in the wild. In IEEE International Conference on Computer Vision. – reference: Pan, J., Bai, H., & Tang, J. (2020). 
Cascaded deep video deblurring using temporal sharpness prior. In IEEE Conference on Computer Vision and Pattern Recognition. – reference: Aljadaany, R., Pal, D.K., & Savvides, M. (2019). Douglas-rachford networks: Learning both the image prior and data fidelity terms for blind image deconvolution. In IEEE Conference on Computer Vision and Pattern Recognition. – reference: Pham, H., Guan, M., Zoph, B., Le, Q., & Dean, J. (2018). Efficient neural architecture search via parameters sharing. In International Conference on Machine Learning. – reference: SaadMABovikACCharrierCBlind image quality assessment: A natural scene statistics approach in the dct domainIEEE Transactions on Image Processing201221833393352296043010.1109/TIP.2012.2191563 – reference: Mitsa, T., & Varkur, K.L. (1993). Evaluation of contrast sensitivity functions for the formulation of quality measures incorporated in halftoning algorithms. In IEEE International Conference on Acoustics, Speech, and Signal Processing. – reference: Mittal, A., Soundararajan, R., & Bovik, A. C. (2012). Making a “completely blind” image quality analyzer. IEEE Signal Processing Letters,20(3), 209–212. – reference: ChrysosGGFavaroPZafeiriouSMotion deblurring of facesInternational Journal of Computer Vision20191276–780182310.1007/s11263-018-1138-7 – reference: PanciGCampisiPColonneseSScaranoGMultichannel blind image deconvolution using the bussgang algorithm: Spatial and multiresolution approachesIEEE Transactions on Image Processing2003121113241337202677610.1109/TIP.2003.818022 – reference: Martin, D., Fowlkes, C., Tal, D., & Malik, J. (2001). A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In IEEE International Conference on Computer Vision. – reference: Xu, L., & Jia, J. (2010). Two-phase kernel estimation for robust motion deblurring. In European Conference on Computer Vision. 
– reference: KheradmandAMilanfarPA general framework for regularized, similarity-based image restorationIEEE Transactions on Image Processing2014231251365151327505810.1109/TIP.2014.2362059 – reference: Michaeli, T., & Irani, M. (2014). Blind deblurring using internal patch recurrence. In European Conference on Computer Vision. – reference: Tang, C., Zhu, X., Liu, X., Wang, L., & Zomaya, A. (2019). Defusionnet: Defocus blur detection via recurrently fusing and refining multi-scale deep features. In IEEE Conference on Computer Vision and Pattern Recognition. – reference: Li, Y., Tofighi, M., Geng, J., Monga, V., & Eldar, Y. (2019). Deep algorithm unrolling for blind image deblurring. arXiv preprint arXiv:1902.03493 – reference: Schmidt, U., Rother, C., Nowozin, S., Jancsary, J., & Roth, S. (2013). Discriminative non-blind deblurring. In IEEE Conference on Computer Vision and Pattern Recognition. – reference: Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, L., & Polosukhin, I. (2017) Attention is all you need. arXiv preprint arXiv:1706.03762 – reference: He, K., Gkioxari, G., Dollár, P., & Girshick, R. (2017). Mask r-cnn. In IEEE International Conference on Computer Vision. – reference: Sun, L., & Hays, J. (2012). Super-resolution from internet-scale scene matching. In IEEE International Conference on Computational Photography. – reference: Tao, X., Gao, H., Shen, X., Wang, J., & Jia, J. (2018). Scale-recurrent network for deep image deblurring. In IEEE Conference on Computer Vision and Pattern Recognition. – reference: GuCLuXHeYZhangCBlur removal via blurred-noisy image pairIEEE Transactions on Image Processing2021301134535910.1109/TIP.2020.3036745 – reference: Levin, A., Weiss, Y., Durand, F., & Freeman, W.T. (2009). Understanding and evaluating blind deconvolution algorithms. In IEEE Conference on Computer Vision and Pattern Recognition. – reference: Zhou, S., Zhang, J., Zuo, W., Xie, H., Pan, J., & Ren, J.S. (2019). 
Davanet: Stereo deblurring with view aggregation. In IEEE Conference on Computer Vision and Pattern Recognition. – reference: Nimisha, T.M., Kumar Singh, A., & Rajagopalan, A.N. (2017). Blur-invariant deep learning for blind-deblurring. In IEEE International Conference on Computer Vision. – reference: VairyMVenkateshYVDeblurring gaussian blur using a wavelet array transformPattern Recognition199528796597610.1016/0031-3203(94)00146-D – reference: Zhou, S., Zhang, J., Pan, J., Xie, H., Zuo, W., & Ren, J. (2019). Spatio-temporal filter adaptive network for video deblurring. In IEEE International Conference on Computer Vision. – reference: Su, S., Delbracio, M., Wang, J., Sapiro, G., Heidrich, W., & Wang, O. (2017). Deep video deblurring for hand-held cameras. In IEEE Conference on Computer Vision and Pattern Recognition. – reference: Liu, H., Simonyan, K., & Yang, Y. (2018). Darts: Differentiable architecture search. In International Conference on Learning Representations. – reference: Eigen, D., Puhrsch, C., & Fergus, R. (2014). Depth map prediction from a single image using a multi-scale deep network. In Advances in Neural Information Processing Systems. – reference: Zhang, K., Zuo, W., & Zhang, L. (2019). Deep plug-and-play super-resolution for arbitrary blur kernels. In IEEE Conference on Computer Vision and Pattern Recognition. – reference: Zhang, K., Zuo, W., & Zhang, L. (2018). Learning a single convolutional super-resolution network for multiple degradations. In IEEE Conference on Computer Vision and Pattern Recognition. – reference: Ye, P., Kumar, J., Kang, L., & Doermann, D. (2012). Unsupervised feature learning framework for no-reference image quality assessment. In IEEE Conference on Computer Vision and Pattern Recognition. – reference: Nah, S., Hyun Kim, T., & Mu Lee, K. (2017). Deep multi-scale convolutional neural network for dynamic scene deblurring. In IEEE Conference on Computer Vision and Pattern Recognition. 
– reference: Zhang, J., Pan, J., Ren, J., Song, Y., Bao, L., Lau, R.W., & Yang, M.H. (2018). Dynamic scene deblurring using spatially variant recurrent neural networks. In IEEE Conference on Computer Vision and Pattern Recognition. – reference: Hyun Kim, T., Ahn, B., & Mu Lee, K. (2013). Dynamic scene deblurring. In IEEE International Conference on Computer Vision. – reference: Wang, X., Chan, K.C., Yu, K., Dong, C., & Change Loy, C. (2019). EDVR: Video restoration with enhanced deformable convolutional networks. In IEEE Conference on Computer Vision and Pattern Recognition Workshop. – reference: Godard, C., Mac Aodha, O., & Brostow, G.J. (2017). Unsupervised monocular depth estimation with left-right consistency. In IEEE Conference on Computer Vision and Pattern Recognition. – reference: SheikhHRBovikACImage information and visual qualityIEEE Transactions on Image Processing200615243044410.1109/TIP.2005.859378 – reference: Gao, H., Tao, X., Shen, X., & Jia, J. (2019). Dynamic scene deblurring with parameter selective sharing and nested skip connections. In IEEE Conference on Computer Vision and Pattern Recognition. – reference: Nah, S., Son, S., & Lee, K.M. (2019). Recurrent neural networks with intra-frame iterations for video deblurring. In IEEE Conference on Computer Vision and Pattern Recognition. – reference: Krishnan, D., & Fergus, R. (2009). Fast image deconvolution using hyper-laplacian priors. In Advances in Neural Information Processing Systems. – reference: SonCHParkHMA pair of noisy/blurry patches-based psf estimation and channel-dependent deblurringIEEE Transactions on Image Processing201157417911799 – reference: Zhao, W., Zheng, B., Lin, Q., & Lu, H. (2019). Enhancing diversity of defocus blur detectors via cross-ensemble network. In IEEE Conference on Computer Vision and Pattern Recognition. – reference: Chakrabarti, A., Zickler, T., & Freeman, W.T. (2010). Analyzing spatially-varying blur. 
In IEEE Conference on Computer Vision and Pattern Recognition. – reference: Sim, H., & Kim, M. (2019). A deep motion deblurring network based on per-pixel adaptive kernels with residual down-up and up-down modules. In IEEE Conference on Computer Vision and Pattern Recognition Workshop. – reference: Sun, D., Yang, X., Liu, M.Y., & Kautz, J. (2018). PWC-Net: Cnns for optical flow using pyramid, warping, and cost volume. In IEEE Conference on Computer Vision and Pattern Recognition. – reference: Wang, Z., Simoncelli, E.P., & Bovik, A.C. (2003). Multiscale structural similarity for image quality assessment. In The Asilomar Conference on Signals, Systems, and Computers. – reference: SchulerCJHirschMHarmelingSSchölkopfBLearning to deblurIEEE Transactions on Pattern Analysis and Machine Intelligence20153871439145110.1109/TPAMI.2015.2481418 – reference: Xu, L., Tao, X., & Jia, J. (2014). Inverse kernels for fast spatial deconvolution. In European Conference on Computer Vision. – reference: Bahat, Y., Efrat, N., & Irani, M. (2017). Non-uniform blind deblurring by reblurring. In IEEE Conference on Computer Vision and Pattern Recognition. – reference: Nah, S., Son, S., Timofte, R., & Lee, K.M. (2020). Ntire 2020 challenge on image and video deblurring. arXiv preprint arXiv:2005.01244 – reference: MoorthyAKBovikACA two-step framework for constructing blind image quality indicesIEEE Signal Processing Letters201017551351610.1109/LSP.2010.2043888 – reference: Sun, J., Cao, W., Xu, Z., & Ponce, J. (2015). Learning a convolutional neural network for non-uniform motion blur removal. In IEEE Conference on Computer Vision and Pattern Recognition. – reference: WangZBovikACA universal image quality indexIEEE Signal Processing Letters200293818410.1109/97.995823 – reference: Sun, T., Peng, Y., & Heidrich, W. (2017). Revisiting cross-channel information transfer for chromatic aberration correction. In IEEE International Conference on Computer Vision, pp. 3248–3256. 
– reference: Damera-VenkataNKiteTDGeislerWSEvansBLBovikACImage quality assessment based on a degradation modelIEEE Transactions on Image Processing20009463665010.1109/83.841940 – reference: Jiang, P., Ling, H., Yu, J., & Peng, J. (2013). Salient region detection by ufo: Uniqueness, focusness and objectness. In IEEE International Conference on Computer Vision. – reference: Hacohen, Y., Shechtman, E., & Lischinski, D. (2013). Deblurring by example using dense correspondence. In IEEE International Conference on Computer Vision. – reference: HoßfeldTHeegaardPEVarelaMMöllerSQoe beyond the mos: an in-depth look at qoe via better metrics and their relation to mosQuality and User Experience201611210.1007/s41233-016-0002-1 – reference: Hradiš, M., Kotera, J., Zemcık, P., & Šroubek, F. (2015). Convolutional neural networks for direct text deblurring. In British Machine Vision Conference. – reference: SheikhHRBovikACDe VecianaGAn information fidelity criterion for image quality assessment using natural scene statisticsIEEE Transactions on Image Processing200514122117212810.1109/TIP.2005.859389 – reference: Zhang, R., Isola, P., Efros, A.A., Shechtman, E., & Wang, O. (2018). The unreasonable effectiveness of deep features as a perceptual metric. In IEEE Conference on Computer Vision and Pattern Recognition. – reference: BoracchiGFoiAModeling the performance of image restoration from motion blurIEEE Transactions on Image Processing201221835023517296044310.1109/TIP.2012.2192126 – reference: Ren, S., He, K., Girshick, R., & Sun, J. (2015). Faster r-cnn: Towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems. – reference: Isola, P., Zhu, J.Y., Zhou, T., & Efros, A.A. (2017). Image-to-image translation with conditional adversarial networks. In IEEE Conference on Computer Vision and Pattern Recognition. 
– reference: ZhangKLuoWZhongYMaLLiuWLiHAdversarial spatio-temporal learning for video deblurringIEEE Transactions on Image Processing2018281291301386318210.1109/TIP.2018.2867733 – reference: Zoph, B., & Le, Q.V. (2017). Neural architecture search with reinforcement learning. In International Conference on Learning Representations. – reference: Ren, W., Zhang, J., Ma, L., Pan, J., Cao, X., Zuo, W., Liu, W., & Yang, M.H. (2018). Deep non-blind deconvolution via generalized low-rank approximation. In Advances in Neural Information Processing Systems. – reference: Zoran, D., & Weiss, Y. (2011). From learning models of natural image patches to whole image restoration. In IEEE International Conference on Computer Vision. – reference: LiuLLiuBHuangHBovikACNo-reference image quality assessment based on spatial and spectral entropiesSignal Processing: Image Communication2014298856863 – reference: ChenFMaJAn empirical identification method of gaussian blur parameter for image deblurringIEEE Transactions on Signal Processing200957724672478265016410.1109/TSP.2009.2018358 – reference: Shen, W., Bao, W., Zhai, G., Chen, L., Min, X., & Gao, Z. (2020). Blurry video frame interpolation. In IEEE Conference on Computer Vision and Pattern Recognition. – reference: Zhang, K., Zuo, W., Gu, S., & Zhang, L. (2017). Learning deep cnn denoiser prior for image restoration. In IEEE Conference on Computer Vision and Pattern Recognition. – reference: Park, P.D., Kang, D.U., Kim, J., & Chun, S.Y. (2020). Multi-temporal recurrent neural networks for progressive non-uniform single image deblurring with incremental temporal training. In European Conference on Computer Vision. – reference: Zhang, W., & Cham, W.K. (2009). Single image focus editing. In IEEE International Conference on Computer Vision Workshop. 
– reference: ChenSJShenHLMultispectral image out-of-focus deblurring using interchannel correlationIEEE Transactions on Image Processing2015241144334445339032010.1109/TIP.2015.2465162 – reference: Eslami, S.A., Heess, N., Weber, T., Tassa, Y., Szepesvari, D., Hinton, G.E., et al. (2016). Attend, infer, repeat: Fast scene understanding with generative models. In Advances in Neural Information Processing Systems. – reference: LiLPanJLaiWSGaoCSangNYangMHDynamic scene deblurring by depth guided modelIEEE Transactions on Image Processing2020295273528810.1109/TIP.2020.2980173 – reference: Mustaniemi, J., Kannala, J., Särkkä, S., Matas, J., & Heikkila, J. (2019). Gyroscope-aided motion deblurring with deep networks. In IEEE Winter Conference on Applications of Computer Vision. – reference: Hyun Kim, T., Mu Lee, K., Scholkopf, B., & Hirsch, M. (2017). Online video deblurring via dynamic temporal blending network. In IEEE International Conference on Computer Vision. – volume: 28 start-page: 291 issue: 1 year: 2018 ident: 1633_CR153 publication-title: IEEE Transactions on Image Processing doi: 10.1109/TIP.2018.2867733 – volume: 23 start-page: 5136 issue: 12 year: 2014 ident: 1633_CR54 publication-title: IEEE Transactions on Image Processing doi: 10.1109/TIP.2014.2362059 – ident: 1633_CR156 doi: 10.1109/CVPR.2017.300 – ident: 1633_CR140 doi: 10.1109/ICCV.2017.34 – ident: 1633_CR85 doi: 10.1109/CVPRW.2019.00251 – ident: 1633_CR166 doi: 10.1109/CVPR.2019.01125 – ident: 1633_CR69 – ident: 1633_CR18 doi: 10.1145/1661412.1618491 – ident: 1633_CR75 doi: 10.1007/978-3-030-01249-6_22 – ident: 1633_CR62 doi: 10.1007/978-3-642-33712-3_49 – ident: 1633_CR23 – ident: 1633_CR102 doi: 10.1109/ICCV.2017.123 – volume: 24 start-page: 4433 issue: 11 year: 2015 ident: 1633_CR15 publication-title: IEEE Transactions on Image Processing doi: 10.1109/TIP.2015.2465162 – ident: 1633_CR134 doi: 10.1109/CVPRW.2019.00247 – ident: 1633_CR81 doi: 10.1109/LSP.2012.2227726 – ident: 1633_CR3 doi: 
10.1109/CVPR.2019.01048 – volume: 38 start-page: 66 issue: 1 year: 1987 ident: 1633_CR41 publication-title: Computer Vision, Graphics, and Image Processing doi: 10.1016/S0734-189X(87)80153-6 – ident: 1633_CR168 – ident: 1633_CR93 doi: 10.1007/978-3-319-10584-0_4 – ident: 1633_CR162 doi: 10.1109/CVPR.2019.00911 – ident: 1633_CR61 doi: 10.1109/CVPR.2016.188 – ident: 1633_CR147 doi: 10.1109/ICCV.2017.36 – ident: 1633_CR14 doi: 10.1109/ICCPHOT.2018.8368468 – ident: 1633_CR130 doi: 10.1109/CVPR.2019.00281 – ident: 1633_CR92 doi: 10.1109/CVPR42600.2020.00311 – ident: 1633_CR143 – ident: 1633_CR148 doi: 10.1109/TIP.2020.2990354 – volume: 21 start-page: 3339 issue: 8 year: 2012 ident: 1633_CR106 publication-title: IEEE Transactions on Image Processing doi: 10.1109/TIP.2012.2191563 – volume: 12 start-page: 1324 issue: 11 year: 2003 ident: 1633_CR95 publication-title: IEEE Transactions on Image Processing doi: 10.1109/TIP.2003.818022 – ident: 1633_CR122 doi: 10.1109/CVPR.2017.33 – ident: 1633_CR144 doi: 10.1007/978-3-319-10602-1_3 – ident: 1633_CR27 doi: 10.1109/ISCAS.1999.778770 – ident: 1633_CR1 doi: 10.1007/978-3-030-58607-2_7 – ident: 1633_CR74 doi: 10.1109/LSP.2019.2947379 – ident: 1633_CR50 – ident: 1633_CR154 doi: 10.1109/CVPR42600.2020.00281 – ident: 1633_CR45 doi: 10.1109/ICCV.2013.248 – ident: 1633_CR88 doi: 10.1109/CVPRW50498.2020.00216 – ident: 1633_CR116 doi: 10.1109/ICCV.2019.00567 – ident: 1633_CR57 – ident: 1633_CR155 doi: 10.1109/CVPR42600.2020.00328 – volume: 14 start-page: 2117 issue: 12 year: 2005 ident: 1633_CR112 publication-title: IEEE Transactions on Image Processing doi: 10.1109/TIP.2005.859389 – ident: 1633_CR36 doi: 10.1109/CVPR.2016.90 – ident: 1633_CR127 – ident: 1633_CR26 doi: 10.1145/1179352.1141956 – ident: 1633_CR158 doi: 10.1109/CVPR.2019.00177 – ident: 1633_CR104 – ident: 1633_CR7 – ident: 1633_CR30 doi: 10.1109/CVPR.2017.699 – ident: 1633_CR38 doi: 10.1109/ICPR.2010.579 – ident: 1633_CR133 – ident: 1633_CR63 doi: 10.1109/CVPR.2017.19 – 
ident: 1633_CR86 doi: 10.1109/CVPR.2017.35 – volume: 57 start-page: 1791 issue: 4 year: 2011 ident: 1633_CR121 publication-title: IEEE Transactions on Image Processing – ident: 1633_CR119 doi: 10.1109/AFGR.2002.1004130 – ident: 1633_CR32 doi: 10.1109/TNNLS.2020.2968289 – volume: 15 start-page: 430 issue: 2 year: 2006 ident: 1633_CR111 publication-title: IEEE Transactions on Image Processing doi: 10.1109/TIP.2005.859378 – ident: 1633_CR91 doi: 10.1109/ICCV.2017.509 – ident: 1633_CR141 doi: 10.1007/978-3-319-46454-1_39 – ident: 1633_CR113 doi: 10.1109/CVPR42600.2020.00516 – ident: 1633_CR34 doi: 10.1109/ICCV.2013.296 – ident: 1633_CR77 doi: 10.1111/j.1467-8659.2012.03067.x – volume: 9 start-page: 636 issue: 4 year: 2000 ident: 1633_CR21 publication-title: IEEE Transactions on Image Processing doi: 10.1109/83.841940 – ident: 1633_CR105 doi: 10.1007/978-3-030-58595-2_12 – ident: 1633_CR46 doi: 10.1109/CVPR42600.2020.00338 – volume: 29 start-page: 856 issue: 8 year: 2014 ident: 1633_CR70 publication-title: Signal Processing: Image Communication – ident: 1633_CR115 doi: 10.1007/s11263-019-01288-9 – volume: 26 start-page: 571 issue: 3 year: 2007 ident: 1633_CR5 publication-title: Computer Graphics Forum doi: 10.1111/j.1467-8659.2007.01080.x – ident: 1633_CR68 doi: 10.1007/978-3-030-58598-3_41 – volume: 21 start-page: 3502 issue: 8 year: 2012 ident: 1633_CR9 publication-title: IEEE Transactions on Image Processing doi: 10.1109/TIP.2012.2192126 – ident: 1633_CR47 doi: 10.1109/CVPRW.2018.00118 – ident: 1633_CR19 doi: 10.1109/ICCV.2011.6126280 – ident: 1633_CR40 doi: 10.5244/C.29.6 – ident: 1633_CR150 doi: 10.1109/CVPR.2019.00613 – ident: 1633_CR24 – volume: 21 start-page: 4695 issue: 12 year: 2012 ident: 1633_CR80 publication-title: IEEE Transactions on Image Processing doi: 10.1109/TIP.2012.2214050 – ident: 1633_CR100 doi: 10.1109/CVPR42600.2020.00340 – ident: 1633_CR44 doi: 10.1109/CVPR.2017.632 – ident: 1633_CR161 – ident: 1633_CR71 doi: 10.1109/ICCV.2015.425 – volume: 30 
start-page: 345 issue: 11 year: 2021 ident: 1633_CR33 publication-title: IEEE Transactions on Image Processing doi: 10.1109/TIP.2020.3036745 – ident: 1633_CR110 doi: 10.1007/978-3-319-46475-6_35 – ident: 1633_CR66 – ident: 1633_CR76 doi: 10.1109/ICCV.2001.937655 – ident: 1633_CR169 doi: 10.1109/ICCV.2011.6126278 – volume: 20 start-page: 3350 issue: 12 year: 2011 ident: 1633_CR83 publication-title: IEEE Transactions on Image Processing doi: 10.1109/TIP.2011.2147325 – ident: 1633_CR87 doi: 10.1109/CVPR.2019.00829 – ident: 1633_CR16 doi: 10.1109/CVPR.2011.5995568 – ident: 1633_CR97 – ident: 1633_CR124 doi: 10.1109/CVPR.2018.00931 – volume: 13 start-page: 600 issue: 4 year: 2004 ident: 1633_CR136 publication-title: IEEE Transactions on Image Processing doi: 10.1109/TIP.2003.819861 – ident: 1633_CR64 doi: 10.1109/CVPR.2009.5206815 – volume: 127 start-page: 801 issue: 6–7 year: 2019 ident: 1633_CR20 publication-title: International Journal of Computer Vision doi: 10.1007/s11263-018-1138-7 – volume: 29 start-page: 5273 year: 2020 ident: 1633_CR65 publication-title: IEEE Transactions on Image Processing doi: 10.1109/TIP.2020.2980173 – volume: 27 start-page: 194 issue: 1 year: 2017 ident: 1633_CR146 publication-title: IEEE Transactions on Image Processing doi: 10.1109/TIP.2017.2753658 – ident: 1633_CR49 doi: 10.1007/978-3-319-46475-6_43 – ident: 1633_CR67 doi: 10.1109/ICASSP.2019.8682542 – ident: 1633_CR35 doi: 10.1109/ICCV.2017.322 – ident: 1633_CR117 doi: 10.1109/CVPR.2014.379 – ident: 1633_CR142 doi: 10.1007/978-3-642-15549-9_12 – ident: 1633_CR60 doi: 10.1109/ICCV.2019.00897 – ident: 1633_CR48 doi: 10.1109/CVPR.2017.408 – ident: 1633_CR152 doi: 10.1109/CVPR.2018.00267 – ident: 1633_CR79 doi: 10.1109/ICASSP.1993.319807 – ident: 1633_CR123 doi: 10.1109/CVPR42600.2020.00366 – ident: 1633_CR164 doi: 10.1007/978-3-030-58539-6_12 – ident: 1633_CR4 doi: 10.5244/C.31.113 – ident: 1633_CR58 doi: 10.1109/ICCV.2017.491 – ident: 1633_CR78 doi: 10.1007/978-3-319-10578-9_51 – ident: 
1633_CR101 – volume: 28 start-page: 965 issue: 7 year: 1995 ident: 1633_CR132 publication-title: Pattern Recognition doi: 10.1016/0031-3203(94)00146-D – ident: 1633_CR137 doi: 10.1109/ACSSC.2003.1292216 – ident: 1633_CR167 doi: 10.1109/ICCV.2017.244 – ident: 1633_CR126 – ident: 1633_CR94 doi: 10.1109/CVPR.2014.371 – ident: 1633_CR114 doi: 10.1109/CVPR.2018.00862 – ident: 1633_CR59 doi: 10.1109/CVPR.2018.00854 – ident: 1633_CR96 doi: 10.1007/978-3-030-58539-6_20 – ident: 1633_CR8 doi: 10.1109/CVPR.2018.00652 – volume: 110 start-page: 185 issue: 2 year: 2014 ident: 1633_CR138 publication-title: International Journal of Computer Vision doi: 10.1007/s11263-014-0727-3 – ident: 1633_CR22 – ident: 1633_CR37 doi: 10.1109/ICCV.2011.6126276 – ident: 1633_CR12 doi: 10.1109/CVPR.2010.5539954 – ident: 1633_CR52 doi: 10.1109/CVPR42600.2020.00585 – ident: 1633_CR31 doi: 10.1109/CVPR.2017.405 – ident: 1633_CR11 doi: 10.1007/978-3-319-46487-9_14 – ident: 1633_CR129 – ident: 1633_CR51 doi: 10.1109/CVPR.2007.383214 – ident: 1633_CR56 doi: 10.1007/978-3-642-33786-4_3 – ident: 1633_CR160 doi: 10.1109/ICCVW.2009.5457520 – volume: 1 start-page: 2 issue: 1 year: 2016 ident: 1633_CR39 publication-title: Quality and User Experience doi: 10.1007/s41233-016-0002-1 – ident: 1633_CR53 – ident: 1633_CR84 doi: 10.1109/WACV.2019.00208 – ident: 1633_CR72 doi: 10.1109/CVPR.2019.01047 – ident: 1633_CR157 doi: 10.1109/CVPR.2018.00344 – ident: 1633_CR99 doi: 10.1109/CVPR.2019.00699 – ident: 1633_CR118 doi: 10.1109/CVPRW.2019.00267 – ident: 1633_CR149 doi: 10.1109/CVPR.2013.132 – volume: 9 start-page: 81 issue: 3 year: 2002 ident: 1633_CR135 publication-title: IEEE Signal Processing Letters doi: 10.1109/97.995823 – volume: 98 start-page: 168 issue: 2 year: 2012 ident: 1633_CR139 publication-title: International Journal of Computer Vision doi: 10.1007/s11263-011-0502-7 – ident: 1633_CR128 doi: 10.1109/ICCV.2017.352 – ident: 1633_CR108 doi: 10.1109/CVPR.2013.142 – ident: 1633_CR2 doi: 
10.1007/978-3-030-01237-3_45 – ident: 1633_CR25 – ident: 1633_CR163 doi: 10.1109/CVPR.2013.85 – ident: 1633_CR43 doi: 10.1109/ICCV.2017.435 – ident: 1633_CR89 doi: 10.1109/CVPR42600.2020.00368 – ident: 1633_CR6 doi: 10.1109/ICCV.2017.356 – ident: 1633_CR28 doi: 10.1109/CVPR.2019.00397 – ident: 1633_CR90 doi: 10.1109/ICCV.2017.37 – ident: 1633_CR107 doi: 10.1109/CVPR.2013.84 – ident: 1633_CR120 – ident: 1633_CR42 doi: 10.1109/ICCV.2013.392 – ident: 1633_CR29 doi: 10.1109/CVPR.2016.204 – ident: 1633_CR131 doi: 10.1109/CVPR.2018.00853 – ident: 1633_CR98 – ident: 1633_CR125 doi: 10.1109/CVPR.2015.7298677 – ident: 1633_CR55 doi: 10.1007/978-3-030-01219-9_7 – volume: 38 start-page: 1439 issue: 7 year: 2015 ident: 1633_CR109 publication-title: IEEE Transactions on Pattern Analysis and Machine Intelligence doi: 10.1109/TPAMI.2015.2481418 – ident: 1633_CR151 doi: 10.1109/CVPR.2017.737 – ident: 1633_CR73 – volume: 17 start-page: 513 issue: 5 year: 2010 ident: 1633_CR82 publication-title: IEEE Signal Processing Letters doi: 10.1109/LSP.2010.2043888 – ident: 1633_CR159 doi: 10.1109/CVPR.2018.00068 – ident: 1633_CR165 doi: 10.1109/ICCV.2019.00257 – volume: 57 start-page: 2467 issue: 7 year: 2009 ident: 1633_CR13 publication-title: IEEE Transactions on Signal Processing doi: 10.1109/TSP.2009.2018358 – ident: 1633_CR103 doi: 10.1109/ICCV.2019.00948 – ident: 1633_CR10 doi: 10.1109/CVPR.2019.00700 – ident: 1633_CR17 doi: 10.1007/978-3-642-33715-4_38 – ident: 1633_CR145 doi: 10.1109/CVPR.2013.147 |
StartPage | 2103 |
SubjectTerms | Artificial Intelligence; Artificial neural networks; Blurring; Business performance management; Cameras; Computer Imaging; Computer Science; Computer vision; Datasets; Deep learning; Image Processing and Computer Vision; Literature reviews; Machine vision; Neural networks; Pattern Recognition; Pattern Recognition and Graphics; Performance measurement; Surveys; Taxonomy; Vision |
Title | Deep Image Deblurring: A Survey |
URI | https://link.springer.com/article/10.1007/s11263-022-01633-5 https://www.proquest.com/docview/2701324958 |
Volume | 130 |