LightenNet: A Convolutional Neural Network for weakly illuminated image enhancement
Published in | Pattern recognition letters Vol. 104; pp. 15 - 22 |
---|---|
Main Authors | Li, Chongyi; Guo, Jichang; Porikli, Fatih; Pang, Yanwei |
Format | Journal Article |
Language | English |
Published | Amsterdam: Elsevier B.V (Elsevier Science Ltd), 01.03.2018 |
Subjects | Weak illumination image enhancement; Low light image enhancement; Image degradation; CNNs |
ISSN | 0167-8655 (print); 1872-7344 (electronic) |
DOI | 10.1016/j.patrec.2018.01.010 |
Abstract | •We propose a trainable CNN for weakly illuminated image enhancement.
•We propose a Retinex model-based weakly illuminated image synthesis approach.
•The proposed method generalizes well to diverse weakly illuminated images.
Weak illumination or low-light image enhancement is needed as a pre-processing step in many computer vision tasks. Existing methods show limitations when they are used to enhance weakly illuminated images, especially images captured under diverse illumination circumstances. In this letter, we propose a trainable Convolutional Neural Network (CNN) for weakly illuminated image enhancement, namely LightenNet, which takes a weakly illuminated image as input and outputs its illumination map; the illumination map is subsequently used to obtain the enhanced image based on the Retinex model. The proposed method produces visually pleasing results without over- or under-enhanced regions. Qualitative and quantitative comparisons are conducted to evaluate the performance of the proposed method. The experimental results demonstrate that the proposed method achieves superior performance compared with existing methods. Additionally, we propose a new weakly illuminated image synthesis approach, which can be used as a guide for training weakly illuminated image enhancement networks and for full-reference image quality assessment. |
---|---|
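The abstract describes a two-part pipeline: a CNN predicts an illumination map for a weakly illuminated input, the Retinex model I = R · L then yields the enhanced image, and training data are synthesized by darkening well-exposed images with a Retinex-style illumination map. The sketch below illustrates only these two relations. It is a minimal sketch, assuming float images in [0, 1]; the function names, the global darkening factor, and the epsilon guard are illustrative assumptions, and the CNN that predicts the illumination map in the paper is not reproduced here.

```python
import numpy as np

def enhance_with_illumination(image, illumination, eps=1e-3):
    """Recover the enhanced image R from the Retinex model I = R * L,
    i.e. R = I / L, given an input image I and an illumination map L."""
    if illumination.ndim == 2 and image.ndim == 3:
        illumination = illumination[..., None]            # broadcast over colour channels
    reflectance = image / np.maximum(illumination, eps)   # guard against division by zero
    return np.clip(reflectance, 0.0, 1.0)

def synthesize_weakly_illuminated(clean_image, low=0.1, high=0.5, rng=None):
    """Retinex-style synthesis of a weakly illuminated training pair:
    darken a well-exposed image with a dim (here: global) illumination level."""
    rng = np.random.default_rng() if rng is None else rng
    level = rng.uniform(low, high)                    # random darkening level
    weak = np.clip(clean_image * level, 0.0, 1.0)     # I_weak = R * L_synthetic
    illumination_map = np.full(clean_image.shape[:2], level, dtype=clean_image.dtype)
    return weak, illumination_map
```

Passing a synthesized pair back through `enhance_with_illumination` with the true map recovers the clean image up to clipping, which is what makes such pairs usable as references for full-reference image quality assessment.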
Author | Li, Chongyi; Guo, Jichang; Porikli, Fatih; Pang, Yanwei |
Author_xml | – sequence: 1; givenname: Chongyi; surname: Li; fullname: Li, Chongyi; ORCID: 0000-0003-2609-2460; organization: School of Electrical and Information Engineering, Tianjin University, Weijing Road 92, Tianjin 300300, China
– sequence: 2; givenname: Jichang; surname: Guo; fullname: Guo, Jichang; email: jcguo@tju.edu.cn; organization: School of Electrical and Information Engineering, Tianjin University, Weijing Road 92, Tianjin 300300, China
– sequence: 3; givenname: Fatih; surname: Porikli; fullname: Porikli, Fatih; organization: Research School of Engineering, College of Engineering and Computer Science, Australian National University, Canberra, ACT 0200, Australia
– sequence: 4; givenname: Yanwei; surname: Pang; fullname: Pang, Yanwei; organization: School of Electrical and Information Engineering, Tianjin University, Weijing Road 92, Tianjin 300300, China |
ContentType | Journal Article |
Copyright | 2018 Elsevier B.V. Copyright Elsevier Science Ltd. Mar 1, 2018 |
DOI | 10.1016/j.patrec.2018.01.010 |
Discipline | Engineering; Computer Science |
EISSN | 1872-7344 |
EndPage | 22 |
ExternalDocumentID | 10_1016_j_patrec_2018_01_010 S0167865518300163 |
ISSN | 0167-8655 |
IsDoiOpenAccess | false |
IsOpenAccess | true |
IsPeerReviewed | true |
IsScholarly | true |
Keywords | Weak illumination image enhancement; Low light image enhancement; 41A10; 65D05; 65D17; Image degradation; 41A05; CNNs |
Language | English |
ORCID | 0000-0003-2609-2460 |
OpenAccessLink | http://hdl.handle.net/1885/139409 |
PageCount | 8 |
PublicationDate | 2018-03-01 |
PublicationPlace | Amsterdam |
PublicationTitle | Pattern recognition letters |
PublicationYear | 2018 |
Publisher | Elsevier B.V Elsevier Science Ltd |
StartPage | 15 |
SubjectTerms | Artificial neural networks; CNNs; Computer vision; Illumination; Image degradation; Image enhancement; Image processing systems; Image quality; Light; Low light image enhancement; Mathematical models; Neural networks; Qualitative research; Quality assessment; Quality control; Retinex (algorithm); Weak illumination image enhancement |
Title | LightenNet: A Convolutional Neural Network for weakly illuminated image enhancement |
URI | https://dx.doi.org/10.1016/j.patrec.2018.01.010 https://www.proquest.com/docview/2066670399 |
Volume | 104 |