Underwater image enhancement based on conditional generative adversarial network

Bibliographic Details
Published in: Signal Processing: Image Communication, Vol. 81, p. 115723
Main Authors: Yang, Miao; Hu, Ke; Du, Yixiang; Wei, Zhiqiang; Sheng, Zhibin; Hu, Jintong
Format: Journal Article
Language: English
Published: Amsterdam: Elsevier B.V., 01.02.2020
ISSN: 0923-5965, 1879-2677
DOI: 10.1016/j.image.2019.115723

More Information
Summary: Underwater images play an essential role in acquiring and understanding underwater information. High-quality underwater images can guarantee the reliability of underwater intelligent systems. Unfortunately, underwater images are characterized by low contrast, color casts, blurring, low light, and uneven illumination, which severely affect the perception and processing of underwater information. To improve the quality of acquired underwater images, numerous methods have been proposed, particularly with the emergence of deep learning technologies. However, the performance of underwater image enhancement methods remains unsatisfactory owing to the lack of sufficient training data and effective network structures. In this paper, we address this problem with a conditional generative adversarial network (cGAN), in which the clear underwater image is produced by a multi-scale generator. In addition, we employ a dual discriminator to capture local and global semantic information, which forces the results generated by the multi-scale generator to be realistic and natural. Experiments on real-world and synthetic underwater images demonstrate that the proposed method performs favorably against state-of-the-art underwater image enhancement methods.
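
The summary only outlines the architecture at a high level. The following is a minimal, hypothetical PyTorch sketch of that general idea: a conditional GAN whose generator fuses features extracted at several input scales, paired with two discriminators, one judging the whole image and one judging local crops. Layer counts, channel widths, the PatchGAN-style critics, and all names below are illustrative assumptions, not the authors' actual implementation.

# Hypothetical sketch of a cGAN with a multi-scale generator and dual
# (global/local) discriminators for underwater image enhancement.
# All architectural details are assumptions; the record does not specify them.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch, stride=1):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=stride, padding=1),
        nn.InstanceNorm2d(out_ch),
        nn.LeakyReLU(0.2, inplace=True),
    )


class MultiScaleGenerator(nn.Module):
    """Encodes the degraded image at several scales and fuses the features."""

    def __init__(self, base_ch=32):
        super().__init__()
        # Parallel branches operating on full, 1/2 and 1/4 resolution inputs.
        self.down2 = nn.AvgPool2d(2)
        self.down4 = nn.AvgPool2d(4)
        self.branch1 = conv_block(3, base_ch)
        self.branch2 = conv_block(3, base_ch)
        self.branch4 = conv_block(3, base_ch)
        self.fuse = nn.Sequential(
            conv_block(base_ch * 3, base_ch * 2),
            conv_block(base_ch * 2, base_ch),
            nn.Conv2d(base_ch, 3, kernel_size=3, padding=1),
            nn.Tanh(),
        )

    def forward(self, x):
        h, w = x.shape[-2:]
        f1 = self.branch1(x)
        f2 = nn.functional.interpolate(self.branch2(self.down2(x)), size=(h, w))
        f4 = nn.functional.interpolate(self.branch4(self.down4(x)), size=(h, w))
        # Residual-style output: predict a correction on top of the input.
        return torch.clamp(x + self.fuse(torch.cat([f1, f2, f4], dim=1)), -1, 1)


class Discriminator(nn.Module):
    """PatchGAN-style critic conditioned on the raw input. The 'global' copy
    sees full images, the 'local' copy sees crops, so together they cover
    both global appearance and local detail."""

    def __init__(self, base_ch=64):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(6, base_ch, stride=2),   # input: raw + enhanced, concatenated
            conv_block(base_ch, base_ch * 2, stride=2),
            conv_block(base_ch * 2, base_ch * 4, stride=2),
            nn.Conv2d(base_ch * 4, 1, kernel_size=3, padding=1),
        )

    def forward(self, raw, enhanced):
        return self.net(torch.cat([raw, enhanced], dim=1))


# Toy forward pass: global critic on full images, local critic on crops.
if __name__ == "__main__":
    g = MultiScaleGenerator()
    d_global, d_local = Discriminator(), Discriminator()
    raw = torch.rand(1, 3, 256, 256) * 2 - 1
    fake = g(raw)
    crop = lambda t: t[..., 64:192, 64:192]   # central 128x128 patch
    print(d_global(raw, fake).shape, d_local(crop(raw), crop(fake)).shape)

In such a setup the generator would be trained against both critics jointly (plus a pixel-level reconstruction term), which is one common way to make enhanced images look plausible both overall and in fine detail.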