Fingerphoto Deblurring Using Attention-guided Multi-stage GAN

Bibliographic Details
Published in: IEEE Access, Vol. 11, p. 1
Main Authors: Joshi, Amol S.; Dabouei, Ali; Dawson, Jeremy; Nasrabadi, Nasser M.
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.01.2023
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2023.3301467

Summary: Fingerphoto images acquired from mobile cameras, low-quality sensors, or crime scenes suffer from various acquisition distortions, which makes identity verification challenging for automated identification systems. A significant type of photometric distortion that notably reduces the quality of a fingerphoto is image blurring. This paper proposes a deep fingerphoto deblurring model to restore the ridge information degraded by blurring. As the core of our model, we utilize a conditional Generative Adversarial Network (cGAN) to learn the distribution of natural ridge patterns. We perform several modifications to enhance the quality of the fingerphotos reconstructed (deblurred) by our proposed model. First, we develop a multi-stage GAN to learn the ridge distribution in a coarse-to-fine framework. This framework enables the model to maintain the consistency of the ridge deblurring process across resolutions. Second, we propose a guided attention module that helps the generator focus mainly on blurred regions. Third, we incorporate a deep fingerphoto verifier as an auxiliary adaptive loss function to force the generator to preserve ID information during deblurring. Finally, we evaluate the effectiveness of the proposed model through extensive experiments on multiple public fingerphoto datasets as well as real-world blurred fingerphotos. In particular, our method achieves improvements of 5.2 dB in PSNR, 8.7% in AUC, and 7.6% in EER compared to a state-of-the-art deblurring method.
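To make two of the summarized components more concrete, the following is a minimal, illustrative PyTorch-style sketch (not the authors' implementation): a guided attention block that re-weights generator features toward blurred regions, and a composite generator objective combining adversarial, L1 reconstruction, and identity-preserving (verifier-embedding) terms. All module names, layer widths, and loss weights here are assumptions made for illustration only.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GuidedAttention(nn.Module):
    """Predict a spatial attention map so the generator focuses on blurred regions."""

    def __init__(self, channels):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Conv2d(channels, channels // 2, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 2, 1, kernel_size=1),
            nn.Sigmoid(),  # attention map in [0, 1], high where deblurring is needed
        )

    def forward(self, feat):
        attn = self.attn(feat)            # B x 1 x H x W
        return feat + feat * attn, attn   # residual re-weighting of the features

def generator_loss(d_fake_logits, deblurred, sharp, id_fake, id_real,
                   lambda_rec=100.0, lambda_id=1.0):
    """Composite objective: adversarial + L1 reconstruction + identity preservation.

    id_fake / id_real are embeddings of the deblurred and ground-truth sharp
    fingerphotos from a (frozen) fingerphoto verifier network; the loss weights
    are placeholders, not values taken from the paper.
    """
    adv = F.binary_cross_entropy_with_logits(
        d_fake_logits, torch.ones_like(d_fake_logits))          # fool the discriminator
    rec = F.l1_loss(deblurred, sharp)                            # pixel-level fidelity
    ident = 1.0 - F.cosine_similarity(id_fake, id_real).mean()  # keep ID embedding close
    return adv + lambda_rec * rec + lambda_id * ident

In a coarse-to-fine multi-stage setup as described in the summary, such an objective would typically be applied at each resolution stage, with earlier stages restoring coarse ridge structure and later stages refining fine ridge detail.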