It GAN Do Better: GAN-Based Detection of Objects on Images With Varying Quality

Bibliographic Details
Published in: IEEE Transactions on Image Processing, Vol. 30, pp. 9220-9230
Main Authors: Prakash, Charan D.; Karam, Lina J.
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2021
ISSN: 1057-7149
EISSN: 1941-0042
DOI: 10.1109/TIP.2021.3124155


More Information
Summary: In this paper, we propose a novel generative framework that uses Generative Adversarial Networks (GANs) to generate features that provide robustness for object detection on reduced-quality images. The proposed GAN-based Detection of Objects (GAN-DO) framework is not restricted to any particular architecture and can be generalized to several deep neural network (DNN) based architectures. The resulting deep neural network retains the exact architecture of the selected baseline model, without increasing the model's parameter count or inference time. We first evaluate the effect of image quality on both object classification and object bounding-box regression. We then test the models resulting from our proposed GAN-DO framework, using two state-of-the-art object detection architectures as the baseline models. We also evaluate the effect of the number of re-trained parameters in the generator of GAN-DO on the accuracy of the final trained model. Performance results on object detection datasets establish that GAN-DO provides improved robustness to varying image quality and a higher mAP than existing approaches.
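The summary's central claim is that the GAN-DO generator shares the baseline detector's exact architecture, so adversarial re-training changes only parameter values, never parameter count or inference cost, and only a chosen subset of parameters is updated. A minimal structural sketch of that idea is below; all class and function names are illustrative stand-ins, not code from the paper:

```python
class BaselineDetector:
    """Toy stand-in for a DNN-based detector (backbone + heads)."""
    def __init__(self):
        # toy "parameters": one scalar weight per named layer
        self.params = {"backbone": 1.0, "cls_head": 1.0, "box_head": 1.0}

    def forward(self, x):
        # toy forward pass: scale the input by every layer weight
        y = x
        for w in self.params.values():
            y *= w
        return y

def make_gan_do_generator(baseline, retrained_layers):
    """Build the GAN-DO-style generator: an exact architectural copy of the
    baseline. Only the layers named in `retrained_layers` would be updated
    during adversarial training; the rest stay frozen at baseline values."""
    gen = BaselineDetector()
    gen.params = dict(baseline.params)  # same architecture, same init
    trainable = {name: name in retrained_layers for name in gen.params}
    return gen, trainable

baseline = BaselineDetector()
gen, trainable = make_gan_do_generator(baseline, {"backbone"})

# same parameter count -> unchanged model complexity and inference cost
assert len(gen.params) == len(baseline.params)
print(trainable)  # which parameters adversarial training would update
```

Because the generator is a copy rather than an add-on module, deployment after training is just a weight swap: the serving code that ran the baseline runs the GAN-DO model unchanged.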