Research on Multimodal Image Fusion Target Detection Algorithm Based on Generative Adversarial Network

Bibliographic Details
Published in: Wireless Communications and Mobile Computing, Vol. 2022, No. 1
Main Authors: Wu, Zhaoli; Wu, Xuehan; Zhu, Yuancai; Zhai, Jingxuan; Yang, Haibo; Yang, Zhiwei; Wang, Chao; Sun, Jilong
Format: Journal Article
Language: English
Published: Oxford: Hindawi; John Wiley & Sons, Inc., 2022
ISSN: 1530-8669, 1530-8677
DOI: 10.1155/2022/1740909

Summary: In this paper, we propose a target detection algorithm based on adversarial discriminative domain adaptation for infrared and visible image fusion, using unsupervised learning to reduce the differences between multimodal image information. First, the paper improves a fusion model based on the generative adversarial network, using a dual-discriminator generative adversarial fusion algorithm to generate high-quality infrared-visible fused images; it then assembles the infrared and visible images into a triplet dataset and applies a triplet angular loss function for transfer learning. Finally, the fused images are fed to the Faster R-CNN object detection algorithm, where a new non-maximum suppression algorithm improves Faster R-CNN and further raises detection accuracy. Experiments show that the method achieves mutual complementation of multimodal feature information, compensates for missing information in single-modality scenes, and achieves good detection results for both modalities (infrared and visible light).
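
The summary outlines a three-stage pipeline: dual-discriminator GAN fusion, triplet-loss transfer learning on the fused data, and Faster R-CNN detection with a modified non-maximum suppression step. The abstract does not give the exact formulations, so the following is a minimal illustrative sketch, assuming the triplet angular loss behaves like a standard triplet margin loss on embedding vectors and that the "new non-maximum suppression algorithm" decays overlapping scores in the spirit of Soft-NMS; both are assumptions, and all function names here are hypothetical.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    # Standard triplet margin loss on embedding vectors: pull the anchor
    # toward the positive sample and push it away from the negative one.
    # (Illustrative stand-in for the paper's triplet angular loss.)
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

def iou(a, b):
    # Intersection-over-union of two [x1, y1, x2, y2] boxes.
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / (union + 1e-9)

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    # Soft-NMS-style suppression: instead of deleting every box that
    # overlaps a kept detection, decay its score with a Gaussian penalty
    # so heavily overlapped boxes fade out gradually.
    scores = scores.astype(float).copy()
    remaining = list(range(len(scores)))
    keep = []
    while remaining:
        best = max(remaining, key=lambda i: scores[i])
        if scores[best] < score_thresh:
            break
        keep.append(best)
        remaining.remove(best)
        for i in remaining:
            scores[i] *= np.exp(-iou(boxes[best], boxes[i]) ** 2 / sigma)
    return keep  # indices of detections that survive suppression

# Toy usage: two heavily overlapping boxes and one separate box.
boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]], float)
scores = np.array([0.9, 0.8, 0.7])
print(soft_nms(boxes, scores))  # the overlapped box is decayed, not dropped
```

Classical NMS discards any box whose overlap with a kept detection exceeds a hard threshold; a score-decay scheme like the one sketched above instead down-weights such boxes, which tends to retain true positives in crowded scenes and matches the abstract's claim of improved detection accuracy.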