Deep Factorized Multi-view Hashing for Image Retrieval

Bibliographic Details
Published in: Multimedia Technology and Enhanced Learning, Vol. 446, pp. 628-643
Main Authors: Zhu, Chenyang; He, Wenjue; Zhang, Zheng
Format: Book Chapter
Language: English
Published: Springer Nature Switzerland, 2022
Series: Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering
ISBN: 3031181220; 9783031181221
ISSN: 1867-8211; 1867-822X
DOI: 10.1007/978-3-031-18123-8_49


More Information
Summary: Multi-view hashing has attracted much attention due to its computational efficiency and low memory overhead when measuring similarity between instances. However, a common drawback of existing multi-view hashing methods is their inability to fully explore the underlying correlations between different views, which prevents them from producing more discriminative hash codes. In this work, we propose the principled Deep Factorized Multi-view Hashing (DFMH) framework, comprising interpretable robust representation learning, multi-view fusion learning, and flexible semantic feature learning, to address the challenging multi-view hashing problem. Specifically, instead of directly projecting the features into a common representation space, we construct an adaptively weighted deep factorized structure to preserve the heterogeneity between different views. Furthermore, the visual space and the semantic space are learned interactively to form a reliable Hamming space. In particular, the flexible semantic representation is obtained by regression from the semantic labels. Importantly, a well-designed learning strategy is developed to optimize the objective function efficiently. DFMH and competing methods are evaluated on benchmark datasets to validate the efficiency and effectiveness of the proposed method. The source code of this paper is released at: https://github.com/chenyangzhu1/DFMH.
Bibliography: C. Zhu and W. He are co-first authors. Supported by National Natural Science Foundation of China (62002085).
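The summary above does not spell out the optimization, so the following is only a minimal, hypothetical sketch of a generic factorized multi-view hashing objective in the spirit it describes: each view is factorized against a shared latent representation, the view weights are updated adaptively, the semantic labels are regressed onto the shared representation, and the binary codes are obtained by taking its sign. All names and parameters here (dfmh_sketch, views, labels, code_len, gamma) are illustrative assumptions, not the released DFMH implementation; the authors' code is at the GitHub link above.

import numpy as np

def dfmh_sketch(views, labels, code_len=32, iters=20, gamma=5.0, eps=1e-6):
    # views: list of arrays X_v of shape (d_v, n); labels: array Y of shape (c, n).
    # Assumed objective: sum_v alpha_v * ||X_v - U_v H||^2 + gamma * ||Y - W H||^2,
    # with adaptive view weights alpha_v; binary codes are sign(H).
    n = views[0].shape[1]
    m = len(views)
    rng = np.random.default_rng(0)
    H = rng.standard_normal((code_len, n))        # shared latent representation
    alpha = np.full(m, 1.0 / m)                   # adaptive view weights
    I = np.eye(code_len)
    for _ in range(iters):
        G = np.linalg.inv(H @ H.T + eps * I)
        U = [X @ H.T @ G for X in views]          # per-view bases (least squares)
        W = labels @ H.T @ G                      # label-regression matrix
        A = sum(a * (Uv.T @ Uv) for a, Uv in zip(alpha, U)) + gamma * (W.T @ W) + eps * I
        B = sum(a * (Uv.T @ Xv) for a, Uv, Xv in zip(alpha, U, views)) + gamma * (W.T @ labels)
        H = np.linalg.solve(A, B)                 # closed-form update of the shared code
        errs = np.array([np.linalg.norm(Xv - Uv @ H) for Xv, Uv in zip(views, U)])
        alpha = 1.0 / (errs + 1e-12)
        alpha /= alpha.sum()                      # weights favor better-reconstructed views
    return np.sign(H)                             # one binary code column per sample

As a usage sketch, dfmh_sketch([X1, X2], Y, code_len=64) would return a 64 x n matrix of +/-1 codes; the actual DFMH formulation, its deep factorized structure, and its learning strategy should be taken from the paper and the released code.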