Pre-trained VGGNet Architecture for Remote-Sensing Image Scene Classification

Bibliographic Details
Published in: 2018 24th International Conference on Pattern Recognition (ICPR), pp. 1622 - 1627
Main Authors: Muhammad, Usman; Wang, Weiqiang; Chattha, Shahbaz Pervaiz; Ali, Sajid
Format: Conference Proceeding
Language: English
Published: IEEE, 01.08.2018
DOI: 10.1109/ICPR.2018.8545591

More Information
Summary: The visual geometry group network (VGGNet) is widely used for image classification and has proven to be a very effective method. Most existing approaches use features of just one type, and traditional fusion methods generally combine multiple hand-crafted features. However, exploiting the benefits of multilayer features remains a significant challenge in the remote-sensing domain. To address this challenge, we present a simple yet powerful framework based on canonical correlation analysis and a 4-layer SVM classifier. Specifically, the pretrained VGGNet is employed as a deep feature extractor to obtain mid-level and deep features for remote-sensing scene images. We then choose two convolutional (mid-level) and two fully connected layers produced by VGGNet, where each layer is treated as a separate feature descriptor. Next, canonical correlation analysis (CCA) is used as a feature fusion strategy to refine the extracted features and to fuse them with greater discriminative power. Finally, a support vector machine (SVM) classifier is applied to the resulting 4-layer representation of the scene images. Experiments on the UC Merced and WHU-RS datasets demonstrate that the proposed approach, even without data augmentation, fine-tuning, or a coding strategy, outperforms current state-of-the-art methods.
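
For a concrete picture of the pipeline, the sketch below reimplements the three stages the abstract describes (layer-wise feature extraction from a pretrained VGGNet, CCA-based fusion, SVM classification) in Python. It is a minimal illustration, not the authors' code: it assumes torchvision's VGG16 as the pretrained network, scikit-learn's CCA and SVC as the fusion and classification tools, global average pooling of the convolutional maps, and particular layer indices and layer pairings, none of which are specified in the abstract.

    import numpy as np
    import torch
    from torchvision import models
    from sklearn.cross_decomposition import CCA
    from sklearn.svm import SVC

    # Pretrained VGG16 in inference mode (stands in for "the pretrained VGGNet").
    vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).eval()

    def extract_layer_features(images):
        """images: float tensor of shape (N, 3, 224, 224), ImageNet-normalized.
        Returns four descriptors per image: two mid-level (convolutional) and
        two deep (fully connected). Layer indices are illustrative assumptions."""
        feats = []
        with torch.no_grad():
            x = images
            for i, layer in enumerate(vgg.features):
                x = layer(x)
                if i in (15, 22):                     # ReLU outputs of two conv blocks
                    feats.append(x.mean(dim=(2, 3)))  # global average pooling
            x = torch.flatten(vgg.avgpool(x), 1)
            for i, layer in enumerate(vgg.classifier):
                x = layer(x)
                if i in (0, 3):                       # the two 4096-d linear layers
                    feats.append(x)
        return [f.numpy() for f in feats]

    def cca_fuse(X, Y, n_components=32):
        """Fuse two feature sets by projecting each onto its canonical
        directions and concatenating the projections."""
        Xc, Yc = CCA(n_components=n_components).fit_transform(X, Y)
        return np.hstack([Xc, Yc])

    # Hypothetical usage: train_images is (N, 3, 224, 224), train_labels is (N,).
    conv_a, conv_b, fc_a, fc_b = extract_layer_features(train_images)
    fused = np.hstack([cca_fuse(conv_a, conv_b), cca_fuse(fc_a, fc_b)])
    clf = SVC(kernel="linear").fit(fused, train_labels)

Pairing the two convolutional descriptors with each other and the two fully connected descriptors with each other is only one plausible reading of the abstract; the paper itself may combine the four layers differently.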