Target Classification Using the Deep Convolutional Networks for SAR Images

Bibliographic Details
Published in: IEEE Transactions on Geoscience and Remote Sensing, Vol. 54, No. 8, pp. 4806-4817
Main Authors: Chen, Sizhe; Wang, Haipeng; Xu, Feng; Jin, Ya-Qiu
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.08.2016
ISSN: 0196-2892, 1558-0644
DOI: 10.1109/TGRS.2016.2551720

More Information
Summary: The algorithm of synthetic aperture radar automatic target recognition (SAR-ATR) generally consists of the extraction of a set of features that transform the raw input into a representation, followed by a trainable classifier. The feature extractor is often hand-designed with domain knowledge and can significantly impact the classification accuracy. By automatically learning hierarchies of features from massive training data, deep convolutional networks (ConvNets) have recently obtained state-of-the-art results in many computer vision and speech recognition tasks. However, when ConvNets are applied directly to SAR-ATR, they suffer from severe overfitting because of the limited number of training images. To reduce the number of free parameters, we present new all-convolutional networks (A-ConvNets), which consist only of sparsely connected layers, without any fully connected layers. Experimental results on the Moving and Stationary Target Acquisition and Recognition (MSTAR) benchmark data set illustrate that A-ConvNets can achieve an average accuracy of 99% in the classification of ten classes of targets and are significantly superior to traditional ConvNets in classifying target configuration and version variants.
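The architectural idea summarized above, replacing fully connected layers with convolutional ones so the classifier has far fewer free parameters, can be sketched as follows. This is a minimal illustration and not the authors' released implementation: the 10-class setup, the single-channel 88 x 88 input chips, and all layer widths and kernel sizes below are assumptions chosen for the example, not the exact configuration reported in the paper.

# Sketch of an all-convolutional classifier in the spirit of A-ConvNets:
# every layer is convolutional; the final conv layer emits one score per
# class, so no fully connected layers (and their many parameters) are used.
import torch
import torch.nn as nn

class AllConvNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                        # 88 -> 84 -> 42
            nn.Conv2d(16, 32, kernel_size=5), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                        # 42 -> 38 -> 19
            nn.Conv2d(32, 64, kernel_size=6), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                        # 19 -> 14 -> 7
            nn.Conv2d(64, 128, kernel_size=5), nn.ReLU(inplace=True),
            nn.Dropout2d(0.5),                      # regularization for small data sets
            nn.Conv2d(128, num_classes, kernel_size=3),  # 3 -> 1: per-class score maps
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, 88, 88) single-channel SAR image chips
        scores = self.features(x)           # (batch, num_classes, 1, 1)
        return scores.flatten(1)            # class logits, (batch, num_classes)

if __name__ == "__main__":
    model = AllConvNet()
    logits = model(torch.randn(4, 1, 88, 88))
    print(logits.shape)                     # torch.Size([4, 10])

Because the output of the last convolution collapses to a 1 x 1 map per class, the network produces class logits directly from convolutions alone, which is what removes the fully connected layers and their parameters.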