DeepID-Net: Deformable deep convolutional neural networks for object detection
Published in: 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2403-2412
Format: Conference Proceeding; Journal Article
Language: English
Published: IEEE, 01.06.2015
ISSN: 1063-6919
DOI: 10.1109/CVPR.2015.7298854
Summary: In this paper, we propose deformable deep convolutional neural networks for generic object detection. This new deep learning object detection framework has innovations in multiple aspects. In the proposed new deep architecture, a new deformation constrained pooling (def-pooling) layer models the deformation of object parts with geometric constraints and penalties. A new pre-training strategy is proposed to learn feature representations that are better suited to the object detection task and generalize well. By varying the network structures and training strategies, and by adding and removing key components in the detection pipeline, a set of models with large diversity is obtained, which significantly improves the effectiveness of model averaging. The proposed approach improves the mean average precision obtained by RCNN [14], which was the state of the art, from 31% to 50.3% on the ILSVRC2014 detection test set. It also outperforms the winner of ILSVRC2014, GoogLeNet, by 6.1%. A detailed component-wise analysis is also provided through extensive experimental evaluation, giving a global view of the deep learning object detection pipeline.
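As a rough illustration of the def-pooling idea summarized above, the following minimal NumPy sketch implements a deformation-penalized max pooling: each output cell takes the maximum part-detection score within a local window, discounted by a penalty that grows with the displacement from the window's anchor. This is a sketch under assumptions, not the paper's layer: the function name and parameters are illustrative, and the fixed quadratic penalty stands in for the paper's learned penalty coefficients over predefined deformation bases.

    import numpy as np

    def def_pooling(score_map, stride=2, radius=2, penalty_weight=0.1):
        # Deformation-constrained pooling (illustrative sketch): for each
        # output cell, take the max over a local neighborhood of part
        # scores, minus a quadratic deformation penalty on the
        # displacement from the cell's anchor position.
        H, W = score_map.shape
        out_h, out_w = H // stride, W // stride
        out = np.full((out_h, out_w), -np.inf)
        for i in range(out_h):
            for j in range(out_w):
                ci, cj = i * stride, j * stride  # anchor in the input map
                for di in range(-radius, radius + 1):
                    for dj in range(-radius, radius + 1):
                        y, x = ci + di, cj + dj
                        if 0 <= y < H and 0 <= x < W:
                            penalty = penalty_weight * (di * di + dj * dj)
                            out[i, j] = max(out[i, j], score_map[y, x] - penalty)
        return out

    # Example: a part response at (3, 3) is picked up by the output cell
    # anchored at (2, 2), discounted by the deformation penalty
    # 0.1 * (1 + 1) = 0.2, so the pooled value is 0.8.
    scores = np.zeros((8, 8))
    scores[3, 3] = 1.0
    pooled = def_pooling(scores, stride=2, radius=2, penalty_weight=0.1)

The penalty term is what distinguishes this from plain max pooling: a strong part response far from its expected position is penalized, so the layer prefers parts that deform only slightly from their anchors.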
Bibliography: | ObjectType-Article-2 SourceType-Scholarly Journals-1 ObjectType-Conference-1 ObjectType-Feature-3 content type line 23 SourceType-Conference Papers & Proceedings-2 |
ISSN: | 1063-6919 1063-6919 |
DOI: | 10.1109/CVPR.2015.7298854 |