Robust Pedestrian Tracking and Recognition from FLIR Video: A Unified Approach via Sparse Coding

Bibliographic Details
Published in: Sensors (Basel, Switzerland), Vol. 14, No. 6, pp. 11245-11259
Main Authors: Li, Xin; Guo, Rui; Chen, Chao
Format: Journal Article
Language: English
Published: Switzerland: MDPI AG, 24.06.2014
ISSN: 1424-8220
DOI: 10.3390/s140611245

Summary: Sparse coding is an emerging method that has been successfully applied to both robust object tracking and recognition in the vision literature. In this paper, we propose a sparse coding-based approach to joint object tracking-and-recognition and explore its potential in the analysis of forward-looking infrared (FLIR) video to support nighttime machine vision systems. A key technical contribution of this work is to unify existing sparse coding-based approaches to tracking and recognition under the same framework, so that they can benefit from each other in a closed loop. On the one hand, tracking the same object through temporal frames allows us to achieve improved recognition performance through dynamic updating of the template/dictionary and the combination of multiple recognition results; on the other hand, the recognition of individual objects facilitates the tracking of multiple objects (i.e., walking pedestrians), especially in the presence of occlusion within a crowded environment. We report experimental results on both the CASIA Pedestrian Database and our own collected FLIR video database to demonstrate the effectiveness of the proposed joint tracking-and-recognition approach.
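
For context on the core machinery described in the abstract, the sketch below is a generic illustration (not the authors' code) of sparse coding against a dictionary of class-labeled templates, with recognition by smallest class-wise reconstruction residual. The ISTA solver, the function names, and the regularization weight lam are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def soft_threshold(v, t):
    # Element-wise soft-thresholding operator used by ISTA.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_code(D, y, lam=0.1, n_iter=200):
    # Approximately solve min_x 0.5*||y - D x||_2^2 + lam*||x||_1 via ISTA.
    # D: (d, n) dictionary whose columns are L2-normalized, vectorized templates.
    # y: (d,) vectorized candidate image patch (e.g., a tracked FLIR window).
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the smooth part
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ x - y)
        x = soft_threshold(x - grad / L, lam / L)
    return x

def classify_by_residual(D, labels, y, lam=0.1):
    # Sparse representation-based classification: assign y to the class whose
    # templates reconstruct it with the smallest residual.
    x = sparse_code(D, y, lam)
    residuals = {}
    for c in np.unique(labels):
        mask = (labels == c)
        residuals[c] = np.linalg.norm(y - D[:, mask] @ x[mask])
    best_class = min(residuals, key=residuals.get)
    return best_class, x

if __name__ == "__main__":
    # Toy usage with random templates for two hypothetical pedestrian identities.
    rng = np.random.default_rng(0)
    D = rng.standard_normal((256, 20))
    D /= np.linalg.norm(D, axis=0, keepdims=True)
    labels = np.repeat([0, 1], 10)
    y = D[:, 3] + 0.05 * rng.standard_normal(256)   # noisy copy of a class-0 template
    print(classify_by_residual(D, labels, y)[0])
```

In a tracking loop of the kind the abstract describes, such a dictionary would be refreshed over time (e.g., replacing stale templates with well-reconstructed patches from recent frames) and per-frame decisions would be aggregated across the track; the snippet above only shows the single-frame coding and residual-based recognition step.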