Recognizing realistic actions from videos "in the wild"
| Published in | 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 1996-2003 |
|---|---|
| Main Authors | Jingen Liu, Jiebo Luo, Mubarak Shah |
| Format | Conference Proceeding |
| Language | English |
| Published | IEEE, 01.06.2009 |
| ISBN | 1424439922, 9781424439928 |
| ISSN | 1063-6919 |
| DOI | 10.1109/CVPR.2009.5206744 |
| Summary: | In this paper, we present a systematic framework for recognizing realistic actions from videos "in the wild". Such unconstrained videos are abundant in personal collections as well as on the Web. Recognizing actions from such videos has not been addressed extensively, primarily due to the tremendous variations that result from camera motion, background clutter, and changes in object appearance and scale. The main challenge is how to extract reliable and informative features from unconstrained videos. We extract both motion and static features from the videos. Since the raw features of both types are dense yet noisy, we propose strategies to prune them. We use motion statistics to acquire stable motion features and clean static features. Furthermore, PageRank is used to mine the most informative static features. To further construct compact yet discriminative visual vocabularies, a divisive information-theoretic algorithm is employed to group semantically related features. Finally, AdaBoost is chosen to integrate all the heterogeneous yet complementary features for recognition. We have tested the framework on the KTH dataset and on our own dataset of 11 action categories collected from YouTube and personal videos, and have obtained impressive results for action recognition and action localization. |
|---|---|
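
The abstract names PageRank as the tool for mining informative static features. The sketch below illustrates the general idea under assumptions not drawn from the paper: static descriptors are linked in a sparsified similarity graph, and plain power-iteration PageRank scores each descriptor, so descriptors that are similar to many others rank higher. All names (`pagerank`, `descriptors`, the percentile thresholding) are hypothetical stand-ins, not the authors' implementation.

```python
import numpy as np

def pagerank(adjacency, damping=0.85, tol=1e-8, max_iter=200):
    """Power-iteration PageRank over a feature-similarity graph.

    adjacency[i, j] > 0 means descriptor i is similar to descriptor j.
    Returns one importance score per descriptor; well-connected
    descriptors (those similar to many others) score higher.
    """
    n = adjacency.shape[0]
    # Row-normalize into a transition matrix; all-zero rows become uniform.
    row_sums = adjacency.sum(axis=1, keepdims=True)
    transition = np.where(row_sums > 0,
                          adjacency / np.maximum(row_sums, 1e-12),
                          np.full((n, n), 1.0 / n))
    scores = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        updated = (1.0 - damping) / n + damping * transition.T @ scores
        if np.abs(updated - scores).sum() < tol:
            return updated
        scores = updated
    return scores

# Hypothetical usage: score stand-in descriptors and keep the top half.
rng = np.random.default_rng(0)
descriptors = rng.normal(size=(50, 128))       # stand-in static descriptors
sim = descriptors @ descriptors.T              # raw pairwise similarity
np.fill_diagonal(sim, 0.0)
sim = np.where(sim > np.percentile(sim, 90), sim, 0.0)  # sparsify the graph
keep = np.argsort(pagerank(sim))[::-1][:25]    # indices of the top 25 features
```

Sparsifying the graph matters here: on a dense, nearly uniform similarity matrix, PageRank degenerates toward the uniform distribution and the resulting ranking carries little signal.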
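
The abstract's final step uses AdaBoost to integrate the heterogeneous feature channels. A minimal sketch, assuming per-video bag-of-words histograms and simple concatenation of the two channels; the paper's exact boosting setup may differ, and every array, size, and name below is invented for illustration:

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

# Invented stand-ins: bag-of-words histograms from two feature channels.
rng = np.random.default_rng(1)
motion_hist = rng.random((200, 100))    # 200 videos x 100 motion words
static_hist = rng.random((200, 150))    # 200 videos x 150 static words
labels = rng.integers(0, 11, size=200)  # 11 action categories, as in the paper

# Concatenate the channels and boost over decision stumps; AdaBoost
# reweights examples so later stumps focus on the videos that earlier
# stumps misclassified, letting the two channels complement each other.
features = np.hstack([motion_hist, static_hist])
clf = AdaBoostClassifier(n_estimators=100).fit(features, labels)
print(clf.score(features, labels))      # training accuracy on the toy data
```

Concatenation is only one reading of "integrate": another common scheme trains a classifier per channel and lets boosting weight the channel-level predictions instead of the raw histogram bins.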