Pose for Everything: Towards Category-Agnostic Pose Estimation

Bibliographic Details
Published in: Computer Vision - ECCV 2022, Vol. 13666, pp. 398-416
Main Authors: Xu, Lumin; Jin, Sheng; Zeng, Wang; Liu, Wentao; Qian, Chen; Ouyang, Wanli; Luo, Ping; Wang, Xiaogang
Format: Book Chapter
Language: English
Published: Springer Nature Switzerland, 2022
Series: Lecture Notes in Computer Science
ISBN: 9783031200670, 3031200675
ISSN: 0302-9743, 1611-3349
DOI: 10.1007/978-3-031-20068-7_23

More Information
Summary: Existing works on 2D pose estimation mainly focus on a single category, e.g., humans, animals, or vehicles. However, many application scenarios require detecting the poses/keypoints of unseen classes of objects. In this paper, we introduce the task of Category-Agnostic Pose Estimation (CAPE), which aims to create a pose estimation model capable of detecting the pose of any class of object given only a few samples with keypoint definitions. To achieve this goal, we formulate pose estimation as a keypoint matching problem and design a novel CAPE framework, termed POse Matching Network (POMNet). A transformer-based Keypoint Interaction Module (KIM) is proposed to capture both the interactions among different keypoints and the relationship between the support and query images. We also introduce the Multi-category Pose (MP-100) dataset, a 2D pose dataset covering 100 object categories with over 20K instances, designed for developing CAPE algorithms. Experiments show that our method outperforms other baseline approaches by a large margin. Code and data are available at https://github.com/luminxu/Pose-for-Everything.
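For intuition, the following is a minimal Python (PyTorch) sketch of the keypoint-matching formulation the abstract describes: support-defined keypoint features and flattened query-image features are processed jointly by a transformer so that keypoints interact with each other and with the query, then matched by dot-product similarity to produce per-keypoint heatmaps. This is not the authors' POMNet; the class name, feature dimensions, and the dot-product matching head are illustrative assumptions, and the actual implementation is in the linked repository.

    # Illustrative sketch only, NOT the authors' POMNet implementation.
    import torch
    import torch.nn as nn

    class KeypointMatchingSketch(nn.Module):
        """Match support-defined keypoints against query image features."""

        def __init__(self, dim: int = 256, heads: int = 8, num_layers: int = 3):
            super().__init__()
            # A plain transformer encoder stands in for the paper's Keypoint
            # Interaction Module: self-attention over the concatenated token
            # sequence lets keypoints attend to each other and to the query.
            layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                               batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)

        def forward(self, support_kpt_feats: torch.Tensor,
                    query_feats: torch.Tensor) -> torch.Tensor:
            # support_kpt_feats: (B, K, C), one feature per keypoint defined
            #                    on the support image(s).
            # query_feats:       (B, HW, C), flattened query feature map.
            B, K, C = support_kpt_feats.shape
            tokens = torch.cat([support_kpt_feats, query_feats], dim=1)
            tokens = self.encoder(tokens)          # joint interaction
            kpt_tokens = tokens[:, :K]             # refined keypoint queries
            img_tokens = tokens[:, K:]             # refined query tokens
            # Dot-product similarity -> one heatmap over query locations
            # per keypoint; the argmax over HW gives the predicted location.
            return torch.einsum('bkc,bnc->bkn', kpt_tokens, img_tokens)

    if __name__ == "__main__":
        model = KeypointMatchingSketch()
        support = torch.randn(2, 17, 256)      # e.g., 17 keypoints defined
        query = torch.randn(2, 64 * 64, 256)   # 64x64 query map, flattened
        print(model(support, query).shape)     # torch.Size([2, 17, 4096])

Because the keypoint definition comes entirely from the support features, the same model can, in principle, be queried with any number or type of keypoints, which is the property that makes the formulation category-agnostic.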
Bibliography: L. Xu and S. Jin contributed equally.
Supplementary Information: The online version contains supplementary material available at https://doi.org/10.1007/978-3-031-20068-7_23.