View-Invariant Human Action Recognition via Projection Depth Vector Decomposition Fused with PEMS
Published in | 计算机应用研究 (Application Research of Computers) Vol. 33; No. 3; pp. 940-944 |
---|---|
Format | Journal Article |
Language | Chinese |
Published | Information Center, Sichuan Engineering Technical College, Deyang, Sichuan 618000; School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan 430074; 2016 |
ISSN | 1001-3695 |
DOI | 10.3969/j.issn.1001-3695.2016.03.069 |
Summary: | To address the uncertainty of intrinsic camera parameters and the difficulty of selecting a projection plane, this paper proposes a new projection-depth algorithm for view-invariant action recognition. The algorithm adopts a plane extraction from mirror symmetry (PEMS) strategy, which effectively solves the projection-plane selection problem. First, 3D action poses are obtained from observations by a camera group; the PEMS strategy then extracts a plane from the scene; the projection depths of body points are estimated relative to the extracted plane; finally, this information is used for action recognition. The core of the algorithm is the extraction of the projection plane and the computation of the projection-depth vectors. Tested on the CMU MoCap dataset, the TUM dataset, and the multi-view IXMAS dataset, the algorithm achieves accuracies of up to 94%, 91%, and 90% respectively, and it can still accurately define new actions from only a few action instances. Comparisons show that its human action recognition performance is clearly superior to that of several recent algorithms. |
---|---|
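The central quantity in the summary above, the "projection depth" of body points relative to an extracted plane, amounts to a signed point-to-plane distance, with the per-point depths stacked into a feature vector. The following is a minimal illustrative sketch only, not the paper's actual implementation; the function name, the toy body points, and the choice of a ground plane are all assumptions:

```python
import numpy as np

def projection_depth_vector(points, plane_point, plane_normal):
    """Signed distances of N x 3 body points to the plane defined by
    a point on the plane and its (not necessarily unit-length) normal."""
    n = np.asarray(plane_normal, dtype=float)
    n = n / np.linalg.norm(n)  # normalize to a unit normal
    # Project each (point - plane_point) offset onto the unit normal:
    # positive depth lies on the side the normal points toward.
    d = (np.asarray(points, dtype=float) - np.asarray(plane_point, dtype=float)) @ n
    return d  # one projection depth per body point

# Toy example: three body points measured against the ground plane z = 0.
pts = np.array([[0.0, 0.0, 1.5],   # head
                [0.0, 0.0, 1.0],   # torso
                [0.2, 0.0, 0.0]])  # foot
depths = projection_depth_vector(pts,
                                 plane_point=np.zeros(3),
                                 plane_normal=np.array([0.0, 0.0, 1.0]))
print(depths)  # [1.5 1.  0. ]
```

Because the depths are measured relative to a plane recovered from the scene itself (here via PEMS) rather than to any camera, the resulting vector does not depend on the viewpoint, which is what makes the descriptor view-invariant.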
Bibliography: | 51-1196/TP. Given the uncertainty of intrinsic camera parameters and the difficulty of choosing a projection plane, this paper proposed a new projection-depth algorithm for view-invariant action recognition. The algorithm used a plane extraction from mirror symmetry (PEMS) strategy, an effective solution to the projection-plane selection problem. First, it observed 3D postures with a camera group; it then used the PEMS strategy to extract a plane from the scene and estimated the projection depth of body points relative to the extracted plane; finally, it used this information for action recognition. The core of the proposed algorithm was the extraction of the projection plane and the computation of the projection-depth vectors. The algorithm obtained overall accuracies of 94%, 91%, and 90% on the CMU MoCap, TUM, and multi-view IXMAS datasets respectively, and it is still able to accurately define new actions from only a few action instances. Comparisons show that the proposed algorithm's human action recognition performance is clearly superior to that of several recent algorithms. |