Distribution of action movements (DAM): a descriptor for human action recognition

Franco RONCHETTI, Facundo QUIROGA, Laura LANZARINI, Cesar ESTREBOU

Front. Comput. Sci., 2015, 9(6): 956–965. DOI: 10.1007/s11704-015-4320-x
RESEARCH ARTICLE

Abstract

Human action recognition from skeletal data is an important and active area of research in which the state of the art has not yet achieved near-perfect accuracy on many well-known datasets. In this paper, we introduce the Distribution of Action Movements (DAM) descriptor, a novel action descriptor based on the distribution of the directions of joint motions between frames over the set of all motions in the dataset. The descriptor is computed as a normalized histogram over a set of representative joint-motion directions, which are in turn obtained via clustering. While the descriptor is global in the sense that it represents the overall distribution of movement directions of an action, it partially retains the action's temporal structure by applying a windowing scheme.

The descriptor, together with a standard classifier, outperforms several state-of-the-art techniques on many well-known datasets.
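The abstract describes the descriptor's construction but gives no implementation; the following minimal Python/NumPy sketch is one possible reading of that description. All names (motion_directions, fit_direction_codebook, dam_descriptor) and parameters (n_directions, n_windows) are hypothetical, and k-means stands in as an illustrative clustering choice; the authors' actual procedure may differ in these details.

```python
# Hypothetical sketch of a DAM-style descriptor, following the abstract:
# joint-motion directions between frames are quantized against a set of
# representative directions (obtained by clustering over the whole dataset)
# and summarized as per-window normalized histograms.
import numpy as np
from sklearn.cluster import KMeans


def motion_directions(skeleton):
    """skeleton: (T, J, 3) array of joint positions for one action.
    Returns unit direction vectors of joint displacements between frames."""
    disp = np.diff(skeleton, axis=0).reshape(-1, 3)    # (T-1)*J displacement vectors
    norms = np.linalg.norm(disp, axis=1, keepdims=True)
    keep = norms[:, 0] > 1e-6                          # drop (near-)static joints
    return disp[keep] / norms[keep]


def fit_direction_codebook(actions, n_directions=32, seed=0):
    """Cluster all motion directions in the dataset to get representative directions."""
    all_dirs = np.vstack([motion_directions(a) for a in actions])
    return KMeans(n_clusters=n_directions, n_init=10, random_state=seed).fit(all_dirs)


def dam_descriptor(skeleton, codebook, n_windows=4):
    """Concatenate one normalized histogram of direction assignments per temporal window."""
    hists = []
    for window in np.array_split(skeleton, n_windows, axis=0):
        dirs = motion_directions(window)
        hist = np.zeros(codebook.n_clusters)
        if len(dirs) > 0:
            labels = codebook.predict(dirs)
            hist = np.bincount(labels, minlength=codebook.n_clusters).astype(float)
            hist /= hist.sum()
        hists.append(hist)
    return np.concatenate(hists)
```

The resulting fixed-length vector could then be fed to any standard classifier; the keywords suggest the authors pair the descriptor with a Prob-SOM classifier.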

Keywords

human action recognition / descriptor / Prob-SOM / MSRC12 / Action3D

Cite this article

Franco RONCHETTI, Facundo QUIROGA, Laura LANZARINI, Cesar ESTREBOU. Distribution of action movements (DAM): a descriptor for human action recognition. Front. Comput. Sci., 2015, 9(6): 956–965. DOI: 10.1007/s11704-015-4320-x

RIGHTS & PERMISSIONS

Higher Education Press and Springer-Verlag Berlin Heidelberg

Supplementary files

Supplementary Material: Highlights (3-page PPT)
