Distribution of action movements (DAM): a descriptor for human action recognition

Franco RONCHETTI, Facundo QUIROGA, Laura LANZARINI, Cesar ESTREBOU

Front. Comput. Sci., 2015, 9(6): 956-965. DOI: 10.1007/s11704-015-4320-x
RESEARCH ARTICLE


Abstract

Human action recognition from skeletal data is an important and active area of research, in which the state of the art has not yet achieved near-perfect accuracy on many well-known datasets. In this paper, we introduce the Distribution of Action Movements (DAM) descriptor, a novel action descriptor based on the distribution of the directions of joint motions between frames, taken over the set of all possible motions in the dataset. The descriptor is computed as a normalized histogram over a set of representative joint-motion directions, which are in turn obtained via clustering. While the descriptor is global in the sense that it represents the overall distribution of movement directions of an action, it can partially retain the action's temporal structure by applying a windowing scheme.

The descriptor, together with a standard classifier, outperforms several state-of-the-art techniques on many well-known datasets.
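The abstract outlines a three-step pipeline: extract per-joint motion directions between consecutive frames, quantize them against a set of representative directions obtained by clustering, and accumulate a normalized histogram per temporal window. The Python sketch below illustrates that pipeline under stated assumptions: scikit-learn's KMeans stands in for the paper's clustering step (the keywords suggest Prob-SOM is used instead), and the function name dam_descriptor, the array shapes, and the parameters n_directions and n_windows are illustrative rather than taken from the paper.

```python
# Minimal sketch of a DAM-style descriptor, as outlined in the abstract.
# Assumptions (not from the paper): KMeans clustering, illustrative
# parameter names, and fitting the representative directions on a single
# action instead of on the motions of the whole dataset.
import numpy as np
from sklearn.cluster import KMeans

def dam_descriptor(action, n_directions=32, n_windows=1, kmeans=None):
    """action: array of shape (frames, joints, 3) with joint positions.
    Returns the concatenation of one normalized histogram per temporal
    window over a set of representative motion directions."""
    # Per-joint motion vectors between consecutive frames
    motions = np.diff(action, axis=0).reshape(-1, 3)   # (frame, joint) pairs
    # Keep only the direction of each motion (unit vectors), dropping
    # near-zero displacements
    norms = np.linalg.norm(motions, axis=1)
    directions = motions[norms > 1e-6] / norms[norms > 1e-6, None]

    # Representative directions: in a faithful setup these would be learned
    # by clustering the directions of all actions in the training set and
    # passed in via `kmeans`; fitting here keeps the sketch self-contained.
    if kmeans is None:
        kmeans = KMeans(n_clusters=n_directions, n_init=10).fit(directions)

    # Windowing: split the (temporally ordered) motions into windows and
    # build one normalized histogram of direction labels per window.
    labels = kmeans.predict(directions)
    histograms = []
    for window in np.array_split(labels, n_windows):
        hist = np.bincount(window, minlength=n_directions).astype(float)
        hist /= max(hist.sum(), 1.0)
        histograms.append(hist)
    return np.concatenate(histograms)
```

A standard classifier would then be trained on these fixed-length descriptors, one per action sequence.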

Keywords

human action recognition / descriptor / Prob-SOM / MSRC12 / Action3D

Cite this article

Franco RONCHETTI, Facundo QUIROGA, Laura LANZARINI, Cesar ESTREBOU. Distribution of action movements (DAM): a descriptor for human action recognition. Front. Comput. Sci., 2015, 9(6): 956‒965 https://doi.org/10.1007/s11704-015-4320-x


RIGHTS & PERMISSIONS

© 2014 Higher Education Press and Springer-Verlag Berlin Heidelberg