Variability in gesticulation patterns: A robust framework for recognizing self co-articulated dynamic gestures

Shweta Sharda, Ritu Vyas, Joyeeta Singha

International Journal of Systematic Innovation ›› 2026, Vol. 10 ›› Issue (1): 19-34. DOI: 10.6977/IJoSI.202602_10(1).0003

Abstract

Dynamic hand gesture recognition has become an important research area in human-computer interaction, virtual reality, sign language interpretation, and intelligent surveillance systems. With the increasing demand for natural and contactless communication interfaces, gesture-based systems are gaining significant attention due to their intuitive and user-friendly nature. However, one of the major challenges in dynamic gesture recognition is inter-user variability, where differences in speed, style, and articulation patterns among users reduce the overall robustness and accuracy of recognition systems. Another critical issue is self-co-articulation, which occurs when gestures overlap or influence each other during continuous motion, making feature extraction more complex. This study presents a dynamic hand gesture recognition system that addresses inter-user variability in gesticulation patterns. In the proposed system, a new set of features was employed: each gesture is divided into two halves, and features are extracted after the removal of self-co-articulation. The efficiency of the proposed system was validated on a new set of gestures recorded in the LNM Institute of Information Technology Dynamic Hand Gesture Dataset-4, which consists of videos recorded according to different patterns. The performance of the proposed system was evaluated with different features combined with individual classifiers as well as combinations of classifiers, including support vector machine, k-nearest neighbor, naïve Bayes, adaptive neuro-fuzzy inference system, and discriminant analysis classifiers. The naïve Bayes classifier achieved the best recognition accuracy of 93.13% among all the classifiers, and recognition accuracy improved by about 10% as the number of features increased.

Keywords

Hand gesture recognition / Pattern variation / Self-co-articulation / Trajectory features

Cite this article

Shweta Sharda, Ritu Vyas, Joyeeta Singha. Variability in gesticulation patterns: A robust framework for recognizing self co-articulated dynamic gestures. International Journal of Systematic Innovation, 2026, 10(1): 19-34. DOI: 10.6977/IJoSI.202602_10(1).0003


Funding

None.

