Robust human motion prediction via integration of spatial and temporal cues

Shaobo Zhang, Sheng Liu, Fei Gao, Yuan Feng

Optoelectronics Letters, 2025, Vol. 21, Issue 8: 499-506. DOI: 10.1007/s11801-025-4119-4

Research article


Abstract

Research on human motion prediction has made significant progress owing to its importance in a variety of artificial intelligence applications. However, effectively capturing spatio-temporal features for smoother and more precise human motion prediction remains a challenge. To address this issue, a robust human motion prediction method via integration of spatial and temporal cues (RISTC) is proposed. The method captures the spatio-temporal correlations of an observed sequence of human poses using a spatio-temporal mixed feature extractor (MFE). Within the stacked MFE layers, channel-graph united attention blocks extract augmented spatial features of the human poses along both the channel and spatial dimensions, while multi-scale temporal blocks capture complex and highly dynamic temporal information. Experiments on the Human3.6M and Carnegie Mellon University motion capture (CMU Mocap) datasets show that the proposed network achieves higher prediction accuracy than state-of-the-art methods.
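The multi-scale temporal blocks described above can be illustrated with a minimal sketch: filter each joint trajectory along the time axis at several temporal scales and concatenate the per-scale features. This is a hypothetical simplification (the function name, the averaging filters, and the kernel sizes are assumptions for illustration, not the paper's actual layers, which are learned):

```python
import numpy as np

def multi_scale_temporal_block(x, kernel_sizes=(3, 5, 7)):
    """Hypothetical sketch of a multi-scale temporal block.

    x: array of shape (T, J) -- T observed frames, J joint coordinates.
    For each kernel size k, filter every joint trajectory along time
    with a length-k averaging kernel ('same' padding), then concatenate
    the per-scale features along the channel axis. In the real network
    these filters would be learned, not fixed averages.
    """
    T, J = x.shape
    scales = []
    for k in kernel_sizes:
        kernel = np.ones(k) / k
        # convolve each joint trajectory independently along the time axis
        smoothed = np.stack(
            [np.convolve(x[:, j], kernel, mode="same") for j in range(J)],
            axis=1,
        )
        scales.append(smoothed)
    # output shape: (T, J * number of scales)
    return np.concatenate(scales, axis=1)

# toy observed pose sequence: 10 frames, 4 joint coordinates
poses = np.random.randn(10, 4)
features = multi_scale_temporal_block(poses)
print(features.shape)  # (10, 12)
```

Each scale responds to motion dynamics of a different temporal extent (short kernels track fast transients, longer kernels capture slower trends), which is the intuition behind combining multiple scales for highly dynamic motion.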

Cite this article

Shaobo Zhang, Sheng Liu, Fei Gao, Yuan Feng. Robust human motion prediction via integration of spatial and temporal cues. Optoelectronics Letters, 2025, 21(8): 499-506. DOI: 10.1007/s11801-025-4119-4


Published by Tianjin University of Technology
