FaSRnet: a feature and semantics refinement network for human pose estimation

Yuanhong ZHONG, Qianfeng XU, Daidi ZHONG, Xun YANG, Shanshan WANG

Front. Inform. Technol. Electron. Eng., 2024, Vol. 25, Issue 4: 513-526. DOI: 10.1631/FITEE.2200639


Abstract

Due to factors such as motion blur, video defocus, and occlusion, multi-frame human pose estimation is a challenging task. Exploiting the temporal consistency between consecutive frames is an effective way to address this issue. Currently, most methods explore temporal consistency by refining the final heatmaps. The heatmaps contain the semantic information of keypoints and can improve detection quality to a certain extent. However, heatmaps are generated from features, and refinement at the feature level is rarely considered. In this paper, we propose a human pose estimation framework with refinement at both the feature and semantics levels. We align auxiliary features with the features of the current frame to reduce the loss caused by different feature distributions, and then use an attention mechanism to fuse the auxiliary features with the current features. At the semantics level, we use the difference information between adjacent heatmaps as an auxiliary cue to refine the current heatmaps. The method is validated on the large-scale benchmark datasets PoseTrack2017 and PoseTrack2018, and the results demonstrate its effectiveness.
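The two refinement stages described above can be sketched schematically. The snippet below is a minimal NumPy illustration, not the paper's actual network: `fuse_features` stands in for the attention-based fusion of (already aligned) auxiliary features with current-frame features, and `refine_heatmaps` stands in for the semantics-level refinement that uses differences between adjacent frames' heatmaps. All function names, the sigmoid gate, and the blending weight `alpha` are hypothetical simplifications of the learned modules in FaSRnet.

```python
import numpy as np

def fuse_features(current, auxiliary):
    """Schematic attention fusion (hypothetical): gate the aligned
    auxiliary features element-wise with a sigmoid weight and add
    them to the current frame's features."""
    gate = 1.0 / (1.0 + np.exp(-(current * auxiliary)))  # weights in (0, 1)
    return current + gate * auxiliary

def refine_heatmaps(current_hm, prev_hm, next_hm, alpha=0.5):
    """Schematic semantics-level refinement (hypothetical): use the
    difference between the current heatmaps and their temporal
    neighbours as an auxiliary cue, blended in with weight alpha."""
    diff = (current_hm - prev_hm) + (current_hm - next_hm)
    return current_hm + alpha * diff

# Toy example: one keypoint, 4x4 heatmaps from three consecutive frames.
rng = np.random.default_rng(0)
cur_hm, prev_hm, next_hm = (rng.random((4, 4)) for _ in range(3))
refined = refine_heatmaps(cur_hm, prev_hm, next_hm)
print(refined.shape)  # same spatial size as the input heatmaps
```

In the actual network, both the fusion weights and the refinement are learned by convolutional modules rather than computed with fixed formulas as here.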

Keywords

Human pose estimation / Multi-frame refinement / Heatmap and offset estimation / Feature alignment / Multi-person

Cite this article

Yuanhong ZHONG, Qianfeng XU, Daidi ZHONG, Xun YANG, Shanshan WANG. FaSRnet: a feature and semantics refinement network for human pose estimation. Front. Inform. Technol. Electron. Eng, 2024, 25(4): 513‒526 https://doi.org/10.1631/FITEE.2200639

RIGHTS & PERMISSIONS

© 2024 Zhejiang University Press