EDVAM: a 3D eye-tracking dataset for visual attention modeling in a virtual museum

Yunzhan ZHOU, Tian FENG, Shihui SHUAI, Xiangdong LI, Lingyun SUN, Henry Been-Lirn DUH

Front. Inform. Technol. Electron. Eng, 2022, Vol. 23, Issue 1: 101-112. DOI: 10.1631/FITEE.2000318

Original Article


Abstract

Predicting visual attention facilitates an adaptive virtual museum environment and provides a context-aware and interactive user experience. Explorations toward development of a visual attention mechanism using eye-tracking data have so far been limited to 2D cases, and researchers are yet to approach this topic in a 3D virtual environment and from a spatiotemporal perspective. We present the first 3D Eye-tracking Dataset for Visual Attention modeling in a virtual Museum, known as the EDVAM. In addition, a deep learning model is devised and tested with the EDVAM to predict a user’s subsequent visual attention from previous eye movements. This work provides a reference for visual attention modeling and context-aware interaction in the context of virtual museums.
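The abstract describes a deep learning model that predicts a user's subsequent visual attention from previous eye movements. As a rough illustration of that idea only (not the paper's actual architecture, which is not given here), the sketch below assumes past gaze is a sequence of 3D coordinates and frames "subsequent visual attention" as classifying the next attended exhibit region; the class count, hidden size, and all names are hypothetical.

```python
# Illustrative only: a minimal sequence model for next-attention prediction,
# NOT the model proposed in the paper. Dimensions, region count, and the
# classification framing are assumptions for demonstration.
import torch
import torch.nn as nn

class GazeAttentionPredictor(nn.Module):
    """Predicts the next attended region from a sequence of past 3D gaze points."""
    def __init__(self, num_regions: int = 10, hidden_size: int = 64):
        super().__init__()
        # Each time step: a 3D gaze position (x, y, z) in the virtual museum.
        self.encoder = nn.GRU(input_size=3, hidden_size=hidden_size, batch_first=True)
        self.classifier = nn.Linear(hidden_size, num_regions)

    def forward(self, gaze_seq: torch.Tensor) -> torch.Tensor:
        # gaze_seq: (batch, seq_len, 3) past gaze coordinates
        _, h_n = self.encoder(gaze_seq)          # h_n: (1, batch, hidden_size)
        return self.classifier(h_n.squeeze(0))   # logits over candidate regions

# Toy usage: 8 users, each with 50 past gaze samples
model = GazeAttentionPredictor(num_regions=10)
logits = model(torch.randn(8, 50, 3))
print(logits.shape)  # torch.Size([8, 10])
```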

Keywords

Visual attention / Virtual museums / Eye-tracking datasets / Gaze detection / Deep learning

Cite this article

Yunzhan ZHOU, Tian FENG, Shihui SHUAI, Xiangdong LI, Lingyun SUN, Henry Been-Lirn DUH. EDVAM: a 3D eye-tracking dataset for visual attention modeling in a virtual museum. Front. Inform. Technol. Electron. Eng, 2022, 23(1): 101-112. DOI: 10.1631/FITEE.2000318


Supplementary files

FITEE-0101-22009-YZZ_suppl_1

FITEE-0101-22009-YZZ_suppl_2
