EDVAM: a 3D eye-tracking dataset for visual attention modeling in a virtual museum

Yunzhan ZHOU, Tian FENG, Shihui SHUAI, Xiangdong LI, Lingyun SUN, Henry Been-Lirn DUH

Front. Inform. Technol. Electron. Eng ›› 2022, Vol. 23 ›› Issue (1) : 101-112. DOI: 10.1631/FITEE2000318
Original Article


Abstract

Predicting visual attention facilitates an adaptive virtual museum environment and provides a context-aware, interactive user experience. Explorations toward developing a visual attention mechanism from eye-tracking data have so far been limited to 2D cases, and researchers have yet to approach the topic in a 3D virtual environment from a spatiotemporal perspective. We present EDVAM, the first 3D Eye-tracking Dataset for Visual Attention modeling in a virtual Museum. In addition, we devise a deep learning model and test it on EDVAM to predict a user's subsequent visual attention from previous eye movements. This work provides a reference for visual attention modeling and context-aware interaction in the context of virtual museums.

Keywords

Visual attention / Virtual museums / Eye-tracking datasets / Gaze detection / Deep learning

Cite this article

Yunzhan ZHOU, Tian FENG, Shihui SHUAI, Xiangdong LI, Lingyun SUN, Henry Been-Lirn DUH. EDVAM: a 3D eye-tracking dataset for visual attention modeling in a virtual museum. Front. Inform. Technol. Electron. Eng, 2022, 23(1): 101‒112 https://doi.org/10.1631/FITEE2000318

RIGHTS & PERMISSIONS

2022 Zhejiang University Press