EDVAM: a 3D eye-tracking dataset for visual attention modeling in a virtual museum
Yunzhan ZHOU, Tian FENG, Shihui SHUAI, Xiangdong LI, Lingyun SUN, Henry Been-Lirn DUH
Predicting visual attention enables an adaptive virtual museum environment and supports a context-aware, interactive user experience. Efforts to model visual attention from eye-tracking data have so far been limited to 2D settings; the topic has yet to be studied in a 3D virtual environment or from a spatiotemporal perspective. We present EDVAM, the first 3D Eye-tracking Dataset for Visual Attention modeling in a virtual Museum. We also devise a deep learning model, tested on EDVAM, that predicts a user's subsequent visual attention from previous eye movements. This work provides a reference for visual attention modeling and context-aware interaction in virtual museums.
Visual attention / Virtual museums / Eye-tracking datasets / Gaze detection / Deep learning
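The abstract does not specify the model's architecture. As a rough illustration of the prediction task it describes, the following is a minimal sketch assuming an LSTM regressor over windows of past 3D gaze coordinates; the architecture, window length, dimensions, and all names are illustrative assumptions, not the paper's method.

```python
# Sketch of sequence-to-one gaze prediction: regress the next 3D gaze point
# from a window of previous gaze points. The LSTM architecture and all
# hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn


class GazePredictor(nn.Module):
    """Predicts the next 3D gaze point from a sequence of past gaze points."""

    def __init__(self, input_dim=3, hidden_dim=64, num_layers=2):
        super().__init__()
        self.lstm = nn.LSTM(input_dim, hidden_dim, num_layers, batch_first=True)
        self.head = nn.Linear(hidden_dim, input_dim)  # regress (x, y, z)

    def forward(self, gaze_seq):
        # gaze_seq: (batch, seq_len, 3) -- past eye-movement coordinates
        out, _ = self.lstm(gaze_seq)
        # Use the final hidden state to predict the next gaze point
        return self.head(out[:, -1, :])


# Toy usage: a batch of 8 sequences, each with 30 past gaze samples.
model = GazePredictor()
past_gaze = torch.randn(8, 30, 3)   # placeholder for EDVAM-style gaze data
next_gaze = model(past_gaze)        # shape: (8, 3)
loss = nn.functional.mse_loss(next_gaze, torch.randn(8, 3))
```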