Overview

This IEEE conference talk presents TF-DPC, a task-free eye-tracking dataset for dynamic point clouds in virtual reality environments. Explore how researchers captured gaze and head movements from 24 participants observing 19 scanned dynamic point clouds while moving with 6 degrees of freedom. Learn about the comparison between visual saliency maps obtained in task-free and task-dependent (quality assessment) settings, and understand how the researchers analyzed the influence of task on visual attention using Pearson correlation and a spatially adapted Earth Mover's Distance. Discover findings that reveal significant task-driven differences in attention patterns, and gain insights into gaze behavior and movement trajectories. The research provides valuable information for developing visual saliency models and enhancing VR perception, particularly for dynamic human figure point clouds. Presented by authors Xuemei Zhou, Irene Viola, Silvia Rossi, and Pablo Cesar from Centrum Wiskunde & Informatica (CWI) as part of the Visual Perception and Interaction session at IEEE VR.
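To make the comparison concrete, the sketch below shows one plausible way to score the similarity of two per-point saliency maps (task-free vs. task-dependent) for the same point cloud frame. It is not the authors' code: the function name, the array layout (one saliency value per point, aligned across conditions), and the use of a plain 1-D Wasserstein distance as a stand-in for the talk's spatially adapted Earth Mover's Distance are all assumptions for illustration.

```python
# Minimal sketch (assumed interface, not the authors' implementation):
# compare two per-point saliency maps from the same dynamic point cloud frame.
import numpy as np
from scipy.stats import pearsonr, wasserstein_distance


def compare_saliency(task_free: np.ndarray, task_dependent: np.ndarray) -> dict:
    """Return simple similarity metrics between two aligned saliency maps."""
    # Normalize each map so it sums to 1, making them comparable distributions.
    p = task_free / task_free.sum()
    q = task_dependent / task_dependent.sum()

    # Pearson correlation between the two maps (higher = more similar).
    cc, _ = pearsonr(p, q)

    # Plain 1-D Wasserstein distance over point indices as a rough stand-in;
    # the talk uses a spatially adapted Earth Mover's Distance that accounts
    # for the points' 3-D positions, which is not reproduced here.
    idx = np.arange(len(p))
    emd = wasserstein_distance(idx, idx, p, q)

    return {"pearson_cc": cc, "wasserstein": emd}


# Example with random data standing in for real gaze-derived saliency maps.
rng = np.random.default_rng(0)
a, b = rng.random(1000), rng.random(1000)
print(compare_saliency(a, b))
```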
Syllabus
Comparison of Visual Saliency for Dynamic Point Cloud: Task-free vs. Task-dependent
Taught by
IEEE Virtual Reality Conference