

Comparison of Visual Saliency for Dynamic Point Cloud: Task-free vs. Task-dependent

IEEE via YouTube

Overview

This IEEE conference talk presents TF-DPC, a task-free eye-tracking dataset for dynamic point clouds in virtual reality. Explore how the researchers captured gaze and head movements from 24 participants observing 19 scanned dynamic point clouds while moving with six degrees of freedom. Learn about the comparison between visual saliency maps obtained in task-free and task-dependent (quality assessment) settings, and understand how the researchers analyzed the influence of task on visual attention using Pearson correlation and a spatially adapted Earth Mover's Distance. Discover significant task-driven differences in attention patterns and gain insights into gaze behavior and movement trajectories. The research provides valuable information for developing visual saliency models and enhancing VR perception, particularly for dynamic point clouds of human figures. Presented by authors Xuemei Zhou, Irene Viola, Silvia Rossi, and Pablo Cesar from Centrum Wiskunde & Informatica (CWI) as part of the Visual Perception and Interaction session at IEEE VR.
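To give a sense of one of the comparison metrics mentioned above, the sketch below computes a Pearson correlation between two saliency distributions. This is a minimal illustration, not the authors' pipeline: the per-point saliency values are hypothetical, and the paper's actual maps are defined over point-cloud geometry and also compared with a spatially adapted Earth Mover's Distance.

```python
from math import sqrt

def pearson(a, b):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    var_a = sum((x - mean_a) ** 2 for x in a)
    var_b = sum((y - mean_b) ** 2 for y in b)
    return cov / sqrt(var_a * var_b)

# Hypothetical per-point saliency values for the same point cloud
# under task-free viewing vs. a quality-assessment task.
task_free = [0.9, 0.1, 0.4, 0.7]
task_dep = [0.8, 0.2, 0.5, 0.6]

print(round(pearson(task_free, task_dep), 3))  # close to 1.0 = similar attention
```

A correlation near 1 would indicate that the task barely shifts attention, while lower values point to the task-driven differences the talk reports.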

Syllabus

Comparison of Visual Saliency for Dynamic Point Cloud: Task-free vs. Task-dependent

Taught by

IEEE Virtual Reality Conference

