This conference talk presents a unified approach to mesh saliency that evaluates both textured and non-textured meshes through virtual reality and multifunctional prediction. Learn how researchers from Shanghai Jiao Tong University and East China Normal University improve the adaptability of AI models by identifying the regions of 3D models that naturally attract visual attention. Discover their comprehensive dataset, which captures saliency distributions for the same 3D models under both non-textured and textured conditions, and explore their proposed unified saliency prediction model, which integrates geometric and texture features within a coherent topological framework. The presentation highlights the model's scalability and generalizability, its potential to enhance 3D modeling and rendering, and the insights it offers into how geometric and texture features interact in visual perception. Part of the "Exploring Perception, Haptics, and Learning in XR" session at IEEE VR 2025.
Syllabus
Unified Approach to Mesh Saliency: Evaluating Textured and Non-Textured Meshes Through Virtual Reality and Multifunctional Prediction
Taught by
IEEE Virtual Reality Conference