HalluciDoctor: Mitigating Hallucinatory Toxicity in Visual Instruction Data
University of Central Florida via YouTube
Overview
This research presentation from the University of Central Florida explores HalluciDoctor, a framework for mitigating hallucinatory toxicity in the visual instruction data used to train AI models. In the 22-minute talk, the presenters discuss methods for identifying and mitigating harmful hallucinations that can occur when large language models process visual information, cover the challenges of visual instruction tuning, and describe approaches developed to reduce toxic outputs while maintaining model performance. Detailed slides accompany the presentation, offering insights into this critical area of AI safety research.
Syllabus
Paper 2: HalluciDoctor: Mitigating Hallucinatory Toxicity in Visual Instruction Data
Taught by
UCF CRCV