This 20-minute video from Discover AI explores AI safety and security concerns, focusing specifically on the dangers of prompt injection attacks. Learn about the real-world vulnerabilities of AI systems through practical examples based on research from Xi'an Jiaotong University and SGIT AI Lab. Understand why connecting to AI agents can pose significant risks, and discover how attackers can "break the prompt wall" to manipulate AI responses. The presentation covers lightweight prompt injection techniques documented in academic research, providing essential knowledge for anyone concerned with AI safety, risk management, and protection against jailbreak attempts. Gain valuable insights into how these security vulnerabilities work and what measures can help safeguard against toxic prompts.
Poisoned Agents: AI Safety and Security - Toxic Prompts
Discover AI via YouTube
Overview
Syllabus
Poisoned Agents - Toxic Prompts
Taught by
Discover AI