Class Central is learner-supported. When you buy through links on our site, we may earn an affiliate commission.

YouTube

Generative AI Safety and Security: A Research and Design Perspective

DevConf via YouTube

Overview

Coursera Plus Monthly Sale: All Certificates & Courses 40% Off!
This 50-minute conference talk by Dr. Mohit Sewak, Staff Software Engineer at Google and technical lead of AI and MLOps practice on GenAI Safety and Security, explores the critical aspects of Generative AI safety and security from both research and design perspectives. Discover how to build secure and responsible AI-powered systems as the talk examines the vulnerabilities of large language models (LLMs) and provides defensive strategies against threats like prompt injection and jailbreak attacks. Learn actionable techniques for implementing effective safety guardrails, including input sanitization, adversarial robustness, output filtering, and content moderation. Gain valuable insights into maintaining "topicality" to ensure AI systems remain aligned with their intended purposes. The presentation offers researchers' perspectives on the evolving threat landscape, practical design patterns for building robust safety measures, comprehensive understanding of security challenges in LLMs, and proactive risk mitigation strategies for trustworthy AI solutions. Particularly valuable for developers, security engineers, students, and AI practitioners working with Generative AI technologies. Slides and additional resources are available through the DevConf website.

Syllabus

Generative AI Safety and Security: A Research and Design Perspective - DevConf2025

Taught by

DevConf

Reviews

Start your review of Generative AI Safety and Security: A Research and Design Perspective

Never Stop Learning.

Get personalized course recommendations, track subjects and courses with reminders, and more.

Someone learning on their laptop while sitting on the floor.