Overview
Explore the security implications of using Generative AI for code development in this 51-minute talk from NDC Security in Oslo. Learn how developers are shifting from traditional code reuse to generating new code through GenAI prompts, fundamentally changing software development practices. Examine academic research showing that AI systems trained on vulnerable open-source code tend to reproduce those vulnerabilities, even as developers paradoxically trust AI-generated code more than human-written code. Discover further risks associated with Large Language Models, including jailbreaks, data poisoning, malicious agents, recursive learning, and intellectual property infringement. Through analysis of real-world data from multiple studies, understand how GenAI is transforming software security, identify the new risks it introduces, and learn practical strategies for addressing these emerging challenges.
Syllabus
Using GenAI on your code, what could possibly go wrong?
Taught by
NDC Conferences