
Getting Insight Out Of and Back Into Deep Neural Networks

BSidesLV via YouTube

Overview

Explore deep neural network interpretability in this conference talk from BSidesLV 2017. The talk covers techniques for extracting meaningful information from complex neural network models and for feeding that knowledge back into the networks themselves. Topics include improving model transparency, understanding how models arrive at their decisions, and using these insights to build more effective and interpretable deep learning systems, with practical approaches to demystifying the black-box nature of neural networks.

Syllabus

GT - Getting Insight Out Of and Back Into Deep Neural Networks - Richard Harang

Taught by

BSidesLV
