Overview
Explore deep neural networks and their interpretability in this conference talk from BSidesLV 2017. The talk covers techniques for extracting meaningful information from complex neural network models and reintegrating that knowledge back into the networks. Learn how to improve model transparency, understand decision-making processes, and enhance the overall performance of deep learning systems, along with practical approaches to demystifying the black-box nature of neural networks and applying what is learned to build more effective and interpretable AI applications.
Syllabus
GT - Getting Insight Out Of and Back Into Deep Neural Networks - Richard Harang
Taught by
BSidesLV