
This Black Hat conference talk explores two novel attack surfaces in deep neural network (DNN) executables that emerge during the compilation process.

First, discover how attackers can exploit cache side channels to steal model architectures from DNN executables, leveraging the hardware- and cache-aware optimizations introduced by deep learning compilers. Learn about DeepCache, a general framework that combines Prime+Probe techniques with contrastive learning and anomaly detection to achieve high-accuracy architecture stealing.

Second, the presentation examines how DRAM microarchitectures create vulnerabilities, demonstrating that attackers with only knowledge of the victim model's architecture can launch effective bit-flip attacks (such as Rowhammer) against DNN executables—a significant departure from traditional attacks requiring complete model information. The speakers show how strategic profiling of same-structure-different-weights DNN executables allows attackers to identify vulnerable bits and potentially reduce model accuracy to random guessing with minimal flips, as validated on DDR4 DRAM systems.
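Why can so few flips be so damaging? A minimal sketch (not the speakers' tooling; `flip_bit` is a hypothetical helper introduced here for illustration) shows that flipping a single bit in the IEEE-754 float32 encoding of a weight—say, the most significant exponent bit—can inflate a small value by dozens of orders of magnitude, which is why one well-chosen Rowhammer flip can wreck a layer's output:

```python
import struct

def flip_bit(value: float, bit: int) -> float:
    """Flip one bit (0 = LSB, 31 = sign) in the float32 encoding of `value`."""
    (as_int,) = struct.unpack("<I", struct.pack("<f", value))
    as_int ^= 1 << bit
    (flipped,) = struct.unpack("<f", struct.pack("<I", as_int))
    return flipped

weight = 0.5
# Flipping the most significant exponent bit (bit 30) of 0.5
# (0x3F000000) yields 0x7F000000, i.e. 2**127 ~ 1.7e38.
corrupted = flip_bit(weight, 30)
print(weight, "->", corrupted)
```

An attacker who knows which physical bits back which weights (the profiling step the talk describes) can target exactly these high-leverage exponent bits rather than flipping at random.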