Explore a 19-minute VMCAI'24 conference talk, published on ACM SIGPLAN's channel, on improving the interpretability and fairness of Support Vector Machines (SVMs) through abstract interpretation. Learn about Abstract Feature Importance (AFI), a novel feature importance measure that is fast to compute and dataset-independent. Discover how the same methodology can certify individual fairness of SVMs and generate concrete counterexamples when verification fails. Examine the effectiveness of this approach on both linear and nonlinear kernel SVMs, and understand how AFI correlates strongly with SVM stability under feature perturbations, offering sharper insight into SVM trustworthiness than traditional feature importance measures.
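To give a flavor of the interval-style abstract interpretation the talk builds on, the sketch below bounds a linear SVM's decision value over a box of perturbed inputs and uses that bound both for a simple stability/fairness-style check and for a crude per-feature importance proxy. This is an illustrative assumption only, not the talk's AFI measure or its fairness-certification procedure; the helper names (`interval_decision_bounds`, `certify_stable_prediction`), the synthetic data, and the perturbation setup are all hypothetical.

```python
# Minimal sketch (assumed setup): interval abstract interpretation of a *linear*
# SVM decision function w·x + b. Not the talk's AFI algorithm.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

def interval_decision_bounds(w, b, lower, upper):
    """Tightest bounds of w·x + b when each x_i ranges over [lower_i, upper_i]."""
    lo = b + np.sum(np.where(w >= 0, w * lower, w * upper))
    hi = b + np.sum(np.where(w >= 0, w * upper, w * lower))
    return lo, hi

def certify_stable_prediction(w, b, x, eps):
    """True if every input in the L-infinity ball of radius eps around x
    receives the same predicted class as x (the decision value cannot change sign)."""
    lo, hi = interval_decision_bounds(w, b, x - eps, x + eps)
    return lo > 0 or hi < 0

# Tiny demo on synthetic data (illustrative only).
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
clf = LinearSVC(dual=False).fit(X, y)
w, b = clf.coef_[0], clf.intercept_[0]

x0 = X[0]
print("certified stable under ±0.1 perturbation:",
      certify_stable_prediction(w, b, x0, eps=0.1))

# Crude interval-based importance proxy (NOT AFI): width of the decision-value
# interval when only feature i is perturbed by ±0.1.
for i in range(len(w)):
    eps_vec = np.zeros_like(x0)
    eps_vec[i] = 0.1
    lo, hi = interval_decision_bounds(w, b, x0 - eps_vec, x0 + eps_vec)
    print(f"feature {i}: interval width {hi - lo:.4f}")
```

For a linear kernel the interval bound is exact, so the per-feature width reduces to 2·|w_i|·eps; the interest of abstract-interpretation approaches like the one presented in the talk lies in extending such sound bounds to nonlinear kernel SVMs.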
Abstract Interpretation-Based Feature Importance for Support Vector Machines
ACM SIGPLAN via YouTube
Overview
Syllabus
[VMCAI'24] Abstract Interpretation-Based Feature Importance for Support Vector Machines
Taught by
ACM SIGPLAN