SPS Education Short Course: Visual Explainability in Machine Learning

  • Online

Visual explanations have traditionally acted as rationales that justify the decisions made by machine learning systems. With the advent of large-scale neural networks, the role of visual explanations has been to lend interpretability to black-box models. We view this role as the process by which a network answers the question 'Why P?', where P is a trained network's prediction. Recently, however, with increasingly capable models, the role of explainability has expanded. Neural networks are asked to answer 'What if?' counterfactual and 'Why P, rather than Q?' contrastive questions that they were not explicitly trained to answer. This allows explanations to act as reasons for making further predictions. The short course provides a principled introduction to explainability in machine learning and justifies explanations as reasons for making decisions. Such a reasoning framework enables robust machine learning and allows trustworthy AI to be accepted in everyday life. Applications including robust recognition, image quality assessment, visual saliency, anomaly detection, out-of-distribution detection, adversarial image detection, seismic interpretation, semantic segmentation, and machine teaching, among others, will be discussed.

Learning Outcomes

  • Basics of explainability in neural networks – the function, types, shortcomings, evaluation, and reasoning paradigms of explanations
  • Understanding the applicability of reason-based explainability across applications and data modalities. Applications include robust recognition, anomaly detection, visual saliency, and machine teaching. Data modalities include natural images and computed seismic and biomedical images

Syllabus

Day 1 (4 hrs): December 5, 9:00 am - 1:00 pm

Lecture 1: Introduction to Explainability in Neural Networks (2 hrs) (Ghassan AlRegib)

  • Explainability in ML: definition, role, and need
  • Categorizations of Explainability
    • Implicit vs Explicit explanations
    • Black box vs White box explanations
    • Interventionist vs Non-interventionist explanations
  • Overview of Explainability in Neural Networks
    • Dimensionality reduction in last layer embedding
    • Visualizing activations
    • Gradient based visualizations
    • Saliency maps and intermediate feature visualizations
    • CAM visualizations and explanations
    • Grad-CAM visualizations and explanations (see the sketch after this list)
    • Examples of applications: robust recognition, image quality assessment, visual saliency, anomaly detection, out-of-distribution detection, adversarial image detection, seismic interpretation, semantic segmentation, and machine teaching
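
The gradient-weighted visualizations listed above are compact enough to sketch in code. Below is a minimal Grad-CAM sketch in the spirit of Selvaraju et al. (2017, in the suggested reading), assuming a pretrained torchvision ResNet-18; the random tensor stands in for a preprocessed image, and the hook and variable names are illustrative.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Minimal Grad-CAM sketch: pool the gradients of the predicted class score
# over space, weight the last conv block's activations, then apply ReLU.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

store = {}
model.layer4.register_forward_hook(lambda m, i, o: store.update(feat=o))
model.layer4.register_full_backward_hook(lambda m, gi, go: store.update(grad=go[0]))

x = torch.randn(1, 3, 224, 224)            # stand-in for a preprocessed image
scores = model(x)
p = scores.argmax(dim=1).item()            # P: the network's prediction
scores[0, p].backward()                    # answer "Why P?" via gradients

weights = store["grad"].mean(dim=(2, 3), keepdim=True)    # pooled gradients
cam = F.relu((weights * store["feat"]).sum(dim=1))        # coarse class map
cam = F.interpolate(cam.unsqueeze(1), size=x.shape[-2:], mode="bilinear")
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
```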

Lecture 2: Explanations as Reasons: Towards Explanatory Paradigms (2 hrs) (Mohit Prabhushankar)

  • Reasoning in AI
    • Deductive reasoning
    • Inductive reasoning
    • Abductive reasoning
  • Significance of Explanations
    • As justifications of decisions
    • As assistants in making decisions  
  • Explanatory Paradigms
    • Types of Explanations
    • Indirect and Direct Explanations
    • Targeted Explanations
    • Explanatory Paradigms (a contrastive-explanation sketch follows this list)
    • Examples
      • Recognition in natural and seismic images
      • Image Quality Assessment
  • Complete explanations
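
One way to move from 'Why P?' to the contrastive 'Why P, rather than Q?' question, following the contrastive-reasoning paper in the suggested reading, is to backpropagate a loss against a chosen contrast class Q instead of the score of the predicted class P. A minimal sketch, reusing the hook setup from the earlier Grad-CAM example, with a hypothetical contrast class:

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Contrast-CAM style sketch: backpropagate the loss against a contrast
# class Q rather than the score of the prediction P. The resulting map
# highlights the regions that separate P from Q.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

store = {}
model.layer4.register_forward_hook(lambda m, i, o: store.update(feat=o))
model.layer4.register_full_backward_hook(lambda m, gi, go: store.update(grad=go[0]))

x = torch.randn(1, 3, 224, 224)       # stand-in for a preprocessed image
scores = model(x)
q = 42                                # hypothetical contrast class Q
loss = F.cross_entropy(scores, torch.tensor([q]))
loss.backward()                       # gradients encode "Why P, rather than Q?"

weights = store["grad"].mean(dim=(2, 3), keepdim=True)
contrast_cam = F.relu((weights * store["feat"]).sum(dim=1))
```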

Day 2 (3 hrs): December 6, 9:00 am - 12:00 pm

Lecture 3: Impact of Explanations 1: Towards Robust Neural Networks 1 (1 hr 30 mins) (Ghassan AlRegib)

  • Recap of significance of explanations from Day 1
    • As justifications of decisions
    • As assistants in making decisions
  • Utilizing explanations in making decisions
    • The effectiveness of contrastive reasoning
  • Robustness in Neural Networks
    • Robust classification in the presence of noise (see the sketch after this list)
    • Robust adversarial image detection
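
As a toy illustration of the noise-robustness theme above (the benchmarks discussed in the course, e.g., CURE-OR, use controlled challenging conditions instead), the sketch below checks how often a classifier's prediction survives additive Gaussian noise; the model, sigma, and trial count are illustrative.

```python
import torch
from torchvision import models

# Quick robustness probe: how often does the prediction survive
# additive Gaussian noise on the input?
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

@torch.no_grad()
def prediction_consistency(x, sigma=0.1, trials=20):
    clean_pred = model(x).argmax(dim=1)
    hits = sum(
        int(model(x + sigma * torch.randn_like(x)).argmax(dim=1) == clean_pred)
        for _ in range(trials)
    )
    return hits / trials

x = torch.randn(1, 3, 224, 224)      # stand-in for a preprocessed image
print(f"consistency under noise: {prediction_consistency(x):.2f}")
```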

Lecture 4: Impact of Explanations 2: Towards Robust Neural Networks 2 (1 hr 30 mins) (Mohit Prabhushankar)

  • Utilizing explanations in making decisions
  • Robust out-of-distribution detection
    • Anomaly Detection in Neural Networks
    • Statistical analysis of anomalies
    • Performance metrics
    • Anomaly Detection settings
    • Gradient-based explanations for anomaly detection (see the sketch after this list)
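
A minimal sketch of the gradient-based idea above, in the spirit of the backpropagated-gradient representations in the suggested reading (Kwon et al., 2020), though not that paper's exact model: inputs that a network trained on normal data cannot reconstruct well require large weight updates, so the gradient magnitude of the reconstruction loss can act as an anomaly score. The tiny autoencoder and inputs here are hypothetical stand-ins.

```python
import torch
import torch.nn as nn

# Gradient-based anomaly scoring sketch: backpropagate the reconstruction
# loss and use the total gradient norm as the anomaly score.
autoencoder = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 784))

def anomaly_score(x):
    autoencoder.zero_grad()
    loss = nn.functional.mse_loss(autoencoder(x), x)
    loss.backward()
    grads = [p.grad.norm() for p in autoencoder.parameters()]
    return torch.stack(grads).sum().item()

x_typical = torch.zeros(1, 784)      # stand-ins for in- and out-of-distribution
x_odd = torch.randn(1, 784) * 5.0    # inputs; real data would be used here
print(anomaly_score(x_typical), anomaly_score(x_odd))
```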

Day 3 (3 hrs): December 7, 9:00 am - 12:00 pm

Lecture 5: Impact of Explanations 3: Towards Trust and Evaluation (1 hr 30 mins) (Mohit Prabhushankar)

  • Explanatory evaluation taxonomy
    • Direct evaluations
    • Indirect evaluations
    • Targeted evaluations
  • Direct evaluations
    • Human evaluations
    • Ethical considerations for human evaluations
    • Examples of direct evaluations: Amazon Mechanical Turk
  • Indirect evaluations
    • Human Visual Saliency (see the sketch after this list)
    • Attention in Transformers
  • Targeted Evaluations
    • Examples of targeted evaluations through robustness
    • Machine Teaching and examples on seismic data
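
Indirect evaluations of the kind listed above often score an explanation map against human visual saliency. Below is a minimal sketch using rank correlation, one common saliency metric; the random arrays are hypothetical stand-ins for an upsampled explanation map and a human fixation density map.

```python
import numpy as np
from scipy.stats import spearmanr

# Indirect-evaluation sketch: rank-correlate an explanation map with a
# human fixation density map.
explanation = np.random.rand(224, 224)   # e.g., an upsampled Grad-CAM map
fixation_map = np.random.rand(224, 224)  # human visual-saliency ground truth

rho, _ = spearmanr(explanation.ravel(), fixation_map.ravel())
print(f"rank correlation with human saliency: {rho:.3f}")
```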

Lecture 6: Impact of Explanations 4: Towards Interventions and Evaluation (1 hr 30 mins) (Mohit Prabhushankar)

  • Interventions in Neural Networks
    • Examples of targeted interventions in neural networks
    • Examples of explanation-based interventions in neural networks (see the sketch after this list)
    • Specific Interventions
    • Interventions for Causality
    • Interventions for Privacy
    • Interventions for Benchmarking
    • Interventions for Prompting
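
A simple explanation-based intervention of the kind listed above is a deletion-style probe: occlude the pixels an explanation ranks most important and watch the class confidence change. The sketch below assumes a pretrained torchvision ResNet-18; the random image and explanation map are hypothetical stand-ins (the map would normally come from a Grad-CAM computation like the earlier sketch).

```python
import torch
from torchvision import models

# Deletion-style intervention sketch: zero out the most salient pixels
# according to an explanation map and compare the class confidence
# before and after. A faithful explanation should cause a large drop.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

x = torch.randn(1, 3, 224, 224)      # stand-in for a preprocessed image
cam = torch.rand(1, 1, 224, 224)     # stand-in explanation map in [0, 1]

with torch.no_grad():
    before = model(x).softmax(dim=1).max(dim=1)      # confidence in P
    k = int(0.2 * cam.numel())                       # occlude the top 20%
    threshold = cam.flatten().topk(k).values.min()
    x_occluded = x * (cam < threshold)               # the intervention
    after = model(x_occluded).softmax(dim=1)[0, before.indices]

print(f"P confidence before: {before.values.item():.3f}, "
      f"after occlusion: {after.item():.3f}")
```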

Target audience and expected prerequisite technical knowledge: The course targets senior-year undergraduate and postgraduate students, engineers, and practitioners with some background in Python and machine learning.

Supporting course resources, software, tools, and readings

  • Lecture notes from the materials presented in the course
  • References to papers for specific details taught during the course

Suggested reading:

  • AlRegib, G., & Prabhushankar, M. (2022). Explanatory paradigms in neural networks: Towards relevant and contextual explanations. IEEE Signal Processing Magazine, 39(4), 59-72.
  • Selvaraju, R. R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., & Batra, D. (2017). Grad-CAM: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision (pp. 618-626).
  • Prabhushankar, M., & AlRegib, G. (2021). Contrastive reasoning in neural networks. arXiv preprint arXiv:2103.12329.
  • Goyal, Y., Wu, Z., Ernst, J., Batra, D., Parikh, D., & Lee, S. (2019, May). Counterfactual visual explanations. In International Conference on Machine Learning (pp. 2376-2384). PMLR.
  • Kwon, G.*, Prabhushankar, M.*, Temel, D., & AlRegib, G. (2019, September). Distorted representation space characterization through backpropagated gradients. In IEEE International Conference on Image Processing (ICIP), Taipei, Taiwan.
  • Kwon, G., Prabhushankar, M., Temel, D., & AlRegib, G. (2020, August). Backpropagated gradient representations for anomaly detection. In Proceedings of the European Conference on Computer Vision (ECCV), Glasgow, UK.
  • Temel, D., Lee, J., & AlRegib, G. (2018, December). CURE-OR: Challenging unreal and real environments for object recognition. In 2018 17th IEEE International Conference on Machine Learning and Applications (ICMLA) (pp. 137-144). IEEE.
  • Prabhushankar, M., & AlRegib, G. (2022, November). Introspective learning: A two-stage approach for inference in neural networks. In Advances in Neural Information Processing Systems (NeurIPS), New Orleans, LA.

Presenters:

Ghassan AlRegib

Georgia Institute of Technology

https://alregib.ece.gatech.edu/ alregib@gatech.edu

Ghassan AlRegib is currently the John and Marilu McCarty Chair Professor in the School of Electrical and Computer Engineering at the Georgia Institute of Technology. He received the ECE Outstanding Junior Faculty Member Award in 2008 and the Denning Faculty Award for Global Engagement in 2017. His research group, the Omni Lab for Intelligent Visual Engineering and Science (OLIVES), works on research projects related to machine learning, image and video processing, image and video understanding, seismic interpretation, machine learning for ophthalmology, and video analytics. He has participated in several service activities within the IEEE. He served as the TP Co-Chair for ICIP 2020, ICIP 2024, and GlobalSIP 2014, and was an area editor for the IEEE Signal Processing Magazine. He and his students received the Best Paper Award at ICIP 2019. He is an IEEE Fellow.

Mohit Prabhushankar

Georgia Institute of Technology, mohit.p@gatech.edu

Mohit Prabhushankar received his Ph.D. degree in electrical engineering from the Georgia Institute of Technology (Georgia Tech), Atlanta, Georgia, USA, in 2021. He is currently a Postdoctoral Research Fellow in the School of Electrical and Computer Engineering at the Georgia Institute of Technology, in the Omni Lab for Intelligent Visual Engineering and Science (OLIVES).

He works in the fields of image processing, machine learning, active learning, healthcare, and robust and explainable AI. He is the recipient of the Best Paper Award at ICIP 2019 and the Top Viewed Special Session Paper Award at ICIP 2020. He is also the recipient of the ECE Outstanding Graduate Teaching Award, the CSIP Research Award, and the Roger P. Webb ECE Graduate Research Excellence Award, all in 2022.


SPS Education Short Course: Visual Explainability in Machine Learning
  • Course Provider: Signal Processing Society
  • Course Number: SPSVIRTUALCOURSE1
  • Duration (Hours): 10
  • Credits: None