Upcoming Events
Past Events
2023-04-26 Keynote at Polish Conference on Artificial Intelligence, Lodz, Poland
2023-03-28 Invited Talk at Workshop on "Explainability in Machine Learning", Tübingen, Germany
2022-12-08 Talk at Max Planck School of Cognition Academy, Berlin, Germany
2022-11-24 Talk at 3rd International Workshop on Auditing AI-Systems, Berlin, Germany
2022-10-20 Talk at 3rd ERCIM-JST Workshop, Paris, France
2022-10-10 Tutorial at 7th Summer School on Data Science (SSDS-2022), virtual event
2022-09-15 Talk at 11th Heinz Nixdorf Symposium, Paderborn, Germany
2022-07-25 Tutorial at 7th International Gran Canaria School on Deep Learning, Gran Canaria, Spain
2022-06-26 Tutorial at 24th International Conference on Human-Computer Interaction, virtual event
2021-06-08 Talk at 2nd Eddy Cross Disciplinary Symposium, online event
2021-05-19 Talk at HEIBRIDS Lecture Series, online event
2020-09-18 Tutorial on "Explainable AI for Deep Networks: Basics and Extensions" at ECML/PKDD 2020 (virtual event)
2020-09-01 Class on "Introduction to Explainable AI" at International Summer School on Deep Learning 2020 (virtual event)
2020-08-28 Class on "Interpretable and explainable deep learning" at Summer School on Machine Learning in Bioinformatics 2020 (virtual event)
2020-08-28 Keynote on "Explaining the Decisions of Deep Neural Networks and Beyond" at CD-MAKE 2020 (virtual event)
2020-07-18 Workshop on "XXAI: Extending Explainable AI Beyond Deep Models and Classifiers" at ICML 2020 (virtual event)
2019-07-23 Tutorial on "Interpretable & Transparent Deep Learning" at EMBC 2019 in Berlin, Germany
2019-06-16 Talk at CVPR 2019 Workshop on Explainable AI in Long Beach, CA, USA
2018-10-12 Keynote at the 2018 International Explainable AI Symposium in Seoul, Korea
2018-10-07 Tutorial on "Interpretable Deep Learning: Towards Understanding & Explaining Deep Neural Networks" at ICIP 2018 in Athens, Greece
2018-09-16 Tutorial on "Interpretable Machine Learning" at MICCAI 2018 in Granada, Spain
2018-06-18 Tutorial on "Interpreting and Explaining Deep Models in Computer Vision" at CVPR 2018 in Salt Lake City, USA
2017-12-09 Workshop on "Interpreting, Explaining and Visualizing Deep Learning" at NIPS 2017 in Long Beach, CA
2017-12-04 Tutorial on "Understanding Deep Neural Networks and their Predictions" at WIFS 2017 in Rennes, France
2017-03-05 Tutorial at ICASSP 2017 on "Methods for Interpreting and Understanding Deep Neural Networks" in New Orleans, USA
2016-09-26 The LRP Toolbox presented at the ICIP Visual Technology Showcase in Phoenix, Arizona

This webpage collects publications and software produced as part of a joint project at Fraunhofer HHI and TU Berlin on developing new methods to understand the nonlinear predictions of state-of-the-art machine learning models. Machine learning models, in particular deep neural networks (DNNs), offer very high predictive power but in many cases are not easily interpretable by a human. Interpreting a nonlinear classifier is important to gain trust in its predictions, to identify potential data selection biases or artefacts, and to gain insights into complex datasets.

Call for Papers

We are organizing two special tracks at the World Conference on XAI.


Key dates:

Check our Review Paper

W Samek, G Montavon, S Lapuschkin, C Anders, KR Müller
Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications
Proceedings of the IEEE, 109(3):247-278, 2021
With the broader and highly successful usage of machine learning (ML) in industry and the sciences, there has been a growing demand for explainable artificial intelligence (XAI). Interpretability and explanation methods for gaining a better understanding of the problem-solving abilities and strategies of nonlinear ML, in particular, deep neural networks, are, therefore, receiving increased attention. In this work, we aim to: 1) provide a timely overview of this active emerging field, with a focus on “post hoc” explanations, and explain its theoretical foundations; 2) put interpretability algorithms to a test both from a theory and comparative evaluation perspective using extensive simulations; 3) outline best practice aspects, i.e., how to best include interpretation methods into the standard usage of ML; and 4) demonstrate successful usage of XAI in a representative selection of application scenarios. Finally, we discuss challenges and possible future directions of this exciting foundational field of ML.

Interactive LRP Demos

Draw a handwritten digit and see the heatmap being formed in real-time. Create your own heatmap for natural images or text. These demos are based on the Layer-wise Relevance Propagation (LRP) technique by Bach et al. (2015).


MNIST: A simple LRP demo based on a neural network that predicts handwritten digits and was trained using the MNIST data set.

Caffe: A more complex LRP demo based on a neural network implemented using Caffe. The neural network predicts the contents of the picture.

Text: An LRP demo that explains classification of natural language. The neural network predicts the type of document.

How and Why LRP?

Layer-wise Relevance Propagation (LRP) is a method that identifies important pixels by running a backward pass through the neural network. The backward pass is a conservative relevance redistribution procedure, where neurons that contribute the most to the higher layer receive the most relevance from it. The LRP procedure is shown graphically in the figure below.
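The redistribution step can be written down in a few lines. Below is a minimal NumPy sketch of the conservative backward pass through a single dense layer, using an epsilon-stabilized LRP rule; the activations, weights, and relevance values are illustrative placeholders, not taken from any of the demos above.

```python
import numpy as np

def lrp_dense(a, W, R_out, eps=1e-6):
    """Redistribute output relevance R_out onto the layer inputs a."""
    z = a @ W                             # contribution of inputs to each output
    s = R_out / (z + eps * np.sign(z))    # stabilized relevance per unit of z
    return a * (W @ s)                    # relevance of each input neuron

a = np.array([1.0, 2.0, 0.5])             # input activations
W = np.array([[ 1.0, -0.5],
              [ 0.5,  1.0],
              [-1.0,  0.5]])              # layer weights (3 inputs, 2 outputs)
R_out = np.array([2.0, 3.0])              # relevance arriving at the outputs

R_in = lrp_dense(a, W, R_out)
# Conservation: the backward pass (approximately) preserves total relevance.
print(R_in.sum().round(4), R_out.sum())   # 5.0 5.0
```

The printed sums illustrate the conservation property: apart from the small stabilizer eps, whatever relevance enters a layer from above is fully redistributed onto its inputs.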

The method can be easily implemented in most programming languages and integrated into existing neural network frameworks. For many architectures, including deep rectifier networks and LSTMs, the propagation rules used by LRP can be understood as a Deep Taylor Decomposition of the prediction.
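To illustrate how such rules compose across layers, the following sketch propagates relevance through a tiny two-layer ReLU network with the z+ rule (only positive weight contributions are kept), which for rectifier networks corresponds to a Deep Taylor Decomposition; the network and its weights are invented for this example.

```python
import numpy as np

def lrp_zplus(a, W, R_out, eps=1e-9):
    """z+ rule: redistribute relevance along positive contributions only."""
    Wp = np.maximum(W, 0)                 # keep positive (excitatory) weights
    s = R_out / (a @ Wp + eps)
    return a * (Wp @ s)

x  = np.array([1.0, 0.5, 2.0])            # input, e.g. pixel intensities
W1 = np.array([[ 1.0, -1.0],
               [ 0.5,  1.0],
               [-0.5,  0.5]])
W2 = np.array([[1.0],
               [2.0]])

h = np.maximum(x @ W1, 0)                  # hidden ReLU activations
y = h @ W2                                 # network output

R = lrp_zplus(h, W2, y)                    # output -> hidden layer
R = lrp_zplus(x, W1, R)                    # hidden -> input: one score per input
print(R.round(3), R.sum().round(3))        # relevance map and conserved total
```

Applying the same rule layer by layer yields one relevance score per input dimension, which for images can be rendered directly as a heatmap like those in the demos above.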

LRP Software

Tutorials

Publications

Edited Books

Tutorial / Overview Papers

Methods Papers

Concept-Level Explanations

Explaining Beyond DNN Classifiers

Evaluation of Explanations

Model Validation and Improvement

Application to Sciences & Humanities

Application to Text

Application to Images & Faces

Application to Video

Application to Speech

Application to Neural Network Pruning

Interpretability and Causality

Software Papers

Downloads

BVLC Model Zoo Contributions


Impressum