This webpage gathers publications and software produced as part of a joint project of Fraunhofer HHI, TU Berlin, and SUTD Singapore on developing new methods to understand nonlinear predictions of state-of-the-art machine learning models.

Machine learning models are usually characterized by very high predictive power but, in many cases, are not easily interpretable by a human. Interpreting a nonlinear classifier is important to gain trust in its predictions and to identify potential biases or artefacts in the data it was trained on.

The project studies in particular techniques to decompose a prediction into contributions of individual input variables, such that the resulting decomposition (i.e. the explanation) can be visualized in the same way as the input data. These visualizations are called "heatmaps". An example for an image classified as "matches" by the GoogLeNet neural network is shown below:

[Figure: input image (left) and corresponding heatmap (right)]
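To build intuition for what such a decomposition means, here is a minimal sketch for the simplest possible case, a linear model, where the prediction splits exactly into per-input contributions. This is an illustrative assumption only, not the project's actual method for nonlinear networks, which requires the dedicated techniques studied here:

```python
import numpy as np

def linear_contributions(w, x):
    """For a linear model f(x) = w.x, return per-input contribution
    scores R_i = w_i * x_i, which sum exactly to the prediction.
    Such scores are what a heatmap visualizes, one value per input."""
    return w * x

# Hypothetical weights and input, for illustration only.
w = np.array([0.5, -1.0, 2.0])
x = np.array([2.0, 1.0, 0.5])

R = linear_contributions(w, x)
print(R)                      # one relevance score per input variable
print(R.sum(), float(w @ x))  # conservation: scores sum to the prediction
```

The conservation property shown in the last line (contributions summing to the model output) is the kind of constraint that makes a decomposition interpretable as a sharing of the prediction among input variables.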

Check out our gallery and our interactive demos for more examples. Heatmaps can also be produced for text or for scientific data such as EEG signals.

Next Events

Previous Events


Interactive Demos

Draw a handwritten digit and watch its heatmap form in real time. Create your own heatmaps for natural images or text.





Journal Publications

Conference Publications

Workshop Papers / Extended Abstracts


BVLC Model Zoo Contributions