Contrastive Explanation (Foil Trees)

Contrastive and counterfactual explanations for machine learning (ML)

Marcel Robeer (2018-2020), TNO/Utrecht University


Contents

  1. Introduction
  2. Publications: citing this package
  3. Example usage
  4. Documentation: choices for problem explanation
  5. License

Introduction

Contrastive Explanation provides an explanation for why an instance had the current outcome (fact) rather than a targeted outcome of interest (foil). These counterfactual explanations limit the explanation to the features relevant in distinguishing fact from foil, thereby disregarding irrelevant features. The idea of contrastive explanations is captured in this Python package ContrastiveExplanation. Example facts and foils are:

Machine Learning (ML) type  Problem                   Explainable AI (XAI) question                                         Fact       Foil
Classification              Determine type of animal  Why is this instance a cat rather than a dog?                         Cat        Dog
Regression analysis         Predict students' grade   Why is the predicted grade for this student 6.5 rather than higher?   6.5        More than 6.5
Clustering                  Find similar flowers      Why is this flower in cluster 1 rather than cluster 4?                Cluster 1  Cluster 4

Publications

One scientific paper was published on Contrastive Explanation / Foil Trees:

  • J. van der Waa, M. Robeer, J. van Diggelen, M. Brinkhuis, and M. Neerincx, "Contrastive Explanations with Local Foil Trees", in 2018 Workshop on Human Interpretability in Machine Learning (WHI 2018), 2018, pp. 41-47. [Online]. Available: http://arxiv.org/abs/1806.07470

The method was developed as part of a Master's Thesis at Utrecht University / TNO.

Citing this package

@inproceedings{vanderwaa2018,
  title={{Contrastive Explanations with Local Foil Trees}},
  author={van der Waa, Jasper and Robeer, Marcel and van Diggelen, Jurriaan and Brinkhuis, Matthieu and Neerincx, Mark},
  booktitle={2018 Workshop on Human Interpretability in Machine Learning (WHI)},
  pages={41--47},
  year={2018}
}

Example usage

As a simple example, let us explain a Random Forest classifier that determines the type of flower in the well-known Iris flower classification problem. The data set comprises 150 instances, each belonging to one of three types of flowers (setosa, versicolor and virginica). For each instance, the data set includes four features (sepal length, sepal width, petal length, petal width), and the goal is to determine which type of flower (class) each instance is.

Steps

First, train the 'black-box' model to explain

from sklearn import datasets, model_selection, ensemble
seed = 1

# Train black-box model on Iris data
data = datasets.load_iris()
train, test, y_train, y_test = model_selection.train_test_split(data.data, 
                                                                data.target, 
                                                                train_size=0.80, 
                                                                random_state=seed)
model = ensemble.RandomForestClassifier(random_state=seed)
model.fit(train, y_train)

Next, perform contrastive explanation on the first test instance (test[0]) by wrapping the tabular data in a DomainMapper, and then calling the method ContrastiveExplanation.explain_instance_domain()

# Contrastive explanation
import contrastive_explanation as ce

dm = ce.domain_mappers.DomainMapperTabular(train,
                                           feature_names=data.feature_names,
                                           contrast_names=data.target_names)
exp = ce.ContrastiveExplanation(dm, verbose=True)

sample = test[0]
exp.explain_instance_domain(model.predict_proba, sample)

[OUT] "The model predicted 'setosa' instead of 'versicolor' because 'sepal length (cm) <= 6.517 and petal width (cm) <= 0.868'"

The predicted class using the RandomForestClassifier was 'setosa', while the second most probable class 'versicolor' may have been expected instead. The current instance was classified 'setosa' because its sepal length is less than or equal to 6.517 centimeters and its petal width is less than or equal to 0.868 centimeters. In other words, if the instance kept all other feature values the same but changed its sepal length to more than 6.517 centimeters and its petal width to more than 0.868 centimeters, the black-box classifier would have changed the outcome to 'versicolor'.
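
As a quick sanity check, the counterfactual can be applied by hand and the instance re-predicted. The snippet below is a minimal sketch, not part of the package; it assumes scikit-learn's Iris feature order, in which index 0 is sepal length (cm) and index 3 is petal width (cm).

# Sanity-check sketch (not part of the package): push the two
# features past the thresholds from the explanation and re-predict
counterfactual = sample.copy()
counterfactual[0] = 6.6  # sepal length (cm) > 6.517
counterfactual[3] = 0.9  # petal width (cm) > 0.868
print(data.target_names[model.predict([counterfactual])[0]])
# if the explanation holds locally, this prints 'versicolor'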

More examples

For more examples, check out the attached Notebook.

Documentation

Several choices can be made to tailor the explanation to your explanation problem.

Choices for problem explanation

FactFoil

Used for determining the current outcome (fact) and the outcome of interest (foil), based on a foil_method (e.g. second most probable class, random class, greater than the current outcome). Foils can also be selected manually using the optional foil=... argument of the ContrastiveExplanation.explain_instance_domain() method (see the sketch after the table below).

FactFoil                          Description                                                        foil_method
FactFoilClassification (default)  Determine fact and foil for classification/unsupervised learning  second, random
FactFoilRegression                Determine fact and foil for regression analysis                    greater, smaller
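
For example, to explain against a manually chosen foil rather than the default second most probable class, pass the optional foil=... argument. This is a minimal sketch; whether foil expects a class index (as assumed here) or a contrast name should be verified against your version of the package.

# Sketch: manually select the foil (assumed here to be a class index)
exp.explain_instance_domain(model.predict_proba, sample, foil=2)
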
Explanators

Method for forming the explanation, either using a Foil Tree (TreeExplanator), as described in the paper, or using a prototype (PointExplanator, not fully implemented). As multiple explanations may hold, one can choose the foil_strategy: 'closest' (shortest explanation), 'size' (move the current outcome to the area containing the most samples of the foil outcome), 'impurity' (most informative foil area), or 'random' (random foil area). A sketch of selecting a strategy follows the table below.

Explanator                Description                                                         foil_strategy
TreeExplanator (default)  Foil Tree: explain using a decision tree                            closest, size, impurity, random
PointExplanator           Explain with a representative point (prototype) of the foil class  closest, medoid, random
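
A hedged sketch of choosing a different foil_strategy is given below. That the explanator lives in ce.explanators and is handed to ContrastiveExplanation through an explanator= keyword are assumptions about the constructor, not confirmed API.

# Sketch (assumptions: module path ce.explanators and the
# explanator= keyword of ContrastiveExplanation)
tree_explanator = ce.explanators.TreeExplanator(foil_strategy='size')
exp = ce.ContrastiveExplanation(dm, explanator=tree_explanator, verbose=True)
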
Domain Mappers

For handling the different types of data:

  • Tabular (rows and columns)
  • Images (rudimentary support)

A DomainMapper maps the data to a general format in which the explanator can form the explanation, and then maps the explanation back to the original format. It also ensures meaningful feature names. A sketch of wrapping a pandas DataFrame follows the table below.

DomainMapper         Description
DomainMapperTabular  Tabular data (columns with feature names, rows)
DomainMapperPandas   Uses a pandas DataFrame to create a DomainMapperTabular, automatically inferring feature names
DomainMapperImage    Image data
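
For example, tabular data already held in a pandas DataFrame can be wrapped so that feature names are inferred from its columns. A minimal sketch, assuming the constructor otherwise mirrors DomainMapperTabular.

import pandas as pd

# Sketch (assumption: DomainMapperPandas infers feature names from
# the DataFrame columns and otherwise mirrors DomainMapperTabular)
df_train = pd.DataFrame(train, columns=data.feature_names)
dm = ce.domain_mappers.DomainMapperPandas(df_train,
                                          contrast_names=data.target_names)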

License

ContrastiveExplanation is BSD-3 Licensed.

Comments
  • Changing output every run


    Hi,

    When I run exp.explain_instance_domain(model.predict_proba, sample), I get different output every time. After every run, a different column is given as output.

    Is there a way I can get constant results?

    opened by Subh1m 5
  • TypeError in the example notebook


    Running the example notebook (Contrastive explanation - example usage), the line exp.explain_instance_domain(model.predict_proba, sample) raises a TypeError.

    opened by Naviden 3
  • Not working for multi-valued categorical features


    Does the current implementation support only binary-valued categorical features?

    Because I tried with the adult income dataset which has many multi-value categorical and continuous features (https://archive.ics.uci.edu/ml/datasets/adult) and got output like these:

    "The model predicted '<=50k' instead of '>50k' because 'hours_per_week <= 42.832 and not occupation and age <= 34.95 and not education and hours_per_week <= 57.892'"

    Here, education and occupation are not binary features - they have many levels.

    bug enhancement 
    opened by raam93 3
  • Always getting the warning "UserWarning: Could not find a difference between fact...", with blank explanations, for any dataset and every sample

    I am trying to recreate exactly the example from the README for the Iris dataset. Unfortunately, when running .explain_instance_domain(model.predict_proba, sample) I get the following output:

    [F] Picked foil "1" using foil selection strategy "second"
    [D] Obtaining neighborhood data
    C:\Users\dsemkoandrosenko\contrastive_explanation\contrastive_explanation.py:264: UserWarning: Could not find a difference between fact "setosa" and foil "versicolor"
      warnings.warn(f'Could not find a difference between fact '
    "The model predicted 'setosa' instead of 'versicolor' because ''"

    I get the same issue with every single other sample, and even every other dataset I try. What could be the issue?

    Versions: Windows 10, Python 3.7.4, Scikit-Learn 0.21.3, NumPy 1.18.2

    bug 
    opened by mlds2020 2
  • Create DOI on Zenodo


    I wanted to ask if you have considered creating a DOI for Contrastive Explanation. A DOI allows researchers to reference a specific version of the package with ease, and can be created with Zenodo, for example. There is also great integration between GitHub and Zenodo, which creates a new DOI and a persistent copy of the repository for each release. You can find instructions on how to create a DOI in the official GitHub docs: https://docs.github.com/en/repositories/archiving-a-github-repository/referencing-and-citing-content This step improves the visibility of the repository and therefore makes your research software more widely used.

    On a similar note, it also helps researchers to know how to correctly cite the software. I see that you already added this in the README, but GitHub also offers the Citation File Format, which shows up at the top right of the repository if used. You can find more information about it here: https://citation-file-format.github.io/

    I would be happy to help with this if you have any questions.

    opened by kequach 1
  • Trying to understand output


    Hi

    I have been trying your approach for a regression problem with categorical features. I receive an explanation in the form:

    The model predicted 123 because sales < 1445 and not month (dummy example)

    month is a categorical variable with values "1", "2", .. "12".

    What does "and not month" mean, then?

    Thank you

    opened by andreysharapov 0
  • Explanations of Clustering algorithms


    Hi, I was wondering if the Clustering algorithm is supported or not. I see you mentioned it in the README but looking at the code, I can't find it anywhere. Thanks :)

    opened by Naviden 0
  • Specify Features


    Hi, thank you for the great package. I would like to know: is there a way to specify which features may be changed? For example, I would like to see what I need to change for specific features only.

    Thank you

    enhancement 
    opened by arsine1996 0