Python implementation of the R package breakDown

Overview

pyBreakDown

Python implementation of the breakDown R package (https://github.com/pbiecek/breakDown).

Docs: https://pybreakdown.readthedocs.io.

Requirements

Nothing fancy, just Python 3.5.2+ and pip.

Installation

Install directly from GitHub:

    git clone https://github.com/bondyra/pyBreakDown
    cd ./pyBreakDown
    python3 setup.py install  # (or use pip install . instead)
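
Installing straight from GitHub with pip (without cloning first) should also work; this is a standard pip feature rather than something documented by the project:

    pip install git+https://github.com/bondyra/pyBreakDown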

Basic usage

Load dataset

from sklearn import datasets
x = datasets.load_boston()
data = x.data
feature_names = x.feature_names
y = x.target
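
Note that load_boston was removed in scikit-learn 1.2, so the snippet above only works with older versions. A minimal workaround sketch, assuming the OpenML copy of the Boston housing data:

# alternative for scikit-learn >= 1.2, where load_boston is no longer available
# (the OpenML "boston" dataset is assumed; some columns may be encoded slightly differently)
from sklearn.datasets import fetch_openml
x = fetch_openml(name="boston", version=1, as_frame=False)
data = x.data
feature_names = x.feature_names
y = x.target.astype(float)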

Prepare model

import numpy as np
from sklearn import tree
model = tree.DecisionTreeRegressor()

Train model

train_data = data[1:300, :]
train_labels = y[1:300]
model = model.fit(train_data, y=train_labels)
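
The remaining rows can serve as held-out test data. A quick sanity check of the fit, using the standard scikit-learn API (not part of the original walkthrough):

# evaluate the tree on rows that were not used for training
test_data = data[300:, :]
test_labels = y[300:]
print(model.score(test_data, test_labels))  # R^2 on the held-out rows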

Explain predictions on test data

# necessary imports
from pyBreakDown.explainer import Explainer
from pyBreakDown.explanation import Explanation
# make explainer object
exp = Explainer(clf=model, data=train_data, colnames=feature_names)
# make explanation object that contains all information
explanation = exp.explain(observation=data[302, :], direction="up")
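
The same explainer object can be reused for other observations. Assuming the direction argument mirrors the R breakDown package, a top-down ordering should also be available (a sketch, not taken from the project docs):

# reuse the explainer on another test row; "down" is the assumed top-down ordering
explanation_down = exp.explain(observation=data[305, :], direction="down")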

Text form of explanations

# get information in text form
explanation.text()
Feature                  Contribution        Cumulative          
Intercept = 1            29.1                29.1                
RM = 6.495               -1.98               27.12               
TAX = 329.0              -0.2                26.92               
B = 383.61               -0.12               26.79               
CHAS = 0.0               -0.07               26.72               
NOX = 0.433              -0.02               26.7                
RAD = 7.0                0.0                 26.7                
INDUS = 6.09             0.01                26.71               
DIS = 5.4917             -0.04               26.66               
ZN = 34.0                0.01                26.67               
PTRATIO = 16.1           0.04                26.71               
AGE = 18.4               0.06                26.77               
CRIM = 0.09266           1.33                28.11               
LSTAT = 8.67             4.6                 32.71               
Final prediction                             32.71               
Baseline = 0
# customized text form
explanation.text(fwidth=40, contwidth=40, cumulwidth=40, digits=4)
Feature                                 Contribution                            Cumulative                              
Intercept = 1                           29.1                                    29.1                                    
RM = 6.495                              -1.9826                                 27.1174                                 
TAX = 329.0                             -0.2                                    26.9174                                 
B = 383.61                              -0.1241                                 26.7933                                 
CHAS = 0.0                              -0.0686                                 26.7247                                 
NOX = 0.433                             -0.0241                                 26.7007                                 
RAD = 7.0                               0.0                                     26.7007                                 
INDUS = 6.09                            0.0074                                  26.708                                  
DIS = 5.4917                            -0.0438                                 26.6642                                 
ZN = 34.0                               0.0077                                  26.6719                                 
PTRATIO = 16.1                          0.0385                                  26.7104                                 
AGE = 18.4                              0.0619                                  26.7722                                 
CRIM = 0.09266                          1.3344                                  28.1067                                 
LSTAT = 8.67                            4.6037                                  32.7104                                 
Final prediction                                                                32.7104                                 
Baseline = 0

Visual form of explanations

explanation.visualize()

[plot]

# customize height, width and dpi of plot
explanation.visualize(figsize=(8, 5), dpi=100)

[plot]

# for baselines other than zero
explanation = exp.explain(observation=data[302, :], direction="up", useIntercept=True)  # baseline == intercept
explanation.visualize(figsize=(8, 5), dpi=100)

[plot]

Owner

MI^2 DataLab