Code to reproduce experiments in the paper "Explainability Requires Interactivity".

Overview

Explainability Requires Interactivity

This repository contains the code to train all custom models used in the paper Explainability Requires Interactivity, as well as to create all static explanations (heat maps and generative explanations). For our interactive framework, see the sister repository.

Precomputed generative explanations are located at static_generative_explanations.

Requirements

Install the conda environment via conda env create -f env.yml (depending on your system you might need to change some versions, e.g. for pytorch, cudatoolkit and pytorch-lightning).

For some parts you will need the FairFace model, which can be downloaded from the authors' repo. You will only need the res34_fair_align_multi_7_20190809.pt file.
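As a minimal loading sketch, assuming the checkpoint follows the FairFace authors' predict.py (a state_dict for a torchvision ResNet-34 with 18 outputs: 7 race, 2 gender, and 9 age bins):

import torch
import torch.nn as nn
import torchvision

fairface = torchvision.models.resnet34(pretrained=False)
fairface.fc = nn.Linear(fairface.fc.in_features, 18)   # 7 race + 2 gender + 9 age outputs
state = torch.load("res34_fair_align_multi_7_20190809.pt", map_location="cpu")
fairface.load_state_dict(state)
fairface.eval()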

Training classification networks

CelebA dataset

You first need to download and decompress the CelebAMask-HQ dataset. Then run the training with

python train.py --dset celeb --dset_path /PATH/TO/CelebAMask-HQ/ --classes_or_attr Smiling --target_path /PATH/TO/OUTPUT

/PATH/TO/CelebAMask-HQ/ should contain a CelebAMask-HQ-attribute-anno.txt file and a CelebA-HQ-img directory. Any of the columns in CelebAMask-HQ-attribute-anno.txt can be used as the attribute; in the paper we used Heavy_Makeup, Male, Smiling, and Young.
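If you want to sanity-check the annotations before training, a minimal sketch, assuming the file follows the standard CelebA attribute layout (first line: image count, second line: attribute names, then one -1/1 row per image):

import pandas as pd

attr = pd.read_csv(
    "/PATH/TO/CelebAMask-HQ/CelebAMask-HQ-attribute-anno.txt",
    sep=r"\s+", skiprows=1, index_col=0,
)
smiling = (attr["Smiling"] == 1).astype(int)   # binary target for --classes_or_attr Smiling
print(smiling.value_counts())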

Flowers102 dataset

You first need to download and decompress the Flowers102 data. Then run the training with

python train.py --dset flowers102 --dset_path /PATH/TO/FLOWERS102/ --classes_or_attr 49-65 --target_path /PATH/TO/OUTPUT/

/PATH/TO/FLOWERS102/ should contain an imagelabels.mat file and an images directory. Classes 49 and 65 correspond to the "Oxeye daisy" and "California poppy", while 63 and 54 correspond to "Black-eyed Susan" and "Sunflower" as in the paper.
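To sanity-check the class choice, a quick look at the labels, assuming the standard imagelabels.mat layout (1-indexed class labels for the 8189 images):

import numpy as np
from scipy.io import loadmat

labels = loadmat("/PATH/TO/FLOWERS102/imagelabels.mat")["labels"].squeeze()  # shape (8189,), 1-indexed
for cls in (49, 65):   # the pair passed as --classes_or_attr 49-65
    print(f"class {cls}: {int(np.sum(labels == cls))} images")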

Generating heatmap explanations

Heatmap explanations are generated using the Captum library. After training, run explanations via

python static_exp.py --model_path /PATH/TO/MODEL.pt --img_path /PATH/TO/IMGS/ --model_name celeb --fig_dir /PATH/TO/OUTPUT/

/PATH/TO/IMGS/ should contain (only) image files; the flag can be omitted to run on the default images exported by train.py. To run on FairFace, choose --model_name fairface and add --attr age or --attr gender. Other explanation methods can easily be added by modifying the explain_all function in static_exp.py. Explanations are saved to fig_dir. This has only been tested on the networks trained on the facial image data in the previous step, but any resnet18 with a scalar output layer should work just as well.
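For orientation, the Captum call behind such heat maps looks roughly like the sketch below (Integrated Gradients shown as one example method; the model path and the assumption that the checkpoint stores a full nn.Module are placeholders, and static_exp.py handles the real loading and preprocessing):

import torch
from captum.attr import IntegratedGradients

model = torch.load("/PATH/TO/MODEL.pt", map_location="cpu").eval()  # assumes a pickled nn.Module
forward = lambda x: model(x).squeeze(-1)         # Captum expects one scalar per example
ig = IntegratedGradients(forward)

img = torch.rand(1, 3, 224, 224)                 # stand-in for a preprocessed input image
attributions = ig.attribute(img, baselines=torch.zeros_like(img), n_steps=50)
heatmap = attributions.abs().sum(dim=1)          # aggregate channels before plotting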

Generating generative explanations

First, clone the original NVIDIA StyleGAN2-ada-pytorch repo and make sure everything works as expected (e.g. run the getting-started code). If the code gets stuck at loading TODO, pressing ctrl-C usually lets the model fall back to a smaller reference implementation, which is good enough for our use case. Next, add the repo to your PYTHONPATH (e.g. via export PYTHONPATH=$PYTHONPATH:/PATH/TO/stylegan2-ada-pytorch/). To generate explanations, you need to 0) train an image model (see above, or use the FairFace model); 1) create a dataset of latent codes and labels; 2) train a latent-space logistic regression model; and 3) create the explanations. As each of these steps can be very slow, we split them into separate phases.
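As a quick smoke test that the repo is on your PYTHONPATH and the generator pickle loads, you can adapt the repo's getting-started snippet (the .pkl path is a placeholder):

import pickle
import torch

with open("/PATH/TO/STYLEGAN2.pkl", "rb") as f:
    G = pickle.load(f)["G_ema"].cuda()     # unpickling needs stylegan2-ada-pytorch on PYTHONPATH

z = torch.randn([1, G.z_dim]).cuda()       # a random latent code
img = G(z, None)                           # NCHW image, values roughly in [-1, 1]
print(img.shape)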

Create labeled latent dataset

First, make sure to train at least one image model as in the first step and/or download the FairFace model. Then run

python generative_exp.py --phase 1 --attrs Smiling,ff-skin-color --base_dir /PATH/TO/BASE/ --generator_path /PATH/TO/STYLEGAN2.pkl --n_train 20000 --n_valid 5000

The base_dir is the directory where all files and sub-directories are stored; it should be the same as the target_path from train.py (e.g., just .). For --attrs Smiling,ff-skin-color, for example, it should contain the celeb-Smiling directory and the res34_fair_align_multi_7_20190809.pt file.
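Conceptually, phase 1 samples latent codes, decodes them with the generator, and labels the resulting images with the trained classifier(s); a rough sketch under those assumptions (generative_exp.py contains the actual implementation, including the FairFace-based attributes):

import torch

@torch.no_grad()
def label_latents(G, classifier, n, batch=32, device="cuda"):
    zs, ys = [], []
    for _ in range(0, n, batch):
        z = torch.randn(batch, G.z_dim, device=device)
        imgs = G(z, None)                                       # NCHW, roughly in [-1, 1]
        imgs = torch.nn.functional.interpolate(imgs, size=224)  # assumed classifier input size
        ys.append(classifier((imgs + 1) / 2).squeeze(-1).cpu()) # assumed [0, 1] input range
        zs.append(z.cpu())
    return torch.cat(zs)[:n], torch.cat(ys)[:n]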

Train latent space model

After the first step, run

python generative_exp.py --phase 2 --attrs Smiling,ff-skin-color --base_dir /PATH/TO/BASE/ --epochs 50

with the same base_dir and attrs as in phase 1.
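Phase 2 fits, for each attribute, a logistic regression model on the latent codes and labels from phase 1 (the repo trains its own model, hence the --epochs flag); the idea in a few lines, shown here with scikit-learn and placeholder array names:

import numpy as np
from sklearn.linear_model import LogisticRegression

z_train = np.load("latents_train.npy")           # placeholder file names
y_train = np.load("labels_train.npy") > 0.5      # binarize the classifier scores

clf = LogisticRegression(max_iter=1000).fit(z_train, y_train)
direction = clf.coef_[0] / np.linalg.norm(clf.coef_[0])   # latent-space direction for the attribute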

Create generative explanations

Finally, you can generate generative explanations via

python generative_exp.py --phase 3 --base_dir /PATH/TO/BASE/ --eval_attr Smiling --generator_path /PATH/TO/STYLEGAN2.pkl --attrs Smiling,ff-skin-color --reconstruction_steps 1000 --ampl 0.09 --input_img_dir /PATH/TO/IMAGES/ --output_dir /PATH/TO/OUTPUT/

Here, eval_attr is the class of the final evaluation model that you want to explain; attrs are the same as before and specify the directions in latent space; input_img_dir is a directory containing (only) the image files to be explained. Explanations are saved to output_dir.
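Under the hood, phase 3 first reconstructs each input image's latent code (the --reconstruction_steps optimization) and then moves that code along the latent direction from phase 2, scaled by --ampl, decoding a counterfactual at each offset; a conceptual sketch with placeholder names:

import torch

@torch.no_grad()
def latent_walk(G, z_reconstructed, direction, ampl=0.09, steps=5):
    """Decode images at increasing offsets along an attribute direction."""
    frames = []
    for k in range(-steps, steps + 1):
        z = z_reconstructed + k * ampl * direction
        frames.append(G(z.unsqueeze(0), None))   # counterfactual image for offset k * ampl
    return frames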
