awesome-graph-explainability-papers

Papers about explainability of GNNs
Most Influential (ranked by CogDL)

  1. Explainability in graph neural networks: A taxonomic survey. Yuan Hao, Yu Haiyang, Gui Shurui, Ji Shuiwang. arXiv 2020. paper
  2. GNNExplainer: Generating explanations for graph neural networks. Ying Rex, Bourgeois Dylan, You Jiaxuan, Zitnik Marinka, Leskovec Jure. NeurIPS 2019. paper code
  3. Explainability methods for graph convolutional neural networks. Pope Phillip E, Kolouri Soheil, Rostami Mohammad, Martin Charles E, Hoffmann Heiko. CVPR 2019. paper
  4. Parameterized Explainer for Graph Neural Network. Luo Dongsheng, Cheng Wei, Xu Dongkuan, Yu Wenchao, Zong Bo, Chen Haifeng, Zhang Xiang. NeurIPS 2020. paper code
  5. XGNN: Towards model-level explanations of graph neural networks. Yuan Hao, Tang Jiliang, Hu Xia, Ji Shuiwang. KDD 2020. paper
  6. Evaluating Attribution for Graph Neural Networks. Sanchez-Lengeling Benjamin, Wei Jennifer, Lee Brian, Reif Emily, Wang Peter, Qian Wesley, McCloskey Kevin, Colwell Lucy, Wiltschko Alexander. NeurIPS 2020. paper
  7. PGM-Explainer: Probabilistic Graphical Model Explanations for Graph Neural Networks. Vu Minh, Thai My T. NeurIPS 2020. paper
  8. Explanation-based Weakly-supervised Learning of Visual Relations with Graph Networks. Federico Baldassarre, Kevin Smith, Josephine Sullivan, Hossein Azizpour. ECCV 2020. paper
  9. GCAN: Graph-aware Co-Attention Networks for Explainable Fake News Detection on Social Media. Lu Yi-Ju, Li Cheng-Te. ACL 2020. paper
  10. On Explainability of Graph Neural Networks via Subgraph Explorations. Yuan Hao, Yu Haiyang, Wang Jie, Li Kang, Ji Shuiwang. ICML 2021. paper
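Several of the entries above (GNNExplainer, PGM-Explainer, the counterfactual methods) build on a shared perturbation idea: occlude parts of the input graph and observe how the model's output changes. A minimal, library-free sketch of that idea, using a hypothetical toy "model" that simply counts triangles (real methods instead learn soft masks over a trained GNN):

```python
# Occlusion-style edge attribution: a toy sketch of the perturbation idea
# behind explainers such as GNNExplainer. The "model" here is hypothetical.

from itertools import combinations

def triangle_count(edges):
    """Stand-in 'model': scores a graph by its number of triangles."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    nodes = sorted(adj)
    return sum(
        1
        for a, b, c in combinations(nodes, 3)
        if b in adj[a] and c in adj[a] and c in adj[b]
    )

def edge_importance(edges, score=triangle_count):
    """Importance of each edge = score drop when that edge is occluded."""
    base = score(edges)
    return {e: base - score([f for f in edges if f != e]) for e in edges}

edges = [(0, 1), (1, 2), (0, 2), (2, 3)]  # one triangle plus a pendant edge
imp = edge_importance(edges)
# The three triangle edges get importance 1; the pendant edge (2, 3) gets 0.
```

Learned-mask explainers replace this one-edge-at-a-time loop with a differentiable mask optimized over all edges jointly, which scales far better on real GNNs.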

Recent SOTA

  1. Quantifying Explainers of Graph Neural Networks in Computational Pathology. Jaume Guillaume, Pati Pushpak, Bozorgtabar Behzad, Foncubierta Antonio, Anniciello Anna Maria, Feroce Florinda, Rau Tilman, Thiran Jean-Philippe, Gabrani Maria, Goksel Orcun. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2021. paper
  2. Counterfactual Supporting Facts Extraction for Explainable Medical Record Based Diagnosis with Graph Network. Wu Haoran, Chen Wei, Xu Shuang, Xu Bo. NAACL 2021. paper
  3. When Comparing to Ground Truth is Wrong: On Evaluating GNN Explanation Methods. Faber Lukas, K. Moghaddam Amin, Wattenhofer Roger. KDD 2021. paper
  4. Counterfactual Graphs for Explainable Classification of Brain Networks. Abrate Carlo, Bonchi Francesco. Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining (KDD) 2021. paper
  5. Explainable Subgraph Reasoning for Forecasting on Temporal Knowledge Graphs. Zhen Han, Peng Chen, Yunpu Ma, Volker Tresp. International Conference on Learning Representations (ICLR) 2021. paper
  6. Generative Causal Explanations for Graph Neural Networks. Lin Wanyu, Lan Hao, Li Baochun. Proceedings of the 38th International Conference on Machine Learning (ICML) 2021. paper
  7. Improving Molecular Graph Neural Network Explainability with Orthonormalization and Induced Sparsity. Henderson Ryan, Clevert Djork-Arné, Montanari Floriane. Proceedings of the 38th International Conference on Machine Learning (ICML) 2021. paper
  8. Explainable Automated Graph Representation Learning with Hyperparameter Importance. Wang Xin, Fan Shuyi, Kuang Kun, Zhu Wenwu. Proceedings of the 38th International Conference on Machine Learning (ICML) 2021. paper
  9. Higher-order explanations of graph neural networks via relevant walks. Schnake Thomas, Eberle Oliver, Lederer Jonas, Nakajima Shinichi, Schütt Kristof T, Müller Klaus-Robert, Montavon Grégoire. arXiv preprint arXiv:2006.03589, 2020. paper
  10. HENIN: Learning Heterogeneous Neural Interaction Networks for Explainable Cyberbullying Detection on Social Media. Chen Hsin-Yu, Li Cheng-Te. EMNLP 2020. paper

Year 2022

  1. [AAAI22] ProtGNN: Towards Self-Explaining Graph Neural Networks [paper]

Year 2021

  1. [Arxiv 21] Combining Sub-Symbolic and Symbolic Methods for Explainability [paper]
  2. [PAKDD 21] SCARLET: Explainable Attention based Graph Neural Network for Fake News spreader prediction [paper]
  3. [J. Chem. Inf. Model] Coloring Molecules with Explainable Artificial Intelligence for Preclinical Relevance Assessment [paper]
  4. [BioRxiv 21] APRILE: Exploring the Molecular Mechanisms of Drug Side Effects with Explainable Graph Neural Networks [paper]
  5. [ISM 21] Edge-Level Explanations for Graph Neural Networks by Extending Explainability Methods for Convolutional Neural Networks [paper]
  6. [TPAMI 21] Higher-Order Explanations of Graph Neural Networks via Relevant Walks [paper]
  7. [OpenReview 21] FlowX: Towards Explainable Graph Neural Networks via Message Flows [paper]
  8. [OpenReview 21] Task-Agnostic Graph Neural Explanations [paper]
  9. [OpenReview 21] Deconfounding to Explanation Evaluation in Graph Neural Networks [paper]
  10. [OpenReview 21] DEGREE: Decomposition Based Explanation for Graph Neural Networks [paper]
  11. [OpenReview 21] Discovering Invariant Rationales for Graph Neural Networks [paper]
  12. [OpenReview 21] Interpreting Graph Neural Networks via Unrevealed Causal Learning [paper]
  13. [OpenReview 21] Explainable GNN-Based Models over Knowledge Graphs [paper]
  14. [NeurIPS 2021] Reinforcement Learning Enhanced Explainer for Graph Neural Networks [paper]
  15. [NeurIPS 2021] Towards Multi-Grained Explainability for Graph Neural Networks [paper]
  16. [NeurIPS 2021] Robust Counterfactual Explanations on Graph Neural Networks [paper]
  17. [CVPR 2021] Quantifying Explainers of Graph Neural Networks in Computational Pathology [paper]
  18. [NAACL 2021] Counterfactual Supporting Facts Extraction for Explainable Medical Record Based Diagnosis with Graph Network. [paper]
  19. [Arxiv 21] A Meta-Learning Approach for Training Explainable Graph Neural Network [paper]
  20. [Arxiv 21] Jointly Attacking Graph Neural Network and its Explanations [paper]
  21. [Arxiv 21] Towards a Rigorous Theoretical Analysis and Evaluation of GNN Explanations [paper]
  22. [Arxiv 21] SEEN: Sharpening Explanations for Graph Neural Networks using Explanations from Neighborhoods [paper]
  23. [Arxiv 21] Zorro: Valid, Sparse, and Stable Explanations in Graph Neural Networks [paper]
  24. [Arxiv 21] Preserve, Promote, or Attack? GNN Explanation via Topology Perturbation [paper]
  25. [Arxiv 21] Learnt Sparsification for Interpretable Graph Neural Networks [paper]
  26. [Arxiv 21] Efficient and Interpretable Robot Manipulation with Graph Neural Networks [paper]
  27. [Arxiv 21] IA-GCN: Interpretable Attention based Graph Convolutional Network for Disease prediction [paper]
  28. [ICML 2021] On Explainability of Graph Neural Networks via Subgraph Explorations [paper]
  29. [ICML 2021] Generative Causal Explanations for Graph Neural Networks [paper]
  30. [ICML 2021] Improving Molecular Graph Neural Network Explainability with Orthonormalization and Induced Sparsity [paper]
  31. [ICML 2021] Explainable Automated Graph Representation Learning with Hyperparameter Importance [paper]
  32. [ICML workshop 21] GCExplainer: Human-in-the-Loop Concept-based Explanations for Graph Neural Networks [paper]
  33. [ICML workshop 21] BrainNNExplainer: An Interpretable Graph Neural Network Framework for Brain Network based Disease Analysis [paper]
  34. [ICML workshop 21] Reliable Graph Neural Network Explanations Through Adversarial Training [paper]
  35. [ICML workshop 21] Reimagining GNN Explanations with ideas from Tabular Data [paper]
  36. [ICML workshop 21] Towards Automated Evaluation of Explanations in Graph Neural Networks [paper]
  37. [ICML workshop 21] Quantitative Evaluation of Explainable Graph Neural Networks for Molecular Property Prediction [paper]
  38. [ICML workshop 21] SALKG: Learning From Knowledge Graph Explanations for Commonsense Reasoning [paper]
  39. [ICLR 2021] Interpreting Graph Neural Networks for NLP With Differentiable Edge Masking [paper]
  40. [ICLR 2021] Graph Information Bottleneck for Subgraph Recognition [paper]
  41. [KDD 2021] When Comparing to Ground Truth is Wrong: On Evaluating GNN Explanation Methods [paper]
  42. [KDD 2021] Counterfactual Graphs for Explainable Classification of Brain Networks [paper]
  43. [AAAI 2021] Motif-Driven Contrastive Learning of Graph Representations [paper]
  44. [WWW 2021] Interpreting and Unifying Graph Neural Networks with An Optimization Framework [paper]
  45. [ICDM 2021] GNES: Learning to Explain Graph Neural Networks [paper]
  46. [ICDM 2021] GCN-SE: Attention as Explainability for Node Classification in Dynamic Graphs [paper]
  47. [ICDM 2021] Multi-objective Explanations of GNN Predictions
  48. [CIKM 2021] Towards Self-Explainable Graph Neural Network [paper]
  49. [ECML PKDD 2021] GraphSVX: Shapley Value Explanations for Graph Neural Networks [paper]
  50. [WiseML 2021] Explainability-based Backdoor Attacks Against Graph Neural Networks [paper]
  51. [IJCNN 21] MEG: Generating Molecular Counterfactual Explanations for Deep Graph Networks [paper]
  52. [KDD workshop 21] CF-GNNExplainer: Counterfactual Explanations for Graph Neural Networks [paper]
  53. [ICCSA 2021] Understanding Drug Abuse Social Network Using Weighted Graph Neural Networks Explainer [paper]
  54. [NeSy 21] A New Concept for Explaining Graph Neural Networks [paper]
  55. [Information Fusion 21] Towards multi-modal causability with Graph Neural Networks enabling information fusion for explainable AI [paper]
  56. [Patterns 21] hcga: Highly Comparative Graph Analysis for network phenotyping [paper]

Year 2020

  1. [NeurIPS 2020] Parameterized Explainer for Graph Neural Network [paper]
  2. [NeurIPS 2020] PGM-Explainer: Probabilistic Graphical Model Explanations for Graph Neural Networks [paper]
  3. [KDD 2020] XGNN: Towards Model-Level Explanations of Graph Neural Networks [paper]
  4. [ACL 2020] GCAN: Graph-aware Co-Attention Networks for Explainable Fake News Detection on Social Media [paper]
  5. [ICML workshop 2020] Contrastive Graph Neural Network Explanation [paper]
  6. [ICML workshop 2020] Towards Explainable Graph Representations in Digital Pathology [paper]
  7. [NeurIPS workshop 2020] Explaining Deep Graph Networks with Molecular Counterfactuals [paper]
  8. [[email protected] 2020] Exploring Graph-Based Neural Networks for Automatic Brain Tumor Segmentation [paper]
  9. [Arxiv 2020] Graph Neural Networks Including Sparse Interpretability [paper]
  10. [OpenReview 20] A Framework For Differentiable Discovery Of Graph Algorithms [paper]
  11. [OpenReview 20] Causal Screening to Interpret Graph Neural Networks [paper]
  12. [Arxiv 20] xFraud: Explainable Fraud Transaction Detection on Heterogeneous Graphs [paper]
  13. [Arxiv 20] Explaining decisions of Graph Convolutional Neural Networks: patient-specific molecular subnetworks responsible for metastasis prediction in breast cancer [paper]
  14. [Arxiv 20] Understanding Graph Neural Networks from Graph Signal Denoising Perspectives [paper]
  15. [Arxiv 20] Understanding the Message Passing in Graph Neural Networks via Power Iteration [paper]
  16. [Arxiv 20] xERTE: Explainable Reasoning on Temporal Knowledge Graphs for Forecasting Future Links [paper]
  17. [IJCNN 20] GCN-LRP explanation: exploring latent attention of graph convolutional networks [paper]
Owner

Dongsheng Luo, Ph.D. Student @ PSU