CaGCN

This repo contains the source code of the NeurIPS 2021 paper "Be Confident! Towards Trustworthy Graph Neural Networks via Confidence Calibration".

Paper Link: https://arxiv.org/abs/2109.14285

Environment

  • python == 3.8.8
  • pytorch == 1.8.1
  • dgl-cuda11.1 == 0.6.1
  • networkx == 2.5
  • numpy == 1.20.2

GPU: GeForce RTX 2080 Ti

CPU: Intel(R) Xeon(R) Silver 4210 CPU @ 2.20GHz

Confidence Calibration

CaGCN

python CaGCN.py --model GCN --hidden 64 --dataset dataset --labelrate labelrate --stage 1 --lr_for_cal 0.01 --l2_for_cal 5e-3
python CaGCN.py --model GAT --hidden 8 --dataset dataset --labelrate labelrate --dropout 0.6 --lr 0.005 --stage 1 --lr_for_cal 0.01 --l2_for_cal 5e-3
  • dataset: one of [Cora, Citeseer, Pubmed]; required.
  • labelrate: one of [20, 40, 60]; required.

e.g.,

python CaGCN.py --model GCN --hidden 64 --dataset Cora --labelrate 20 --stage 1 --lr_for_cal 0.01 --l2_for_cal 5e-3
python CaGCN.py --model GAT --hidden 8 --dataset Cora --labelrate 20 --dropout 0.6 --lr 0.005 --stage 1 --lr_for_cal 0.01 --l2_for_cal 5e-3

For CoraFull,

python CaGCN.py --model GCN --hidden 64 --dataset CoraFull --labelrate labelrate --stage 1 --lr_for_cal 0.01 --l2_for_cal 0.03
python CaGCN.py --model GAT --hidden 8 --dataset CoraFull --labelrate labelrate --dropout 0.6 --lr 0.005 --stage 1 --lr_for_cal 0.01 --l2_for_cal 0.03
  • labelrate: one of [20, 40, 60]; required.
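
For intuition, the sketch below shows the idea behind CaGCN under stated assumptions (this is not the repo's exact code): a graph convolution maps the base model's logits to one positive scaling factor per node, which plays the role of a node-wise inverse temperature. The one-layer module and its names are illustrative; the actual model stacks full GCN layers.

import torch
import torch.nn.functional as F

class NodewiseScaling(torch.nn.Module):
    # Illustrative one-layer stand-in for the calibration GCN.
    def __init__(self, num_classes):
        super().__init__()
        self.lin = torch.nn.Linear(num_classes, 1)

    def forward(self, logits, adj_norm):
        # adj_norm: normalized adjacency matrix of the graph, shape [N, N]
        h = adj_norm @ self.lin(logits)  # propagate over neighboring nodes
        t = F.softplus(h)                # one positive factor per node
        # Scaling logits by t is equivalent to dividing by a node-wise
        # temperature 1 / t, as in classic temperature scaling.
        return logits * t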

Uncalibrated Model

python train_others.py --model GCN --hidden 64 --dataset dataset --labelrate labelrate --stage 1 
python train_others.py --model GAT --hidden 8 --dataset dataset --labelrate labelrate --stage 1 --dropout 0.6 --lr 0.005
  • dataset: one of [Cora, Citeseer, Pubmed, CoraFull]; required.
  • labelrate: one of [20, 40, 60]; required.

e.g.,

python train_others.py --model GCN --hidden 64 --dataset Cora --labelrate 20 --stage 1
python train_others.py --model GAT --hidden 8 --dataset Cora --labelrate 20 --stage 1 --dropout 0.6 --lr 0.005
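
Calibration quality of these uncalibrated baselines is typically measured with the expected calibration error (ECE), which the paper reports. A self-contained sketch (the number of bins here is an assumption, not necessarily the repo's setting):

import torch

def expected_calibration_error(probs, labels, n_bins=20):
    # probs: [N, C] softmax outputs; labels: [N] ground-truth classes
    conf, pred = probs.max(dim=1)
    acc = pred.eq(labels).float()
    edges = torch.linspace(0, 1, n_bins + 1)
    ece = torch.zeros(())
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            # |accuracy - confidence| in the bin, weighted by bin size
            ece += mask.float().mean() * (acc[mask].mean() - conf[mask].mean()).abs()
    return ece.item()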

Temperature Scaling & Matrix Scaling

python train_others.py --model GCN --scaling_method method --hidden 64 --dataset dataset --labelrate labelrate --stage 1 --lr_for_cal 0.01 --max_iter 50
python train_others.py --model GAT --scaling_method method --hidden 8 --dataset dataset --labelrate labelrate --dropout 0.6 --lr 0.005 --stage 1 --lr_for_cal 0.01 --max_iter 50
  • method: TS (temperature scaling) or MS (matrix scaling); required.
  • dataset: one of [Cora, Citeseer, Pubmed, CoraFull]; required.
  • labelrate: one of [20, 40, 60]; required.

e.g.,

python train_others.py --model GCN --scaling_method TS --hidden 64 --dataset Cora --labelrate 20 --stage 1 --lr_for_cal 0.01 --max_iter 50
python train_others.py --model GAT --scaling_method TS --hidden 8 --dataset Cora --labelrate 20 --dropout 0.6 --lr 0.005 --stage 1 --lr_for_cal 0.01 --max_iter 50
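
Classic temperature scaling (Guo et al., 2017) fits one scalar temperature T on the validation negative log-likelihood; --lr_for_cal and --max_iter are the corresponding optimizer settings. A minimal sketch, assuming an LBFGS fit (the function name is illustrative, not the repo's API):

import torch
import torch.nn.functional as F

def fit_temperature(val_logits, val_labels, lr=0.01, max_iter=50):
    log_t = torch.zeros(1, requires_grad=True)  # optimize log T so that T > 0
    opt = torch.optim.LBFGS([log_t], lr=lr, max_iter=max_iter)

    def closure():
        opt.zero_grad()
        # NLL of the validation set under temperature-scaled logits
        loss = F.cross_entropy(val_logits / log_t.exp(), val_labels)
        loss.backward()
        return loss

    opt.step(closure)
    return log_t.exp().item()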

Self-Training

Here, L/C denotes the number of labeled nodes per class (the label rate). A sketch of the pseudo-labeling step appears after the command lists below.

GCN L/C=20

python CaGCN.py --model GCN --hidden 64 --dataset Cora --labelrate 20 --stage 4 --lr_for_cal 0.001 --l2_for_cal 5e-3 --epoch_for_st 200 --threshold 0.8
python CaGCN.py --model GCN --hidden 64 --dataset Citeseer --labelrate 20 --stage 5 --lr_for_cal 0.001 --l2_for_cal 5e-3 --epoch_for_st 150 --threshold 0.9
python CaGCN.py --model GCN --hidden 64 --dataset Pubmed --labelrate 20 --stage 6 --lr_for_cal 0.001 --l2_for_cal 5e-3 --epoch_for_st 100 --threshold 0.8
python CaGCN.py --model GCN --hidden 64 --dataset CoraFull --labelrate 20 --stage 4 --lr_for_cal 0.001 --l2_for_cal 0.03 --epoch_for_st 500 --threshold 0.85

GCN L/C=40

python CaGCN.py --model GCN --hidden 64 --dataset Cora --labelrate 40 --stage 2 --lr_for_cal 0.001 --l2_for_cal 5e-3 --epoch_for_st 200 --threshold 0.8
python CaGCN.py --model GCN --hidden 64 --dataset Citeseer --labelrate 40 --stage 2 --lr_for_cal 0.001 --l2_for_cal 5e-3 --epoch_for_st 150 --threshold 0.85
python CaGCN.py --model GCN --hidden 64 --dataset Pubmed --labelrate 40 --stage 4 --lr_for_cal 0.001 --l2_for_cal 5e-3 --epoch_for_st 100 --threshold 0.8
python CaGCN.py --model GCN --hidden 64 --dataset CoraFull --labelrate 40 --stage 4 --lr_for_cal 0.001 --l2_for_cal 0.03 --epoch_for_st 500 --threshold 0.99

GCN L/C=60

python CaGCN.py --model GCN --hidden 64 --dataset Cora --labelrate 60 --stage 4 --lr_for_cal 0.001 --l2_for_cal 5e-3 --epoch_for_st 200 --threshold 0.8
python CaGCN.py --model GCN --hidden 64 --dataset Citeseer --labelrate 60 --stage 2 --lr_for_cal 0.001 --l2_for_cal 5e-3 --epoch_for_st 150 --threshold 0.8
python CaGCN.py --model GCN --hidden 64 --dataset Pubmed --labelrate 60 --stage 3 --lr_for_cal 0.001 --l2_for_cal 5e-3 --epoch_for_st 100 --threshold 0.6
python CaGCN.py --model GCN --hidden 64 --dataset CoraFull --labelrate 60 --stage 5 --lr_for_cal 0.001 --l2_for_cal 0.03 --epoch_for_st 500 --threshold 0.9

GAT L/C=20

python CaGCN.py --model GAT --hidden 8 --dataset Cora --labelrate 20 --dropout 0.6 --lr 0.005 --stage 6 --lr_for_cal 0.001 --l2_for_cal 5e-3 --epoch_for_st 200 --threshold 0.8
python CaGCN.py --model GAT --hidden 8 --dataset Citeseer --labelrate 20 --dropout 0.6 --lr 0.005 --stage 3 --lr_for_cal 0.001 --l2_for_cal 5e-3 --epoch_for_st 150 --threshold 0.7
python CaGCN.py --model GAT --hidden 8 --dataset Pubmed --labelrate 20 --dropout 0.6 --lr 0.005 --weight_decay 1e-3 --stage 2 --lr_for_cal 0.001 --l2_for_cal 5e-3 --epoch_for_st 100 --threshold 0.8 
python CaGCN.py --model GAT --hidden 8 --dataset CoraFull --labelrate 20 --dropout 0.6 --lr 0.005 --stage 5 --lr_for_cal 0.001 --l2_for_cal 0.03 --epoch_for_st 500 --threshold 0.95

GAT L/C=40

python CaGCN.py --model GAT --hidden 8 --dataset Cora --labelrate 40 --dropout 0.6 --lr 0.005 --stage 4 --lr_for_cal 0.001 --l2_for_cal 5e-3 --epoch_for_st 200 --threshold 0.9
python CaGCN.py --model GAT --hidden 8 --dataset Citeseer --labelrate 40 --dropout 0.6 --lr 0.005 --stage 2 --lr_for_cal 0.001 --l2_for_cal 5e-3 --epoch_for_st 150 --threshold 0.8
python CaGCN.py --model GAT --hidden 8 --dataset Pubmed --labelrate 40 --dropout 0.6 --lr 0.005 --weight_decay 1e-3 --stage 2 --lr_for_cal 0.001 --l2_for_cal 5e-3 --epoch_for_st 100 --threshold 0.8 
python CaGCN.py --model GAT --hidden 8 --dataset CoraFull --labelrate 40 --dropout 0.6 --lr 0.005 --stage 2 --lr_for_cal 0.001 --l2_for_cal 0.03 --epoch_for_st 500 --threshold 0.95

GAT L/C=60

python CaGCN.py --model GAT --hidden 8 --dataset Cora --labelrate 60 --dropout 0.6 --lr 0.005 --stage 2 --lr_for_cal 0.001 --l2_for_cal 5e-3 --epoch_for_st 200 --threshold 0.8
python CaGCN.py --model GAT --hidden 8 --dataset Citeseer --labelrate 60 --dropout 0.6 --lr 0.005 --stage 6 --lr_for_cal 0.001 --l2_for_cal 5e-3 --epoch_for_st 150 --threshold 0.8
python CaGCN.py --model GAT --hidden 8 --dataset Pubmed --labelrate 60 --dropout 0.6 --lr 0.005 --weight_decay 1e-3 --stage 3 --lr_for_cal 0.001 --l2_for_cal 5e-3 --epoch_for_st 100 --threshold 0.85 
python CaGCN.py --model GAT --hidden 8 --dataset CoraFull --labelrate 60 --dropout 0.6 --lr 0.005 --stage 2 --lr_for_cal 0.001 --l2_for_cal 0.03 --epoch_for_st 500 --threshold 0.95
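
All of the runs above follow the same loop: train the base model, calibrate its confidence with CaGCN, add unlabeled nodes whose calibrated confidence exceeds --threshold as pseudo-labeled training nodes, and retrain for --epoch_for_st epochs; --stage presumably controls how many such rounds are run. A sketch of the selection step (names are illustrative):

import torch

def select_pseudo_labels(calibrated_probs, train_mask, threshold):
    # calibrated_probs: [N, C] CaGCN-calibrated class probabilities
    conf, pred = calibrated_probs.max(dim=1)
    new_mask = (~train_mask) & (conf > threshold)  # confident unlabeled nodes
    return new_mask, pred  # add (new_mask, pred[new_mask]) to the training set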

More Parameters

For more parameters of the baselines, please refer to Parameter.md.

Contact

If you have any questions, please feel free to contact me at [email protected].
