SelfGNN

Overview

A PyTorch implementation of the "SelfGNN: Self-supervised Graph Neural Networks without explicit negative sampling" paper, which will appear in The International Workshop on Self-Supervised Learning for the Web (SSL'21) @ the Web Conference 2021 (WWW'21).

Note

This is ongoing work, and the repository is subject to continuous updates.

Requirements

  • Python 3.6+
  • PyTorch 1.6+
  • PyTorch Geometric 1.6+
  • Numpy 1.17.2+
  • Networkx 2.3+
  • SciPy 1.5.4+
  • (Optional) OPTUNA 2.8.0+, if you wish to tune the hyper-parameters of SelfGNN for any dataset

Example usage

$ python src/train.py

💥 Updates

Update 3

Added a hyper-parameter tuning utility using OPTUNA.

Usage:

$ python src/tune.py
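
For reference, the kind of search src/tune.py performs can be sketched with a few lines of OPTUNA code. The search space and the train_selfgnn helper below are illustrative assumptions, not the actual contents of src/tune.py:

import optuna

def train_selfgnn(lr, dropout, aug):
    # Stand-in for the real training loop behind src/train.py; it should
    # train SelfGNN with the given settings and return a validation score.
    # A dummy value is returned here so the sketch runs end-to-end.
    return 0.0

def objective(trial):
    # Sample a few of SelfGNN's hyper-parameters (hypothetical search space).
    lr = trial.suggest_float("lr", 1e-5, 1e-2, log=True)
    dropout = trial.suggest_float("dropout", 0.0, 0.5)
    aug = trial.suggest_categorical(
        "aug", ["ppr", "heat", "katz", "split", "zscore", "ldp", "paste"])
    return train_selfgnn(lr=lr, dropout=dropout, aug=aug)

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=100)
print(study.best_params)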

Update 2

Contrary to what we claimed in the paper, recent studies argue and empirically show that Batch Norm does not introduce implicit negative samples; instead, it mainly compensates for improper initialization. We have carried out new, similar experiments, shown in the table below, that seem to confirm this argument (BN: Batch Norm, LN: Layer Norm, -: no norm). For this experiment we use a GCN encoder and the split data augmentation. Although BN does not provide implicit negative samples, the empirical evaluation shows that it leads to better performance, and putting it in the encoder alone is almost sufficient. LN, on the other hand, is not consistent; furthermore, the model tends to prefer BN over LN in any of the modules. A short code sketch of these norm placements follows the table.

Module                          Dataset
Encoder  Projector  Predictor   Photo        Computer     Pubmed
BN       BN         BN          94.05±0.23   88.83±0.17   77.76±0.57
BN       BN         -           94.2±0.17    88.78±0.20   75.48±0.70
BN       -          BN          94.01±0.20   88.65±0.16   78.66±0.52
BN       -          -           93.9±0.18    88.82±0.16   78.53±0.47
LN       LN         LN          81.42±2.43   64.10±3.29   74.06±1.07
LN       LN         -           84.1±1.58    68.18±3.21   74.26±0.55
LN       -          LN          92.39±0.38   77.18±1.23   73.84±0.73
LN       -          -           91.93±0.40   73.90±1.16   74.11±0.73
-        BN         BN          90.01±0.09   77.83±0.12   79.21±0.27
-        BN         -           90.12±0.07   76.43±0.08   75.10±0.15
-        LN         LN          45.34±2.47   40.56±1.48   56.29±0.77
-        LN         -           52.92±3.37   40.23±1.46   60.76±0.81
-        -          BN          91.13±0.13   81.79±0.11   79.34±0.21
-        -          LN          50.64±2.84   47.62±2.27   64.18±1.08
-        -          -           50.35±2.73   43.68±1.80   63.91±0.92
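
To make the placements in the table concrete, the projection and prediction heads can be viewed as small MLPs whose norm layer is swapped between Batch Norm, Layer Norm, and no norm. The following is a minimal sketch of that idea, not the repository's actual module code:

import torch
import torch.nn as nn

def make_norm(kind, dim):
    # Map a cell from the table ("BN", "LN", "-") to a PyTorch layer.
    if kind == "batch":
        return nn.BatchNorm1d(dim)
    if kind == "layer":
        return nn.LayerNorm(dim)
    return nn.Identity()  # "-": no normalization

class Head(nn.Module):
    # A two-layer MLP head (projector or predictor) with a pluggable norm.
    def __init__(self, in_dim, out_dim, norm="batch"):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, out_dim),
            make_norm(norm, out_dim),
            nn.PReLU(),
            nn.Linear(out_dim, out_dim),
        )

    def forward(self, x):
        return self.net(x)

# E.g. a predictor with Batch Norm, as in the best-performing rows above.
predictor = Head(128, 128, norm="batch")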

Update 1

  • Both the paper and the source code have been updated following the discussion on this issue
  • An ablation study on the impact of BatchNorm was added following reviewers' feedback from SSL'21
    • The findings show that SelfGNN without batch normalization is not stable, and its performance often drops significantly
    • Layer Normalization behaves similarly to the no-BatchNorm setting

Possible options for training SelfGNN

The following options can be passed to src/train.py

--root: or -r: A path to a root directory to put all the datasets. Default is ./data

--name: or -n: The name of the dataset. Default is cora. Check the supported dataset names below.

--model: or -m: The type of GNN architecture to use. Currently three architectures are supported (gcn, gat, sage). Default is gcn.

--aug: or -a: The name of the data augmentation technique. Currently (ppr, heat, katz, split, zscore, ldp, paste) are supported. Default is split.

--layers: or -l: One or more integer values specifying the number of units for each GNN layer. Default is 512 128

--norms: or -nm: The normalization scheme for each module. Default is batch, i.e., Batch Norm will be used in the prediction head. Specifying two inputs, e.g. --norms batch layer, makes the model use Batch Norm in the GNN encoder and Layer Norm in the prediction head. Specifying three inputs, e.g. --norms no batch layer, activates the projection head and applies the normalizations as: no norm for the GNN encoder, Batch Norm for the projection head, and Layer Norm for the prediction head.

--heads: or -hd: One or more values specifying the number of heads for each GAT layer. Applicable for --model gat. Default is 8 1

--lr: or -lr: Learning rate, a value in [0, 1]. Default is 0.0001

--dropout: or -do: Dropout rate, a value in [0, 1]. Default is 0.2

--epochs: or -e: The number of epochs. Default is 1000.

--cache-step: or -cs: The step size for caching the model. That is, every --cache-step the model will be persisted. Default is 100.

--init-parts: or -ip: The number of initial partitions, for the improved version that uses clustering. Default is 1.

--final-parts: or -fp: The number of final partitions, for the improved version that uses clustering. Default is 1.
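
Putting several options together, a typical invocation might look like the following (the flag values are only illustrative):

$ python src/train.py --name photo --model gat --aug ppr --layers 512 128 --heads 8 1 --lr 0.0001 --dropout 0.2 --epochs 1000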

Supported dataset names

Name Nodes Edges Features Classes Description
Cora 2,708 5,278 1,433 7 Citation Network
Citeseer 3,327 4,552 3,703 6 Citation Network
Pubmed 19,717 44,324 500 3 Citation Network
Photo 7,487 119,043 745 8 Co-purchased products network
Computers 13,381 245,778 767 10 Co-purchased products network
CS 18,333 81,894 6,805 15 Collaboration network
Physics 34,493 247,962 8,415 5 Collaboration network

Any dataset from the PyTorch Geometric library can be used; however, SelfGNN is tested only on the datasets above.
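
For instance, the citation networks above ship with PyTorch Geometric's built-in Planetoid loader; below is a minimal sketch of fetching one of them into the default --root directory (how src/train.py wires datasets internally may differ):

from torch_geometric.datasets import Planetoid

# Download Cora into ./data, matching the default --root of src/train.py.
dataset = Planetoid(root="./data", name="Cora")
data = dataset[0]
print(data.num_nodes, data.num_edges, dataset.num_features, dataset.num_classes)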

Citing

If you find this research helpful, please cite it as

@misc{kefato2021selfsupervised,
      title={Self-supervised Graph Neural Networks without explicit negative sampling}, 
      author={Zekarias T. Kefato and Sarunas Girdzijauskas},
      year={2021},
      eprint={2103.14958},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}