Expressive Power of Invariant and Equivariant Graph Neural Networks (ICLR 2021)

Overview

In this repository, we show how to use a powerful GNN (2-FGNN) to solve a graph alignment problem. This code was used to derive the practical results in the following paper:

Waiss Azizian, Marc Lelarge. Expressive Power of Invariant and Equivariant Graph Neural Networks, ICLR 2021.

arXiv OpenReview

Problem: alignment of graphs

The graph isomorphism problem is the computational problem of determining whether two finite graphs are isomorphic. Here we consider a noisy version of this problem: the two graphs below are noisy versions of a parent graph. There is no strict isomorphism between them. Can we still match the vertices of graph 1 with the corresponding vertices of graph 2?

graph 1 graph 2
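
For concreteness, here is a minimal sketch of how such a noisy pair can be generated (an illustration only: the repository's data_generator.py supports several generative and noise models and may differ in the details). A parent graph is sampled first, then independent noise edges are added to two copies of it, so the ground-truth matching is the identity. The default parameters mirror the released config (50 vertices, edge density 0.2, noise 0.15):

import numpy as np

def noisy_pair(n=50, edge_density=0.2, noise=0.15, seed=0):
    rng = np.random.default_rng(seed)

    def random_edges(p):
        # Symmetric 0/1 adjacency matrix with edge probability p, no self-loops.
        upper = np.triu(rng.random((n, n)) < p, k=1)
        return (upper | upper.T).astype(int)

    parent = random_edges(edge_density)
    # Graphs 1 and 2: parent edges plus independent noise edges.
    graph1 = parent | random_edges(noise)
    graph2 = parent | random_edges(noise)
    return parent, graph1, graph2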

With our GNN, we obtain the following results: green vertices are well-paired vertices and red vertices are errors. Both graphs are now drawn with the layout of the right-hand graph above, but the colors of the vertices are the same on both sides. At inference, our GNN builds node embeddings for the vertices of graphs 1 and 2. Finally, each node of graph 1 is matched to its most similar node of graph 2 in this embedding space.

graph 1 graph 2
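
The matching step itself is straightforward once the embeddings are computed. Here is a sketch (the names and the use of plain inner-product similarity are illustrative assumptions):

import numpy as np

def match_vertices(E1, E2):
    # E1, E2: (n, d) node-embedding matrices produced by the GNN.
    similarity = E1 @ E2.T            # (n, n) pairwise similarities
    return similarity.argmax(axis=1)  # vertex i of graph 1 -> match[i] of graph 2

# On the synthetic pairs above, the ground truth is the identity, so the
# vertex-level accuracy is (match_vertices(E1, E2) == np.arange(n)).mean().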

Below, on the left, we plot the errors made by our GNN: wrong matchings are represented by links (or cycles) between the red vertices. On the right, we superpose the two graphs: green edges are in both graphs (they correspond to the parent graph), orange edges are in graph 1 only, and blue edges are in graph 2 only. We clearly see the impact of the noisy edges (orange and blue): each red vertex (i.e. each error) is connected to such an edge, except for the single isolated red vertex.

Wrong matchings/cycles Superposing the 2 graphs

To measure the performance of our GNN, instead of looking at vertices, we can look at edges. On the left below, we see that our GNN recovers most of the green edges present in graphs 1 and 2 (the edges from the parent graph). On the right, mismatched edges correspond mostly to noisy (orange and blue) edges, i.e. edges present in only one of the two graphs.

Matched edges Mismatched edges
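
In code, this edge-level score can be computed from the two adjacency matrices and a predicted matching along the following lines (a sketch; the actual implementation in metrics.py may differ):

import numpy as np

def edge_recovery(graph1, graph2, match):
    # An edge (i, j) of graph 1 counts as matched if (match[i], match[j])
    # is also an edge of graph 2.
    ii, jj = np.triu_indices_from(graph1, k=1)
    is_edge = graph1[ii, jj] == 1
    is_matched = is_edge & (graph2[match[ii], match[jj]] == 1)
    return is_matched.sum() / is_edge.sum()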

Training GNN for the graph alignment problem

For the training of our GNN, we generate synthetic datasets as follows: first sample the parent graph, then add edges to construct graphs 1 and 2. We obtain a dataset made of pairs of graphs for which we know the true matching of vertices. We then use a siamese encoder as shown below, where the same GNN (i.e. shared weights) is used for both graphs. The node embeddings constructed for each graph are then used to predict, for each vertex of graph 1, the index of its matching vertex in graph 2: we take the outer product of the two embedding matrices and apply a softmax along each row. The GNN is trained with a standard cross-entropy loss on these predictions. At inference, we can additionally run a LAP solver on the similarity matrix to obtain a proper permutation.
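
A minimal PyTorch sketch of this siamese head follows; the gnn module, the tensor shapes, and the function names are assumptions for illustration, and the repository's Siamese_Model differs in its details:

import torch
import torch.nn.functional as F
from scipy.optimize import linear_sum_assignment

def siamese_loss(gnn, graph1, graph2):
    # The same module is applied to both graphs: shared weights.
    E1, E2 = gnn(graph1), gnn(graph2)           # (batch, n, d) embeddings
    scores = torch.bmm(E1, E2.transpose(1, 2))  # (batch, n, n) similarities
    batch, n, _ = scores.shape
    # By construction, the ground-truth permutation is the identity;
    # cross_entropy applies the row-wise softmax internally.
    target = torch.arange(n).repeat(batch)
    return F.cross_entropy(scores.reshape(-1, n), target)

def predict_permutation(scores):
    # At inference, a LAP solver turns one (n, n) score matrix into a
    # proper permutation (maximization, hence the minus sign).
    row, col = linear_sum_assignment(-scores.detach().cpu().numpy())
    return col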

Various architectures can be used for the GNN, and we find that FGNN (first introduced by Maron et al. in Provably Powerful Graph Networks, NeurIPS 2019) performs best for our task. In our paper Expressive Power of Invariant and Equivariant Graph Neural Networks, we substantiate these empirical findings by proving that FGNN has a better power of approximation than all other equivariant architectures working with tensors of order 2 presented so far (this includes message passing GNNs and linear GNNs).

Results

Each line corresponds to a model trained at a given noise level and shows its accuracy across all noise levels. We see that pretrained models generalize very well to noise levels unseen during training.

We provide a simple notebook to reproduce this result for the pretrained model released with this repository (to run the notebook, create an ipykernel named gnn with the required dependencies as described below).
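
Such a kernel can be created with ipykernel once the dependencies are installed, e.g.

python -m ipykernel install --user --name gnn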

We refer to our paper for comparisons with other algorithms (message passing GNN, spectral or SDP algorithms).

To cite our paper:

@inproceedings{azizian2020characterizing,
  title={Expressive power of invariant and equivariant graph neural networks},
  author={Azizian, Wa{\"\i}ss and Lelarge, Marc},
  booktitle={International Conference on Learning Representations},
  year={2021},
  url={https://openreview.net/forum?id=lxHgXYN4bwl}
}

Overview of the code

Project structure

.
├── loaders
│   ├── dataset selector
│   ├── data_generator.py # generating random graphs
│   ├── test_data_generator.py
│   └── siamese_loader.py # loading pairs
├── models
│   ├── architecture selector
│   ├── layers.py # equivariant block
│   ├── base_model.py # powerful GNN Graph -> Graph
│   └── siamese_net.py # GNN to match graphs
├── toolbox
│   ├── optimizer and losses selectors
│   ├── logger.py # keeping track of most results during training
│   ├── metrics.py # computing scores
│   ├── losses.py # computing losses
│   ├── optimizer.py # optimizers
│   ├── utility.py
│   └── maskedtensor.py # Tensor-like class to handle batches of graphs of different sizes
├── commander.py # main entry point: calls the functions needed for training and testing
├── trainer.py # training and validation pipelines
└── eval.py # testing models

Dependencies

Dependencies are listed in requirements.txt. To install, run

pip install -r requirements.txt

Training

Run the main file commander.py with the command train:

python commander.py train

To change options, use the Sacred command-line interface and see default.yaml for the configuration structure. For instance,

python commander.py train with cpu=No data.generative_model=Regular train.epoch=10 

You can also copy default.yaml and modify the configuration parameters there. Loading the configuration in other.yaml (or other.json) can be done with

python commander.py train with other.yaml
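
For illustration, an other.yaml overriding a few defaults might look as follows (the key names below are taken from the released config in the Releases section; default.yaml remains the authoritative reference):

data:
  generative_model: Regular
  noise_model: ErdosRenyi
  n_vertices: 50
  noise: 0.15
train:
  epoch: 10
  batch_size: 32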

See Sacred documentation for an exhaustive reference.

To save logs to Neptune, you need to provide your own API key via the dedicated environment variable.
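
With the standard Neptune client, the token is usually read from the NEPTUNE_API_TOKEN environment variable, e.g.

export NEPTUNE_API_TOKEN="your-api-key"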

The model is regularly saved in the folder runs.

Evaluating

There are two ways of evaluating the models. If you just ran the training with a configuration conf.yaml, you can simply do,

python commander.py eval with conf.yaml

You can omit with conf.yaml if you are using the default configuration.

If you downloaded a model with a config file from here, you can edit the section test_data of this config if you wish and then run,

python commander.py eval with /path/to/config model_path=/path/to/model.pth.tar
Comments
  • need 2 different model_path variables

    When not starting anew, model_path_load should contain the path to the model, and model_path should then be the path where the newly learned model is saved. Right now, model_path_load is used for the evaluation!

    bug 
    opened by mlelarge 1
  • Unify the different problems

    The previous commands for training and evaluation should still work the same, even though a different configuration file is now used at the beginning. Sacred is still used the same way in commander.py. The problem can be switched from the config file. The main change to the previous code is the use of the Helper class (in toolbox/helper.py), which is the class that coordinates each problem (see the file for further info).

    Also added the 'article_commander.py' which generates the data for comparison between planted problems and the corresponding NP-problem, which works in a similar way to commander.py.

    opened by MauTrib 1
  • Reorganize commander.py

    This PR tries to unify the train and eval CLI with the use of a single config file. See default.yaml for the result and the README for more information. Please don't hesitate to comment on changes you don't approve of or dislike. Bests,

    opened by wazizian 1
  • Fix convention for tensor shapes

    Hi! I am modifying the code so the same convention for the shape of tensors is used everywhere (see PR). But I'm not sure we have chosen the right one. You and we have chosen (bs, n_vertices, n_vertices, features), but Maron preferred (bs, features, n_vertices, n_vertices). Indeed, the latter matches PyTorch's convention for images and so makes convolutions natural. In one case, the MLPs and blocks have to be adapted; in the other, it is the data generation process, so it is a similar amount of work. What do you think? Bests, Waïss

    opened by wazizian 1
  • added normalization + embeddings

    I slightly modified the architecture and training seems much better and more stable. I added a first embedding layer and BN after the multiplication and at the output.

    opened by mlelarge 0
Releases (QAP)
  • QAP (Jan 21, 2021)

    Config:

    {
      "data": {
        "num_examples_train": 20000,
        "num_examples_val": 1000,
        "generative_model": "Regular",
        "noise_model": "ErdosRenyi",
        "edge_density": 0.2,
        "n_vertices": 50,
        "vertex_proba": 1.0,
        "noise": 0.15,
        "path_dataset": "dataset"
      },
      "train": {
        "epoch": 50,
        "batch_size": 32,
        "lr": 0.0001,
        "scheduler_step": 5,
        "scheduler_decay": 0.9,
        "print_freq": 100,
        "loss_reduction": "mean"
      },
      "arch": {
        "arch": "Siamese_Model",
        "model_name": "Simple_Node_Embedding",
        "num_blocks": 2,
        "original_features_num": 2,
        "in_features": 64,
        "out_features": 64,
        "depth_of_mlp": 3
      }
    }

    Source code(tar.gz)
    Source code(zip)
    config.json(601 bytes)
    model_best.pth.tar(333.26 KB)