GraphRNN: Generating Realistic Graphs with Deep Auto-regressive Models

Overview

This repository is the official PyTorch implementation of GraphRNN, an auto-regressive graph generative model.

Jiaxuan You*, Rex Ying*, Xiang Ren, William L. Hamilton, Jure Leskovec, GraphRNN: Generating Realistic Graphs with Deep Auto-regressive Models (ICML 2018)

Installation

Install PyTorch following the instructions on the official website. The code has been tested with PyTorch 0.2.0 and 0.4.0.

conda install pytorch torchvision cuda90 -c pytorch

Then install the other dependencies.

pip install -r requirements.txt
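
To verify the installation, you can check the PyTorch version and CUDA availability before running anything:

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"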

Test run

python main.py

Code description

For the GraphRNN model: main.py is the main executable file, and specific arguments are set in args.py. train.py contains the training loop and calls model.py and data.py. create_graphs.py is where the target graph datasets are prepared.

For baseline models:

  • The B-A (Barabási–Albert) and E-R (Erdős–Rényi) models are implemented in baselines/baseline_simple.py.
  • The Kronecker graph model is implemented in the SNAP software, which can be found at https://github.com/snap-stanford/snap/tree/master/examples/krongen (for generating Kronecker graphs) and https://github.com/snap-stanford/snap/tree/master/examples/kronfit (for learning the model parameters).
  • MMSB is implemented using the Edward library (http://edwardlib.org/) and is located in baselines.
  • We implemented the DeepGMG model in main_DeepGMG.py, following the description in its paper.
  • We implemented the GraphVAE model in baselines/graphvae, following the description in its paper.

Parameter setting: To adjust the hyper-parameters and input arguments of the model, modify the fields of args.py accordingly. For example, args.cuda controls which GPU is used to train the model, and args.graph_type specifies which dataset is used to train the generative model. See the documentation in args.py for more detailed descriptions of all fields.
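
For instance, switching dataset and GPU only requires editing those two fields. The sketch below shows just the two fields named above inside a minimal Args class; the real args.py contains many more fields, and the surrounding structure here is an assumption:

# args.py (sketch; only graph_type and cuda are taken from the text above)
class Args:
    def __init__(self):
        # dataset used to train the generative model (see create_graphs.py)
        self.graph_type = 'grid'
        # index of the GPU used for training
        self.cuda = 0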

Outputs

There are several different types of outputs, each saved into a different directory under a path prefix. The path prefix is set in args.dir_input. Suppose that this field is set to ./:

  • ./graphs contains the pickle files of the training, test, and generated graphs. Each file holds a list of networkx objects (see the loading sketch after this list).
  • ./eval_results contains the MMD evaluation scores in txt format.
  • ./model_save stores the model checkpoints.
  • ./nll saves the log-likelihoods of the generated graphs, viewed as sequences.
  • ./figures is used to save visualizations (see the Visualization of graphs section).
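
Since each pickle file under ./graphs stores a plain list of networkx graphs, the outputs can be inspected with the standard library alone. A minimal sketch, where the file name is hypothetical (actual names depend on the run configuration):

import pickle
import networkx as nx

# Hypothetical file name; real names depend on the dataset and model settings.
with open('./graphs/sample_generated.dat', 'rb') as f:
    graphs = pickle.load(f)  # a list of networkx graph objects

for g in graphs[:3]:
    print(nx.number_of_nodes(g), nx.number_of_edges(g))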

Evaluation

The evaluation is done in evaluate.py, where the user can choose which settings to evaluate. To measure how close the generated graphs are to the ground-truth set, we use MMD (maximum mean discrepancy) between distributions of graph statistics computed on the ground-truth and generated graphs. Three statistics are used: the degree distribution, the clustering coefficient distribution, and orbit counts (described in the next paragraph). The first two are implemented in eval/stats.py, using the Python multiprocessing module. One can easily extend the evaluation to compute MMD over other graph statistics.
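
To illustrate the idea (this is not the exact implementation in eval/stats.py), an MMD estimate between two sets of degree histograms can be computed with a Gaussian kernel as follows:

import numpy as np
import networkx as nx

def degree_histogram(g, max_degree=20):
    # Normalized degree histogram as a fixed-length vector.
    h = np.zeros(max_degree + 1)
    for _, d in g.degree():
        h[min(d, max_degree)] += 1
    return h / h.sum()

def gaussian_mmd(xs, ys, sigma=1.0):
    # Biased MMD^2 estimate between two sets of vectors
    # (diagonal terms included for simplicity).
    def k(a, b):
        return np.exp(-np.linalg.norm(a - b) ** 2 / (2 * sigma ** 2))
    kxx = np.mean([k(a, b) for a in xs for b in xs])
    kyy = np.mean([k(a, b) for a in ys for b in ys])
    kxy = np.mean([k(a, b) for a in xs for b in ys])
    return kxx + kyy - 2 * kxy

# Toy comparison between two sets of random graphs.
real = [degree_histogram(nx.gnp_random_graph(20, 0.2)) for _ in range(10)]
fake = [degree_histogram(nx.gnp_random_graph(20, 0.3)) for _ in range(10)]
print(gaussian_mmd(real, fake))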

We also compute the orbit counts of each graph, represented as a high-dimensional data point, and compute the MMD between the two sets of sampled points using ORCA (see http://www.biolab.si/supp/orca/orca.html), located at eval/orca. One first needs to compile ORCA by running

g++ -O2 -std=c++11 -o orca orca.cpp

in the eval/orca directory. (The binary file already included in the repo works on Ubuntu.)

To evaluate, run

python evaluate.py

Arguments specific to evaluation are specified in the class evaluate.Args_evaluate. Note that the field Args_evaluate.dataset_name_all must contain only datasets that have already been trained, by setting args.graph_type to each of those datasets and running python main.py.
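
For instance, restricting the evaluation to datasets you have already trained might look like the sketch below; only the class and field names are taken from the text above, and the dataset names are placeholders:

# evaluate.py (sketch)
class Args_evaluate:
    def __init__(self):
        # List only datasets for which `python main.py` has already been run
        # with the matching args.graph_type.
        self.dataset_name_all = ['grid', 'community']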

Visualization of graphs

The training, test, and generated graphs are saved under graphs/. One can visualize a generated graph using the function utils.load_graph_list, which loads the list of graphs from a pickle file, and utils.draw_graph_list, which plots the graphs using networkx.
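
A minimal sketch of that workflow; the file name is hypothetical, and the row/col/fname argument names of draw_graph_list are assumptions (check utils.py for the exact signature):

from utils import load_graph_list, draw_graph_list

# Hypothetical pickle name; actual file names depend on the run.
graphs = load_graph_list('graphs/sample_generated.dat')
# Plot the first few graphs with networkx and save the figure.
draw_graph_list(graphs[:16], row=4, col=4, fname='figures/generated')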

Misc

Jesse Bettencourt and Harris Chan made a great slide deck introducing GraphRNN in Prof. David Duvenaud’s seminar course, Learning Discrete Latent Structure.
