Overview

SimGNN

A PyTorch implementation of SimGNN: A Neural Network Approach to Fast Graph Similarity Computation (WSDM 2019).

Abstract

Graph similarity search is among the most important graph-based applications, e.g. finding the chemical compounds that are most similar to a query compound. Graph similarity/distance computation, such as Graph Edit Distance (GED) and Maximum Common Subgraph (MCS), is the core operation of graph similarity search and many other applications, but very costly to compute in practice. Inspired by the recent success of neural network approaches to several graph applications, such as node or graph classification, we propose a novel neural network based approach to address this classic yet challenging graph problem, aiming to alleviate the computational burden while preserving a good performance. The proposed approach, called SimGNN, combines two strategies. First, we design a learnable embedding function that maps every graph into an embedding vector, which provides a global summary of a graph. A novel attention mechanism is proposed to emphasize the important nodes with respect to a specific similarity metric. Second, we design a pairwise node comparison method to supplement the graph-level embeddings with fine-grained node-level information. Our model achieves better generalization on unseen graphs, and in the worst case runs in quadratic time with respect to the number of nodes in two graphs. Taking GED computation as an example, experimental results on three real graph datasets demonstrate the effectiveness and efficiency of our approach. Specifically, our model achieves smaller error rate and great time reduction compared against a series of baselines, including several approximation algorithms on GED computation, and many existing graph neural network based models. Our study suggests SimGNN provides a new direction for future research on graph similarity computation and graph similarity search.
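
To make the first strategy concrete, below is a minimal PyTorch sketch of the attention-based graph embedding the abstract describes; the module name and shapes are illustrative, not this repository's exact API.

import torch

class AttentionReadout(torch.nn.Module):
    """Illustrative attention pooling: a global context vector is computed
    from the mean node embedding, and each node is weighted by its
    similarity to that context before summation."""

    def __init__(self, dim):
        super().__init__()
        self.weight = torch.nn.Parameter(torch.empty(dim, dim))
        torch.nn.init.xavier_uniform_(self.weight)

    def forward(self, node_embeddings):
        # node_embeddings: (num_nodes, dim), e.g. the last GCN layer's output.
        context = torch.tanh(node_embeddings.mean(dim=0) @ self.weight)
        scores = torch.sigmoid(node_embeddings @ context)  # one attention weight per node
        return (scores.unsqueeze(-1) * node_embeddings).sum(dim=0)  # (dim,) graph embedding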

This repository provides a PyTorch implementation of SimGNN as described in the paper:

SimGNN: A Neural Network Approach to Fast Graph Similarity Computation. Yunsheng Bai, Hao Ding, Song Bian, Ting Chen, Yizhou Sun, Wei Wang. WSDM, 2019. [Paper]

A reference TensorFlow implementation is accessible [here] and another implementation is [here].

Requirements

The codebase is implemented in Python 3.5.2. Package versions used for development are listed below.

networkx          2.4
tqdm              4.28.1
numpy             1.15.4
pandas            0.23.4
texttable         1.5.0
scipy             1.1.0
argparse          1.1.0
torch             1.1.0
torch-scatter     1.4.0
torch-sparse      0.4.3
torch-cluster     1.4.5
torch-geometric   1.3.2
torchvision       0.3.0
scikit-learn      0.20.0

Datasets

The code takes pairs of graphs for training from an input folder, where each pair of graphs is stored as a JSON file. Pairs of graphs used for testing are also stored as JSON files. Every node id and node label has to be indexed from 0. Keys of dictionaries are stored as strings in order to make JSON serialization possible.

Every JSON file has the following key-value structure:

{"graph_1": [[0, 1], [1, 2], [2, 3], [3, 4]],
 "graph_2":  [[0, 1], [1, 2], [1, 3], [3, 4], [2, 4]],
 "labels_1": [2, 2, 2, 2],
 "labels_2": [2, 3, 2, 2, 2],
 "ged": 1}

The **graph_1** and **graph_2** keys have edge list values which describe the connectivity structure. Similarly, the **labels_1** and **labels_2** keys have labels for each node which are stored as lists - positions in the list correspond to node identifiers. The **ged** key has an integer value which is the raw graph edit distance for the pair of graphs.
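
To sanity-check a pair file before training, it can be loaded with networkx; a minimal sketch (the file name is hypothetical):

import json
import networkx as nx

with open("dataset/train/0.json") as source:  # hypothetical file name
    pair = json.load(source)

graph_1 = nx.Graph(pair["graph_1"])
graph_2 = nx.Graph(pair["graph_2"])

# Node ids are indexed from 0, so each label list should cover every node.
assert graph_1.number_of_nodes() == len(pair["labels_1"])
assert graph_2.number_of_nodes() == len(pair["labels_2"])
print(graph_1.number_of_nodes(), graph_2.number_of_nodes(), pair["ged"])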

Options

Training a SimGNN model is handled by the `src/main.py` script which provides the following command line arguments.

Input and output options

  --training-graphs   STR    Training graphs folder.      Default is `dataset/train/`.
  --testing-graphs    STR    Testing graphs folder.       Default is `dataset/test/`.

Model options

  --filters-1             INT         Number of filters in 1st GCN layer.      Default is 128.
  --filters-2             INT         Number of filters in 2nd GCN layer.      Default is 64.
  --filters-3             INT         Number of filters in 3rd GCN layer.      Default is 32.
  --tensor-neurons        INT         Neurons in tensor network layer.         Default is 16.
  --bottle-neck-neurons   INT         Bottle neck layer neurons.               Default is 16.
  --bins                  INT         Number of histogram bins.                Default is 16.
  --batch-size            INT         Number of pairs processed per batch.     Default is 128. 
  --epochs                INT         Number of SimGNN training epochs.        Default is 5.
  --dropout               FLOAT       Dropout rate.                            Default is 0.5.
  --learning-rate         FLOAT       Learning rate.                           Default is 0.001.
  --weight-decay          FLOAT       Weight decay.                            Default is 10^-5.
  --histogram             BOOL        Include histogram features.              Default is False.
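
For intuition, the --histogram and --bins options correspond to the paper's pairwise node comparison: similarity scores between every pair of node embeddings from the two graphs are binned into a normalized histogram. A minimal sketch (the function name is hypothetical, not this codebase's API):

import torch

def similarity_histogram(embeddings_1, embeddings_2, bins=16):
    # embeddings_1: (n1, dim) and embeddings_2: (n2, dim) node embeddings.
    scores = torch.sigmoid(embeddings_1 @ embeddings_2.t()).view(-1)  # pairwise scores in (0, 1)
    histogram = torch.histc(scores, bins=bins, min=0.0, max=1.0)
    return histogram / histogram.sum()  # normalized histogram feature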

Examples

The following commands learn a neural network and score on the test set. Training a SimGNN model on the default dataset.

python src/main.py

Training a SimGNN model for 100 epochs with a batch size of 512.

python src/main.py --epochs 100 --batch-size 512

Training a SimGNN with histogram features.

python src/main.py --histogram

Training a SimGNN with histogram features and a large bin number.

python src/main.py --histogram --bins 32

Increasing the learning rate and the dropout.

python src/main.py --learning-rate 0.01 --dropout 0.9

You can save the trained model by adding the --save-path parameter.

python src/main.py --save-path /path/to/model-name

Then you can load a pretrained model using the --load-path parameter; note that the model is used as-is and no training will be performed.

python src/main.py --load-path /path/to/model-name
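
Under the hood, saving and loading presumably amount to serializing the model's parameters; a minimal sketch with PyTorch's state-dict mechanism (the Linear module is a stand-in for a trained SimGNN):

import torch

model = torch.nn.Linear(4, 1)  # stand-in for a trained SimGNN model

# Saving: persist the learned parameters rather than the whole object.
torch.save(model.state_dict(), "/path/to/model-name")

# Loading: instantiate the same architecture, then restore the weights.
model.load_state_dict(torch.load("/path/to/model-name"))
model.eval()  # evaluation mode, since the loaded model is used as-is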

Comments
  • Model test error is too high

    Model test error is too high

    I'm sorry to bother you. But when I tried to replicate your work, I ran into some difficulties. Here is the problem I met:

    python src/main.py

    +---------------------+------------------+
    | Batch size          | 128              |
    +=====================+==================+
    | Bins                | 16               |
    +---------------------+------------------+
    | Bottle neck neurons | 16               |
    +---------------------+------------------+
    | Dropout             | 0.500            |
    +---------------------+------------------+
    | Epochs              | 5                |
    +---------------------+------------------+
    | Filters 1           | 128              |
    +---------------------+------------------+
    | Filters 2           | 64               |
    +---------------------+------------------+
    | Filters 3           | 32               |
    +---------------------+------------------+
    | Histogram           | 0                |
    +---------------------+------------------+
    | Learning rate       | 0.001            |
    +---------------------+------------------+
    | Tensor neurons      | 16               |
    +---------------------+------------------+
    | Testing graphs      | ./dataset/test/  |
    +---------------------+------------------+
    | Training graphs     | ./dataset/train/ |
    +---------------------+------------------+
    | Weight decay        | 0.001            |
    +---------------------+------------------+

    Enumerating unique labels.

    100%|██████████████████████████████████████████████████████████████████████████████████| 100/100 [00:00<00:00, 2533.57it/s]

    Model training.

    Epoch: 0%| | 0/5 [00:00<?, ?it/s]
    /home/jovyan/SimGNN/src/simgnn.py:212: UserWarning: Using a target size (torch.Size([1, 1])) that is different to the input size (torch.Size([1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size.
      losses = losses + torch.nn.functional.mse_loss(data["target"], prediction)
    Epoch (Loss=3.87038): 100%|██████████████████████████████████████████████████████████████████| 5/5 [00:16<00:00, 3.23s/it]
    Batches: 100%|███████████████████████████████████████████████████████████████████████████████| 1/1 [00:01<00:00, 1.68s/it]

    Model evaluation.

    100%|█████████████████████████████████████████████████████████████████████████████████████| 50/50 [00:00<00:00, 102.39it/s]

    Baseline error: 0.41597.

    Model test error: 0.94024.

    I found the model test error too high! The only thing I changed was the version of the libraries, which I replaced with the latest. Could you help me with this problem?
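
    The UserWarning above is a plausible culprit: the target has shape (1, 1) while the prediction has shape (1), so the MSE broadcasts them incorrectly. A minimal sketch of the usual fix, reshaping one side before the loss (stand-in tensors, not the repository's code):

    import torch

    prediction = torch.rand(1)  # model output, shape (1,)
    target = torch.rand(1, 1)   # stored target, shape (1, 1)

    # Matching the shapes removes the broadcast warning and the silent error.
    loss = torch.nn.functional.mse_loss(prediction, target.view(-1))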

    opened by Alice314 7
  • About dataset

    About dataset

    I am really interested in this amazing work, but I don't understand how the datasets are generated (or processed). Are the training and test data generated from public datasets (like the Linux and AIDS datasets mentioned in the paper)? I would like to know how syngen.py works and what its output is.

    Thanks a lot.

    opened by BenedictWongBJUT 6
  • Extracting Latent Space

    Extracting Latent Space

    How would you recommend we approach creating network embeddings (entire network is single point/vector) using this library?

    I was thinking of modifying the forward pass to output similarity and running MDS on the similarity matrix if I'm doing all vs all on the test set (see the sketch below).

    I am hoping to compare a couple hundred generated graphs via ERGM and latent ERGM as well as other network approaches to the original graph and output an embedding of all of the graphs.

    Please let me know your recommendations before this becomes a time sink; otherwise, I can find a way to hack it. Thanks!
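
    For the all-vs-all idea above, a minimal sketch with scikit-learn's MDS on a precomputed dissimilarity matrix (the similarity matrix here is random stand-in data):

    import numpy as np
    from sklearn.manifold import MDS

    # Stand-in for an all-vs-all SimGNN similarity matrix over 10 graphs, values in [0, 1].
    similarity = np.random.rand(10, 10)
    similarity = (similarity + similarity.T) / 2  # symmetrize
    np.fill_diagonal(similarity, 1.0)             # a graph is maximally similar to itself

    # MDS embeds each graph as a single 2-D point from pairwise dissimilarities.
    coordinates = MDS(n_components=2, dissimilarity="precomputed").fit_transform(1.0 - similarity)
    print(coordinates.shape)  # (10, 2)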

    opened by jlevy44 4
  • Questions about the model.

    Questions about the model.

    Sorry for using this section to ask technical questions rather than code-related ones. I had a few questions.

    • Do you have a pretrained model on any of the datasets (like IMDB) the paper talks about? The sample size in the GitHub dataset is small.
    • Am I right that there is no early stopping in the model, since there is no validation set? [The reason I am asking is that when I use a bigger dataset with a higher number of epochs, most of the predicted GED values are around 1.]
    opened by BehroozMansouri 3
  • Error adapting code

    Error adapting code

    Hey! I am facing some issues adapting your SimGNN code to my graph dataset. I keep encountering an error that says:

    RuntimeError: Invalid index in scatterAdd at ../aten/src/TH/generic/THTensorEvenMoreMath.cpp:520

    I even tried changing one of the training JSON files to match the example labels and graph structure you gave. I also see a warning about sizes.

    Am I missing something? Any help would be greatly appreciated.

    opened by kalkidann 3
  • Notice on package versions

    Notice on package versions

    Hello,

    While trying (with some struggle, unfortunately) to get this to run, I noticed that the package versions listed in the README differ from some of the package versions in requirements.txt.

    You may want to check that out.

    Best.

    opened by Chuhtra 2
  • Performance of SimGNN

    Performance of SimGNN

    Hi @benedekrozemberczki

    We recently used this implementation of SimGNN and noticed that its performance does not match the one outlined in the paper. In particular, for the AIDS dataset we get an order of magnitude higher MSE than in the original paper. Did you verify that this implementation matches the performance reported in the paper?

    P.S. We also benchmarked the original SimGNN repo and noticed that it produces slightly better results, even though it takes very long to run until convergence.

    opened by nd7141 2
  • Visualizing Attention

    Visualizing Attention

    Hi, I want to visualize the attention with respect to the input graph (like Figure 5 in the paper). Can you please guide me on how to visualize the attention weights on the input graphs?

    opened by sajjadriaj 2
  • Add ability to save and load a trained model

    Add ability to save and load a trained model

    To achieve repeatable results, it was also necessary to keep the labels in a fixed order, so that the resulting one-hot encoding vectors are the same across different runs.

    Explanation of this feature has been added to the README.

    opened by Carlovan 1
  • Batch processing required.

    Batch processing required.

    Hi.

    Is there a version of the code where we can send batched data into the model? The current version works on one graph pair at a time, which takes too long for larger training sets, with each data point fed into the model one at a time via a for loop.

    Thanks.

    opened by Indradyumna 1
  • Import error for scatter_cuda

    Import error for scatter_cuda

    Hi there,

    I am getting a "No module named torch_scatter.scatter_cuda" import error. I am sure I have the necessary toolkits and drivers installed.

    Does the code require a GPU environment?

    opened by jonathan-goh 1
  • Error in the example from readme

    Error in the example from readme

    In the example in the repo: {"graph_1": [[0, 1], [1, 2], [2, 3], [3, 4]], "graph_2": [[0, 1], [1, 2], [1, 3], [3, 4], [2, 4]], "labels_1": [2, 2, 2, 2], "labels_2": [2, 3, 2, 2, 2], "ged": 1} I think there is a label missing in labels_1. Also, the ged for the example is 2 if we assume the missing label is 2. Am I missing something?

    opened by merascu 1