SimGNN

A PyTorch implementation of SimGNN: A Neural Network Approach to Fast Graph Similarity Computation (WSDM 2019).

Abstract

Graph similarity search is among the most important graph-based applications, e.g. finding the chemical compounds that are most similar to a query compound. Graph similarity/distance computation, such as Graph Edit Distance (GED) and Maximum Common Subgraph (MCS), is the core operation of graph similarity search and many other applications, but very costly to compute in practice. Inspired by the recent success of neural network approaches to several graph applications, such as node or graph classification, we propose a novel neural network based approach to address this classic yet challenging graph problem, aiming to alleviate the computational burden while preserving a good performance. The proposed approach, called SimGNN, combines two strategies. First, we design a learnable embedding function that maps every graph into an embedding vector, which provides a global summary of a graph. A novel attention mechanism is proposed to emphasize the important nodes with respect to a specific similarity metric. Second, we design a pairwise node comparison method to supplement the graph-level embeddings with fine-grained node-level information. Our model achieves better generalization on unseen graphs, and in the worst case runs in quadratic time with respect to the number of nodes in two graphs. Taking GED computation as an example, experimental results on three real graph datasets demonstrate the effectiveness and efficiency of our approach. Specifically, our model achieves smaller error rate and great time reduction compared against a series of baselines, including several approximation algorithms on GED computation, and many existing graph neural network based models. Our study suggests SimGNN provides a new direction for future research on graph similarity computation and graph similarity search.
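
A conceptual sketch of the attention mechanism mentioned above may help: node embeddings are pooled into a graph-level context, and each node is weighted by its agreement with that context before summation. This is an illustrative PyTorch sketch, not the code from this repository (see src/simgnn.py for the actual layer); class and variable names are assumptions.

import torch

class GlobalContextAttention(torch.nn.Module):
    """Illustrative SimGNN-style attention; names and shapes are assumptions."""

    def __init__(self, dim):
        super().__init__()
        self.weight = torch.nn.Parameter(torch.empty(dim, dim))
        torch.nn.init.xavier_uniform_(self.weight)

    def forward(self, node_embeddings):
        # node_embeddings: (num_nodes, dim) produced by the GCN layers
        context = torch.tanh(node_embeddings.mean(dim=0) @ self.weight)  # graph-level context, (dim,)
        scores = torch.sigmoid(node_embeddings @ context)                # one attention score per node
        return (node_embeddings * scores.unsqueeze(-1)).sum(dim=0)       # attended graph embedding, (dim,)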

This repository provides a PyTorch implementation of SimGNN as described in the paper:

SimGNN: A Neural Network Approach to Fast Graph Similarity Computation. Yunsheng Bai, Hao Ding, Song Bian, Ting Chen, Yizhou Sun, Wei Wang. WSDM, 2019. [Paper]

A reference Tensorflow implementation is accessible [here] and another implementation is [here].

Requirements

The codebase is implemented in Python 3.5.2. Package versions used for development are listed below.

networkx          2.4
tqdm              4.28.1
numpy             1.15.4
pandas            0.23.4
texttable         1.5.0
scipy             1.1.0
argparse          1.1.0
torch             1.1.0
torch-scatter     1.4.0
torch-sparse      0.4.3
torch-cluster     1.4.5
torch-geometric   1.3.2
torchvision       0.3.0
scikit-learn      0.20.0
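
Assuming the repository's requirements.txt mirrors these versions (one of the comments below notes that the two lists have drifted apart), the dependencies can be installed with pip:

pip install -r requirements.txt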

Datasets

The code takes pairs of graphs for training from an input folder where each pair of graphs is stored as a JSON file. Pairs of graphs used for testing are also stored as JSON files. Every node id and node label has to be indexed from 0. Keys of dictionaries are stored as strings in order to make JSON serialization possible.

Every JSON file has the following key-value structure:

{"graph_1": [[0, 1], [1, 2], [2, 3], [3, 4]],
 "graph_2":  [[0, 1], [1, 2], [1, 3], [3, 4], [2, 4]],
 "labels_1": [2, 2, 2, 2],
 "labels_2": [2, 3, 2, 2, 2],
 "ged": 1}

The **graph_1** and **graph_2** keys have edge list values which describe the connectivity structure. Similarly, the **labels_1** and **labels_2** keys have labels for each node which are stored as lists - positions in the list correspond to node identifiers. The **ged** key has an integer value which is the raw graph edit distance for the pair of graphs.
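
As an illustration of this format, a pair file can be read with the json module and turned into edge index tensors and one-hot node features. This is a hedged sketch with illustrative names, not the repository's data loading code:

import json
import torch

def load_pair(path, number_of_labels):
    """Read one JSON pair in the format above and build tensors from it."""
    with open(path) as f:
        data = json.load(f)
    # Edge lists become 2 x num_edges index tensors (both directions for an undirected graph).
    edges_1 = data["graph_1"] + [[j, i] for i, j in data["graph_1"]]
    edges_2 = data["graph_2"] + [[j, i] for i, j in data["graph_2"]]
    edge_index_1 = torch.tensor(edges_1, dtype=torch.long).t()
    edge_index_2 = torch.tensor(edges_2, dtype=torch.long).t()
    # Node labels become one-hot feature matrices, one row per node.
    features_1 = torch.nn.functional.one_hot(torch.tensor(data["labels_1"]), num_classes=number_of_labels).float()
    features_2 = torch.nn.functional.one_hot(torch.tensor(data["labels_2"]), num_classes=number_of_labels).float()
    return edge_index_1, edge_index_2, features_1, features_2, data["ged"]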

Options

Training a SimGNN model is handled by the `src/main.py` script which provides the following command line arguments.

Input and output options

  --training-graphs   STR    Training graphs folder.      Default is `dataset/train/`.
  --testing-graphs    STR    Testing graphs folder.       Default is `dataset/test/`.

Model options

  --filters-1             INT         Number of filters in 1st GCN layer.      Default is 128.
  --filters-2             INT         Number of filters in 2nd GCN layer.      Default is 64.
  --filters-3             INT         Number of filters in 3rd GCN layer.      Default is 32.
  --tensor-neurons        INT         Neurons in tensor network layer.         Default is 16.
  --bottle-neck-neurons   INT         Bottle neck layer neurons.               Default is 16.
  --bins                  INT         Number of histogram bins.                Default is 16.
  --batch-size            INT         Number of pairs processed per batch.     Default is 128. 
  --epochs                INT         Number of SimGNN training epochs.        Default is 5.
  --dropout               FLOAT       Dropout rate.                            Default is 0.5.
  --learning-rate         FLOAT       Learning rate.                           Default is 0.001.
  --weight-decay          FLOAT       Weight decay.                            Default is 10^-5.
  --histogram             BOOL        Include histogram features.              Default is False.

Examples

The following commands learn a neural network and score on the test set. Training a SimGNN model on the default dataset.

python src/main.py

Training a SimGNN model for 100 epochs with a batch size of 512.

python src/main.py --epochs 100 --batch-size 512

Training a SimGNN with histogram features.

python src/main.py --histogram

Training a SimGNN with histogram features and a large bin number.

python src/main.py --histogram --bins 32
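
For intuition, the histogram features toggled by --histogram bin the pairwise node-embedding similarity scores of the two graphs into --bins buckets. A minimal sketch under these assumptions (illustrative only; see src/simgnn.py for the repository's implementation):

import torch

def similarity_histogram(abstract_features_1, abstract_features_2, bins=16):
    """Bin pairwise node-embedding similarities into a normalized histogram.
    Inputs are (n1, dim) and (n2, dim) node embedding matrices."""
    scores = abstract_features_1 @ abstract_features_2.t()  # pairwise similarity scores
    hist = torch.histc(scores.view(-1), bins=bins)          # one count per bin
    return hist / hist.sum()                                # normalize so the bins sum to 1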

Increasing the learning rate and the dropout.

python src/main.py --learning-rate 0.01 --dropout 0.9

You can save the trained model by adding the --save-path parameter.

python src/main.py --save-path /path/to/model-name

Then you can load a pretrained model using the --load-path parameter; note that the model will be used as-is, no training will be performed.

python src/main.py --load-path /path/to/model-name

License

Comments
  • Model test error is too high


    I'm sorry to bother you. But when I tried to replicate your work, I ran into some difficulties. Here is the problem I met:

    python src/main.py

    +---------------------+------------------+
    | Batch size          | 128              |
    +=====================+==================+
    | Bins                | 16               |
    +---------------------+------------------+
    | Bottle neck neurons | 16               |
    +---------------------+------------------+
    | Dropout             | 0.500            |
    +---------------------+------------------+
    | Epochs              | 5                |
    +---------------------+------------------+
    | Filters 1           | 128              |
    +---------------------+------------------+
    | Filters 2           | 64               |
    +---------------------+------------------+
    | Filters 3           | 32               |
    +---------------------+------------------+
    | Histogram           | 0                |
    +---------------------+------------------+
    | Learning rate       | 0.001            |
    +---------------------+------------------+
    | Tensor neurons      | 16               |
    +---------------------+------------------+
    | Testing graphs      | ./dataset/test/  |
    +---------------------+------------------+
    | Training graphs     | ./dataset/train/ |
    +---------------------+------------------+
    | Weight decay        | 0.001            |
    +---------------------+------------------+

    Enumerating unique labels.

    100%|██████████████████████████████████████████████████████████████████████████████████| 100/100 [00:00<00:00, 2533.57it/s]

    Model training.

    Epoch: 0%| | 0/5 [00:00<?, ?it/s]
    /home/jovyan/SimGNN/src/simgnn.py:212: UserWarning: Using a target size (torch.Size([1, 1])) that is different to the input size (torch.Size([1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size.
      losses = losses + torch.nn.functional.mse_loss(data["target"], prediction)
    Epoch (Loss=3.87038): 100%|██████████████████████████████████████████████████████████████████| 5/5 [00:16<00:00, 3.23s/it]
    Batches: 100%|███████████████████████████████████████████████████████████████████████████████| 1/1 [00:01<00:00, 1.68s/it]

    Model evaluation.

    100%|█████████████████████████████████████████████████████████████████████████████████████| 50/50 [00:00<00:00, 102.39it/s]

    Baseline error: 0.41597.

    Model test error: 0.94024.

    I found the model test error too high! The only thing I changed was the version of the libraries, which I replaced with the latest. Could you help me with this problem?

    opened by Alice314 7
  • About dataset


    I am really interested in this amazing work, but I don't understand how the datasets are generated (or processed). Are the training and test data generated from public datasets (like LINUX and AIDS, mentioned in the paper)? I would like to know how syngen.py works and what its output is.

    Thanks a lot.

    opened by BenedictWongBJUT 6
  • Extracting Latent Space


    How would you recommend we approach creating network embeddings (entire network is single point/vector) using this library?

    I was thinking of modifying the forward pass to output similarity and running MDS on the similarity matrix if I'm doing all vs all on the test set.

    I am hoping to compare a couple hundred generated graphs via ERGM and latent ERGM as well as other network approaches to the original graph and output an embedding of all of the graphs.

    Please let me know your recommendations before this becomes a time sink, else, I can find a way to hack it. Thanks!

    opened by jlevy44 4
  • Questions about the model.


    Sorry for using this section to ask technical questions rather than code-related ones. I had a few questions.

    • Do you have a pretrained model on any of the datasets (like IMDB) the paper talks about? The size of the sample in the GitHub dataset is small.
    • Am I right that there is no early stopping in the model, as there is no validation set? [The reason I am asking is that when I use a bigger dataset with a higher number of epochs, most of the predicted GED values are around 1.]
    opened by BehroozMansouri 3
  • Error adapting code


    Hey! I am facing some issues adapting your SimGNN code into my graph dataset. I keep encountering an error that says:

    RuntimeError: Invalid index in scatterAdd at ../aten/src/TH/generic/THTensorEvenMoreMath.cpp:520

    I even tried to change one of the training JSON files to the example of the labels and graph structure you gave. I also see a warning about size.

    Am I missing something? Any help would be greatly appreciated.

    opened by kalkidann 3
  • Notice on package versions


    Hello,

    Trying (with some struggle, unfortunately) to get this to run, I noticed that the package versions listed in the README differ from some of the versions in requirements.txt.

    You may want to check that out.

    Best.

    opened by Chuhtra 2
  • Performance of SimGNN


    Hi @benedekrozemberczki

    We recently used this implementation of SimGNN and noticed that the performance does not match the one outlined in the paper. In particular, for the AIDS dataset we get an order of magnitude higher MSE than in the original paper. Did you verify that this implementation matches the performance reported in the paper?

    P.S. We also benchmarked the original repo of SimGNN and noticed that it produces slightly better results, even though it takes very long to run until convergence.

    opened by nd7141 2
  • Visualizing Attention


    Hi, I want to visualize the attention with respect to the input graph (like Figure 5 in the paper). Can you please guide me on how to visualize the attention weights together with the input graphs?

    opened by sajjadriaj 2
  • Add ability to save and load a trained model


    To achieve repeatable results, it was also necessary to keep the labels in a fixed order, so that the resulting one-hot encoding vectors are the same across different runs.

    Explanation of this feature has been added to the README.

    opened by Carlovan 1
  • Batch processing required.


    Hi.

    Is there a version of the code where we can send batched data into the model? The current version works on one graph pair at a time, which takes too long for larger training sets since each data point is fed into the model one at a time via a for loop.

    Thanks.

    opened by Indradyumna 1
  • Import error for scatter_cuda


    Hi there,

    I am having a “No module named torch_scatter.scatter_cuda” import error. I am sure I have the necessary toolkits and drivers installed.

    Does the code require a GPU environment?

    opened by jonathan-goh 1
  • Error in the example from readme


    In the example in the repo: {"graph_1": [[0, 1], [1, 2], [2, 3], [3, 4]], "graph_2": [[0, 1], [1, 2], [1, 3], [3, 4], [2, 4]], "labels_1": [2, 2, 2, 2], "labels_2": [2, 3, 2, 2, 2], "ged": 1} I think there is a label missing in labels_1. Also, the ged for the example is 2 if we consider the missing label to be 2. Am I missing something?

    opened by merascu 1