HGCAE PyTorch implementation. Accepted to CVPR 2021.


Hyperbolic Graph Convolutional Auto-Encoders

Accepted to CVPR 2021 🎉

Official PyTorch code of Unsupervised Hyperbolic Representation Learning via Message Passing Auto-Encoders

Jiwoong Park*, Junho Cho*, Hyung Jin Chang, Jin Young Choi (* indicates equal contribution)

[Figure: vis_cora] Embeddings of the Cora dataset. GAE is Graph Auto-Encoders in Euclidean space; HGCAE is our method. P denotes the Poincaré ball, H the hyperboloid.

Overview

This repository provides HGCAE code in PyTorch for reproducibility, with:

  • PoincareBall manifold
  • Link prediction task and node clustering task on graph data
    • 6 datasets: Cora, Citeseer, Wiki, Pubmed, Blog Catalog, Amazon Photo
    • Amazon Photo was downloaded via the torch-geometric package (see the download sketch after this list).
  • Image clustering task on images
    • 2 datasets: ImageNet10, ImageNetDog
    • Image features were extracted from ImageNet10 and ImageNetDog with the PICA image clustering algorithm.
    • A mutual k-NN graph built from these image features is provided (a construction sketch follows this list).
  • ImageNet-BNCR
    • We constructed a new dataset, ImageNet-BNCR (Balanced Number of Classes across Roots), by randomly choosing 3 leaf classes per root. We chose three roots: Artifacts, Natural objects, and Animal. Thus, ImageNet-BNCR contains 9 leaf classes, each with 1,300 images.
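
For reference, the Amazon Photo graph can be fetched with torch-geometric roughly as sketched below. The root directory is an arbitrary choice and the exact preprocessing used in this repo may differ.

```python
from torch_geometric.datasets import Amazon

# Download/load the Amazon Photo co-purchase graph (root path is arbitrary).
dataset = Amazon(root='data/AmazonPhoto', name='Photo')
data = dataset[0]  # node features `data.x`, edges `data.edge_index`, labels `data.y`
print(data.num_nodes, data.num_edges)
```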
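A mutual k-NN graph keeps an edge (i, j) only if i and j are among each other's k nearest neighbors. If you want to rebuild such a graph from your own features, here is a minimal sketch; the use of scikit-learn, the value of k, and the stand-in random feature matrix are assumptions, not the repo's exact recipe.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

def mutual_knn_graph(features, k=10):
    # Directed k-NN adjacency: entry (i, j) = 1 if j is among i's k nearest neighbors.
    knn = kneighbors_graph(features, n_neighbors=k, mode='connectivity', include_self=False)
    # Keep an edge only if it exists in both directions (mutual neighbors); result is symmetric.
    return knn.multiply(knn.T)

# Example with random stand-in features; real inputs would be the provided PICA embeddings.
adj = mutual_knn_graph(np.random.rand(100, 512), k=10)
print(adj.nnz)
```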

Installation Guide

We use Docker to reproduce the reported performance. Please refer to guide.md.

Usage

1. Run docker

Before training, run our docker image:

docker run --gpus all -it --rm --shm-size 100G -v $PWD:/workspace junhocho/hyperbolicgraphnn:8 bash

If you want to cache edge splits for the train/val datasets and load them faster afterwards, run mkdir ~/tmp first and then:

docker run --gpus all -it --rm --shm-size 100G -v $PWD:/workspace -v ~/tmp:/root/tmp junhocho/hyperbolicgraphnn:8 bash

2. train_<dataset>.sh

In the Docker session, run the training shell script of each dataset to reproduce performance:

Graph data link prediction

Run the following commands to reproduce the results:

  • sh script/train_cora_lp.sh
  • sh script/train_citeseer_lp.sh
  • sh script/train_wiki_lp.sh
  • sh script/train_pubmed_lp.sh
  • sh script/train_blogcatalog_lp.sh
  • sh script/train_amazonphoto_lp.sh

Dataset         ROC         AP
Cora            0.94890703  0.94726805
Citeseer        0.96059407  0.96305937
Wiki            0.95510805  0.96200790
Pubmed          0.96207212  0.96083080
Blog Catalog    0.89683939  0.88651569
Amazon Photo    0.98240673  0.97655753

Graph data node clustering

  • sh script/train_cora_nc.sh
  • sh script/train_citeseer_nc.sh
  • sh script/train_wiki_nc.sh
  • sh script/train_pubmed_nc.sh
  • sh script/train_blogcatalog_nc.sh
  • sh script/train_amazonphoto_nc.sh

Dataset         ACC         NMI         ARI
Cora            0.74667651  0.57252940  0.55212928
Citeseer        0.69311692  0.42249294  0.44101404
Wiki            0.45945946  0.46777881  0.21517031
Pubmed          0.74849115  0.37759262  0.40770875
Blog Catalog    0.55061586  0.32557388  0.25227964
Amazon Photo    0.78130719  0.69623651  0.60342107

Image clustering

  • sh script/train_ImageNet10.sh
  • sh script/train_ImageNetDog.sh

Dataset         ACC         NMI         ARI
ImageNet10      0.85592308  0.79019131  0.74181220
ImageNetDog     0.38738462  0.36059650  0.22696503

  • At least 11 GB of VRAM is required to run on Pubmed, BlogCatalog, and Amazon Photo.
  • We used only a GTX 1080 Ti in our experiments.
  • Other GPU architectures may not reproduce the performance above.

Parameter description

  • dataset : Dataset to use. Refer to the training scripts.
  • c : Curvature of the hyperbolic space. Must be > 0. Preferably choose from 0.1, 0.5, 1, 2 (a reference distance sketch follows this list).
  • c_trainable : 0 or 1. Train c if 1.
  • dropout : Dropout ratio.
  • weight_decay : Weight decay.
  • hidden_dim : Hidden layer dimension. Same dimension used in encoder and decoder.
  • dim : Embedding dimension.
  • lambda_rec : Input reconstruction loss weight.
  • act : Activation function; one of relu, elu, tanh.
  • --manifold PoincareBall : Use Euclidean instead to train Euclidean models.
  • --node-cluster 1 : If specified, perform the node clustering task; otherwise, the link prediction task.
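
For intuition on c: in the Poincaré ball model, c > 0 is the magnitude of the negative curvature and enters the distance function directly (c = 1 gives the standard unit ball). The snippet below is a generic reference implementation of that distance following Ganea et al. (2018), not code taken from this repo.

```python
import torch

def mobius_add(x, y, c):
    # Möbius addition on the Poincaré ball of curvature -c.
    xy = (x * y).sum(dim=-1, keepdim=True)
    x2 = (x * x).sum(dim=-1, keepdim=True)
    y2 = (y * y).sum(dim=-1, keepdim=True)
    num = (1 + 2 * c * xy + c * y2) * x + (1 - c * x2) * y
    denom = 1 + 2 * c * xy + c ** 2 * x2 * y2
    return num / denom.clamp_min(1e-15)

def poincare_distance(x, y, c=1.0):
    # d_c(x, y) = (2 / sqrt(c)) * artanh( sqrt(c) * ||(-x) (+)_c y|| )
    sqrt_c = c ** 0.5
    norm = mobius_add(-x, y, c).norm(dim=-1).clamp(max=(1.0 - 1e-5) / sqrt_c)
    return (2.0 / sqrt_c) * torch.atanh(sqrt_c * norm)
```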

Acknowledgments

This repo is inspired by hgcn.

And some of the code was forked from the following repositories:

License

This work is licensed under the MIT License

Citation

@inproceedings{park2021unsupervised,
  title={Unsupervised Hyperbolic Representation Learning via Message Passing Auto-Encoders},
  author={Jiwoong Park and Junho Cho and Hyung Jin Chang and Jin Young Choi},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  year={2021}
}

Owner
Junho Cho
Integrated Ph.D. candidate at Seoul National University (Perception and Intelligence Laboratory)