Overview

OpenHGNN

This is an open-source toolkit for Heterogeneous Graph Neural Networks (OpenHGNN) based on DGL (Deep Graph Library) and PyTorch. We integrate state-of-the-art (SOTA) models for heterogeneous graphs.

Key Features

  • Easy to Use: OpenHGNN provides easy-to-use interfaces for running experiments with the built-in models and datasets. We also integrate Optuna for hyperparameter optimization.
  • Extensibility: Users can define customized tasks, models, and datasets to apply new models to new scenarios; see the sketch after this list.
  • Efficiency: The DGL backend provides efficient APIs.
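
For example, a custom model might be registered roughly as follows. This is only a hedged sketch: the register_model decorator and the BaseModel import path are assumptions patterned after the @register_flow / @register_task decorators discussed in the issues below, and build_model_from_args mirrors the signature visible in the tracebacks there; check the openhgnn.models package for the real API.

    import torch.nn as nn
    from openhgnn.models import BaseModel, register_model  # assumed import path

    @register_model('MyToyModel')
    class MyToyModel(BaseModel):
        @classmethod
        def build_model_from_args(cls, args, hg):
            # Models are built from the parsed config (args) and the heterograph (hg).
            return cls(in_dim=args.hidden_dim, out_dim=args.out_dim)  # hypothetical config fields

        def __init__(self, in_dim, out_dim):
            super().__init__()
            self.linear = nn.Linear(in_dim, out_dim)

        def forward(self, hg, h_dict):
            # Toy logic: apply the same projection to the features of every node type.
            return {ntype: self.linear(h) for ntype, h in h_dict.items()}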

Get Started

Requirements and Installation

  • Python >= 3.6

  • PyTorch >= 1.7.1

  • DGL >= 0.7.0

  • CPU or NVIDIA GPU, Linux, Python3

1. Python environment (optional): we recommend using the Conda package manager.

conda create -n openhgnn python=3.7
source activate openhgnn

2. PyTorch: install PyTorch. For example:

# CUDA versions: cpu, cu92, cu101, cu102, cu110, cu111
pip install torch==1.8.0+cu101 -f https://download.pytorch.org/whl/torch_stable.html

3. DGL: install DGL following their instructions. For example:

# CUDA versions: cpu, cu101, cu102, cu110, cu111
pip install --pre dgl-cu101 -f https://data.dgl.ai/wheels-test/repo.html

4. OpenHGNN and other dependencies:

git clone https://github.com/BUPT-GAMMA/OpenHGNN
cd OpenHGNN
pip install -r requirements.txt

Running an existing baseline model on an existing benchmark dataset

python main.py -m model_name -d dataset_name -t task_name -g 0 --use_best_config

usage: main.py [-h] [--model MODEL] [--task TASK] [--dataset DATASET] [--gpu GPU] [--use_best_config]

optional arguments:

  -h, --help            show this help message and exit

  --model MODEL, -m MODEL    name of models

  --task TASK, -t TASK    name of task

  --dataset DATASET, -d DATASET    name of datasets

  --gpu GPU, -g GPU    controls which GPU you will use. If you do not have a GPU, set -g -1.

  --use_best_config    uses the best config for the given model on the given dataset. If you want to set different hyper-parameters, modify openhgnn/config.ini manually. The best config will override the parameters in config.ini.

  --use_hpo    besides --use_best_config, we provide a hyper-parameter search example to find the best hyper-parameters automatically.

e.g.:

python main.py -m GTN -d imdb4GTN -t node_classification -g 0 --use_best_config
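
The same experiment can also be launched from Python; below is a minimal sketch based on the openhgnn.experiment.Experiment class that appears in the issue reports further down (exact keyword arguments may vary between versions):

    from openhgnn.experiment import Experiment

    # Rough Python equivalent of the CLI call above.
    experiment = Experiment(model='GTN', dataset='imdb4GTN', task='node_classification',
                            gpu=0, use_best_config=True)
    experiment.run()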

OpenHGNN is under active development and is released as a nightly build. For now, we provide a number of models, such as HetGNN, NSHE, GTN, MAGNN, and RSHN.

Note: If you are interested in a particular model, refer to the models list below.

Refer to the docs for more basic and in-depth usage.

Models

Supported Models with specific task

The links in the table give some basic usage.

Model                 Node classification   Link prediction   Recommendation
RGCN [ESWC 2018]      ✔️                     ✔️
HAN [WWW 2019]        ✔️
KGCN [WWW 2019]                                               ✔️
HetGNN [KDD 2019]     ✔️                     ✔️
GTN [NeurIPS 2019]    ✔️
RSHN [ICDM 2019]      ✔️
DMGI [AAAI 2020]      ✔️
MAGNN [WWW 2020]      ✔️
CompGCN [ICLR 2020]   ✔️                     ✔️
NSHE [IJCAI 2020]     ✔️
NARS [arXiv]          ✔️
MHNF [arXiv]          ✔️
HGSL [AAAI 2021]      ✔️
HGNN-AC [WWW 2021]    ✔️
HPN [TKDE 2021]       ✔️
RHGNN [arXiv]         ✔️

Models to be supported

  • Metapath2vec[KDD 2017]

Candidate models

Contributors

GAMMA LAB [BUPT]: Tianyu Zhao, Yaoqi Liu, Fengqi Liang, Yibo Li, Yanhu Mo, Donglin Xia, Xinlong Zhai, Siyuan Zhang, Qi Zhang, Chuan Shi, Cheng Yang, Xiao Wang

BUPT: Jiahang Li, Anke Hu

DGL Team: Quan Gan, Jian Zhang

Comments
  • Attribute error

    I am training the HetGNN model for node classification. When I try to run the training script, I get the following error. Please help me: AttributeError: 'dict' object has no attribute 'srcdata'

    opened by faizan1234567 13
  • error in HetGNN_sampler.py

    line 168, in assign_features_to_blocks
        assign_simple_node_features(blocks[0].srcdata, g, ntypes)
    AttributeError: 'dict' object has no attribute 'srcdata'

    opened by Kingrd97 10
  • HetGNN produces exactly identical embeddings

    I ran HetGNN on the provided academic4HetGNN.zip dataset. Some of the resulting embeddings are exactly identical, and I do not know why. Is this expected?

    The test I ran:

    import numpy as np

    emb = np.load('emb50.npy')
    col = emb[:, 0]

    for i in np.unique(col):
        idx = np.argwhere(col == i)
        r = idx.reshape(1, -1).squeeze(0)
        if len(r) > 1:
            print('index for {}:\n'.format(i), r)
            for j in r:
                print(emb[j])

    opened by lixusign 9
  • Error to run without Cuda

    File "C:\Users\XyZ\OpenHGNN\openhgnn\models\GTN_sparse.py", line 220, in forward
        sum_g = dgl.adj_sum_graph(A, 'w_sum')
    AttributeError: module 'dgl' has no attribute 'adj_sum_graph'

    This issue came up while I ran the command: python main.py -m GTN -d imdb4GTN -t node_classification -g -1 --use_best_config

    Can someone tell me where I went wrong?

    opened by M-Somtirth 4
  • Unable to train with GPU

    python main.py -m KGCN -d LastFM4KGCN -t recommendation -g 0 --use_best_config

    RuntimeError: Tensor for argument #2 'mat1' is on CPU, but expected it to be on GPU (while checking arguments for addmm)

    opened by Tingting-Liu-star 4
  • Where can I find the dataset build program?

    For example, for the dataset at https://github.com/BUPT-GAMMA/OpenHGNN/tree/main/openhgnn/dataset#academic4HetGNN:

    when I call extract_archive, I get a .bin file containing the graph g.

    But where can I find out how to build such a dataset with a standalone DGL program?
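
    Not from the original thread, but a minimal sketch of how a heterogeneous graph can be built and saved in the same .bin format with plain DGL (the node/edge types, features, and file name below are made up for illustration):

    import torch
    import dgl

    # Toy heterograph: 3 users rating 3 items.
    g = dgl.heterograph({
        ('user', 'rates', 'item'):    (torch.tensor([0, 1, 2]), torch.tensor([1, 2, 0])),
        ('item', 'rated-by', 'user'): (torch.tensor([1, 2, 0]), torch.tensor([0, 1, 2])),
    })
    g.nodes['user'].data['h'] = torch.randn(g.num_nodes('user'), 16)   # toy features
    g.nodes['user'].data['label'] = torch.tensor([0, 1, 0])            # toy labels

    # Save / reload in the same .bin format used by the downloaded datasets.
    dgl.save_graphs('my_graph.bin', [g])
    glist, _ = dgl.load_graphs('my_graph.bin')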

    opened by lixusign 4
  • Error when running GTN & fastGTN

    Thank you very much for providing this tool. I get an error when I run fastGTN using:

    python main.py -m fastGTN -t node_classification -d acm4GTN -g 0 --use_best_config

    The error is as follows:

    Traceback (most recent call last):
      File "D:/github/OpenHGNN/main.py", line 30, in <module>
        OpenHGNN(args=config)
      File "D:\github\OpenHGNN\openhgnn\start.py", line 19, in OpenHGNN
        result = flow.train()
      File "D:\github\OpenHGNN\openhgnn\trainerflow\node_classification.py", line 112, in train
        train_loss = self._full_train_step()
      File "D:\github\OpenHGNN\openhgnn\trainerflow\node_classification.py", line 152, in _full_train_step
        logits = self.model(self.hg, h_dict)[self.category]
      File "D:\Program Files (x86)\anaconda\envs\OpenHGNN\lib\site-packages\torch\nn\modules\module.py", line 1110, in _call_impl
        return forward_call(*input, **kwargs)
      File "D:\github\OpenHGNN\openhgnn\models\fastGTN.py", line 119, in forward
        hat_A = self.layers[i]
      File "D:\Program Files (x86)\anaconda\envs\OpenHGNN\lib\site-packages\torch\nn\modules\module.py", line 1110, in _call_impl
        return forward_call(*input, **kwargs)
      File "D:\github\OpenHGNN\openhgnn\models\fastGTN.py", line 180, in forward
        sum_g = dgl.adj_sum_graph(A, 'w_sum')
      File "D:\Program Files (x86)\anaconda\envs\OpenHGNN\lib\site-packages\dgl\transforms\functional.py", line 2766, in adj_sum_graph
        C_gidx, C_weights = F.csrsum(gidxs, weights)
      File "D:\Program Files (x86)\anaconda\envs\OpenHGNN\lib\site-packages\dgl\backend\pytorch\sparse.py", line 817, in csrsum
        nrows, ncols, C_indptr, C_indices, C_eids, C_weights = CSRSum.apply(gidxs, *weights)
      File "D:\Program Files (x86)\anaconda\envs\OpenHGNN\lib\site-packages\dgl\backend\pytorch\sparse.py", line 668, in forward
        gidxC, C_weights = _csrsum(gidxs, weights)
      File "D:\Program Files (x86)\anaconda\envs\OpenHGNN\lib\site-packages\dgl\sparse.py", line 776, in _csrsum
        C, C_weights = _CAPI_DGLCSRSum(As, [F.to_dgl_nd(w) for w in A_weights])
      File "D:\Program Files (x86)\anaconda\envs\OpenHGNN\lib\site-packages\dgl\_ffi\_ctypes\function.py", line 188, in __call__
        check_call(_LIB.DGLFuncCall(
      File "D:\Program Files (x86)\anaconda\envs\OpenHGNN\lib\site-packages\dgl\_ffi\base.py", line 65, in check_call
        raise DGLError(py_str(_LIB.DGLGetLastError()))
    dgl._ffi.base.DGLError: [15:31:21] C:\Users\Administrator\dgl-0.5\src\array\kernel.cc:471: Check failed: A[i].indptr->dtype == idtype (int64 vs. int32) : The ID types of all graphs must be equal.

    I use the following software versions:

    python = 3.8
    cudatoolkit = 11.3.1
    torch = 1.11.0+cu113
    dgl-cu113 = 0.8.1 & 0.8.0

    Then I ran the same versions of the software on my Ubuntu server with no errors.
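
    Not a reply from the original thread, but since the check fails on mismatched int64 vs. int32 ID types, one possible workaround (a sketch only, assuming the loaded heterograph hg uses int32 IDs) is to cast the graph to int64 before training:

    # Sketch: cast all node/edge IDs to int64 so the sparse kernels see one ID type.
    hg = hg.long()      # DGLGraph.long() returns a copy of the graph with idtype int64
    print(hg.idtype)    # expected: torch.int64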

    opened by huihuijiangqiang 3
  • Bugs in minibatch training

    🐛 Bug

    To Reproduce

    The error occurred in the _mini_train_step function in trainerflow/node_classification.py when using mini_batch_flag in the node_classification task with the SimpleHGN model:

    import argparse
    from openhgnn.experiment import Experiment
    
    if __name__ == '__main__':
        parser = argparse.ArgumentParser()
        parser.add_argument('--model', '-m', default='SimpleHGN', type=str, help='name of models')
        parser.add_argument('--task', '-t', default='node_classification', type=str, help='name of task')
        # link_prediction / node_classification
        parser.add_argument('--dataset', '-d', default='imdb4MAGNN', type=str, help='name of datasets')
        parser.add_argument('--gpu', '-g', default='0', type=int, help='-1 means cpu')
        parser.add_argument('--use_best_config', action='store_true', help='will load utils.best_config')
        parser.add_argument('--load_from_pretrained', action='store_true', help='load model from the checkpoint')
        args = parser.parse_args()
    
        experiment = Experiment(model=args.model, dataset=args.dataset, task=args.task, gpu=args.gpu,
                                use_best_config=args.use_best_config, load_from_pretrained=args.load_from_pretrained, mini_batch_flag = True, batch_size=64)
        experiment.run()
    
    

    Expected behavior

    Minibatch training on a large heterograph

    Environment

    • torch==1.12.1
    • dgl-cu113==0.9.0 # for CUDA support
    • openhgnn==0.3.0
    • Linux
    • Python 3.8.13

    Additional context

    • the default minibatch sampler is MultiLayerFullNeighborSampler
    • blocks is a list (line 164), while the expected input to the model's forward function (e.g. SimpleHGN) is an hg (line 159):
    for i, (input_nodes, seeds, blocks) in enumerate(loader_tqdm):
        blocks = [blk.to(self.device) for blk in blocks]
        ...
        logits = self.model(blocks, emb)[self.category]
    
    def forward(self, hg, h_dict):
        with hg.local_scope():
            hg.ndata['h'] = h_dict
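
    A common way to let one forward pass handle both cases (just a sketch of the usual DGL pattern, not the fix adopted in the repo; self.layers stands for a hypothetical per-layer ModuleList) is to branch on the input type:

    def forward(self, hg, h_dict):
        if isinstance(hg, list):
            # Mini-batch mode: `hg` is a list of blocks, one per layer.
            for layer, block in zip(self.layers, hg):
                h_dict = layer(block, h_dict)
        else:
            # Full-graph mode: run every layer on the same heterograph.
            with hg.local_scope():
                for layer in self.layers:
                    h_dict = layer(hg, h_dict)
        return h_dict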
    
    opened by suxnju 2
  • Confusion about the order of HetGNN embeddings

    A question: in x = self.model(blocks[0], input_features), the returned x is a dict. How does the embedding for each node_type in it correspond to the order of the input nodes of blocks[0]?

    After checking, I found that it is not the node order given by blocks[0].srcnodes[node_type].data[dgl.NID].

    opened by lixusign 2
  • 'HIN_LinkPrediction' object has no attribute 'get_idx'

    "\OpenHGNN-main\openhgnn\tasks\link_prediction.py", line 32, in __init__
        self.train_hg, self.val_hg, self.test_hg = self.dataset.get_idx()
    AttributeError: 'HIN_LinkPrediction' object has no attribute 'get_idx'

    opened by xuptacm 2
  • Unable to run GTN with the acm4GTN dataset

    Running: python main.py -m GTN -t node_classification -d acm4GTN -g 0 --use_best_config

    Error message:

    Using backend: pytorch
    Use the best config.
    Done saving data into cached files.
    Modify the out_dim with num_classes
    0%| | 0/50 [00:00<?, ?it/s]
    Traceback (most recent call last):
      File "main.py", line 24, in <module>
        OpenHGNN(args=config)
      File "/home/special/user/lihaoran/OpenHGNN_clone_from_github/openhgnn/start.py", line 17, in OpenHGNN
        result = flow.train()
      File "/home/special/user/lihaoran/OpenHGNN_clone_from_github/openhgnn/trainerflow/node_classification.py", line 77, in train
        loss = self._full_train_step()
      File "/home/special/user/lihaoran/OpenHGNN_clone_from_github/openhgnn/trainerflow/node_classification.py", line 109, in _full_train_step
        loss.backward()
      File "/opt/miniconda3/lib/python3.7/site-packages/torch/_tensor.py", line 255, in backward
        torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
      File "/opt/miniconda3/lib/python3.7/site-packages/torch/autograd/__init__.py", line 149, in backward
        allow_unreachable=True, accumulate_grad=True)  # allow_unreachable flag
      File "/opt/miniconda3/lib/python3.7/site-packages/torch/autograd/function.py", line 87, in apply
        return self._forward_cls.backward(self, *args)  # type: ignore[attr-defined]
      File "/opt/miniconda3/lib/python3.7/site-packages/dgl/backend/pytorch/sparse.py", line 544, in backward
        gidxA.reverse(), A_weights, gidxC, dC_weights, gidxB.number_of_ntypes())
      File "/opt/miniconda3/lib/python3.7/site-packages/dgl/backend/pytorch/sparse.py", line 638, in csrmm
        CSRMM.apply(gidxA, A_weights, gidxB, B_weights, num_vtypes)
      File "/opt/miniconda3/lib/python3.7/site-packages/dgl/backend/pytorch/sparse.py", line 528, in forward
        gidxC, C_weights = _csrmm(gidxA, A_weights, gidxB, B_weights, num_vtypes)
      File "/opt/miniconda3/lib/python3.7/site-packages/dgl/sparse.py", line 548, in _csrmm
        A, F.to_dgl_nd(A_weights), B, F.to_dgl_nd(B_weights), num_vtypes)
      File "dgl/_ffi/_cython/./function.pxi", line 287, in dgl._ffi._cy3.core.FunctionBase.__call__
      File "dgl/_ffi/_cython/./function.pxi", line 232, in dgl._ffi._cy3.core.FuncCall
      File "dgl/_ffi/_cython/./base.pxi", line 155, in dgl._ffi._cy3.core.CALL
    dgl._ffi.base.DGLError: [17:18:53] /opt/dgl/src/array/cuda/csr_mm.cu:87: Check failed: e == CUSPARSE_STATUS_SUCCESS: CUSPARSE ERROR: 11
    Stack trace:
      [bt] (0) /opt/miniconda3/lib/python3.7/site-packages/dgl/libdgl.so(dmlc::LogMessageFatal::~LogMessageFatal()+0x4f) [0x7fd13c2565df]
      [bt] (1) /opt/miniconda3/lib/python3.7/site-packages/dgl/libdgl.so(std::pair<dgl::aten::CSRMatrix, dgl::runtime::NDArray> dgl::aten::cusparse::CusparseSpgemm<float, int>(dgl::aten::CSRMatrix const&, dgl::runtime::NDArray, dgl::aten::CSRMatrix const&, dgl::runtime::NDArray)+0x625) [0x7fd13c6accd5]
      [bt] (2) /opt/miniconda3/lib/python3.7/site-packages/dgl/libdgl.so(std::pair<dgl::aten::CSRMatrix, dgl::runtime::NDArray> dgl::aten::CSRMM<2, long, float>(dgl::aten::CSRMatrix const&, dgl::runtime::NDArray, dgl::aten::CSRMatrix const&, dgl::runtime::NDArray)+0x59e) [0x7fd13c6af81e]
      [bt] (3) /opt/miniconda3/lib/python3.7/site-packages/dgl/libdgl.so(dgl::aten::CSRMM(dgl::aten::CSRMatrix, dgl::runtime::NDArray, dgl::aten::CSRMatrix, dgl::runtime::NDArray)+0x10d6) [0x7fd13c493466]
      [bt] (4) /opt/miniconda3/lib/python3.7/site-packages/dgl/libdgl.so(+0x48cfa8) [0x7fd13c493fa8]
      [bt] (5) /opt/miniconda3/lib/python3.7/site-packages/dgl/libdgl.so(+0x48d724) [0x7fd13c494724]
      [bt] (6) /opt/miniconda3/lib/python3.7/site-packages/dgl/libdgl.so(DGLFuncCall+0x48) [0x7fd13c4d5c78]
      [bt] (7) /opt/miniconda3/lib/python3.7/site-packages/dgl/_ffi/_cy3/core.cpython-37m-x86_64-linux-gnu.so(+0x163ea) [0x7fd1136f03ea]
      [bt] (8) /opt/miniconda3/lib/python3.7/site-packages/dgl/_ffi/_cy3/core.cpython-37m-x86_64-linux-gnu.so(+0x1695b) [0x7fd1136f095b]

    GPU: A100-PCIE. DGL version: dgl-cu111-0.8a211008. It looks like the logits can be obtained, but backpropagation fails.

    Strangely, there is no problem at all when running the imdb4GTN dataset, and running MHNF on acm4GTN reports the same error.

    I see there are two GTN implementations, GTN_sparse.py and GTN.py, and GTN_sparse is used by default. GTN.py can run acm4GTN, but the accuracy is only around 60%.

    opened by a772316182 2
  • Help needed: Wanted behavior of Experiment.specific_trainerflow.get method and task/trainerflow registration

    Hi, I am trying to create a new trainer flow, as well as a new task. I am struggling a bit and have a few questions. When I register them with @register_flow(str_flow) and @register_task(str_task), must str_task and str_flow be identical?
    Because my flow is not specific to a model, it is not in the specific_trainerflow dictionary defined in the Experiment class. So line 92 in experiment.py ( trainerflow = self.specific_trainerflow.get(self.config.model, self.config.task) ) returns the key of the task as the trainerflow key. Is this the wanted behavior?

    Thanks!

    opened by Carayolj 0
  • run HGSL model error

    🐛 Bug

    When I run the suggested command:

    python main.py -m HGSL -d acm4GTN -t node_classification -g 0 --use_best_config
    

    it raises an error like:

    Traceback (most recent call last): File "main.py", line 21, in experiment.run() File "/workspace/OpenHGNN/openhgnn/experiment.py", line 97, in run flow = build_flow(self.config, trainerflow) File "/workspace/OpenHGNN/openhgnn/trainerflow/init.py", line 46, in build_flow return FLOW_REGISTRYflow_name File "/workspace/OpenHGNN/openhgnn/trainerflow/node_classification.py", line 42, in init self.model = build_model(self.model).build_model_from_args(self.args, self.hg).to(self.device) File "/workspace/OpenHGNN/openhgnn/models/HGSL.py", line 106, in build_model_from_args mp_emb_dim = hg.nodes["paper"].data["pap_m2v_emb"].shape[1] File "/opt/conda/lib/python3.7/site-packages/dgl/view.py", line 73, in getitem return self._graph._get_n_repr(self._ntid, self._nodes)[key] File "/opt/conda/lib/python3.7/site-packages/dgl/frame.py", line 622, in getitem return self._columns[name].data KeyError: 'pap_m2v_emb'

    It seems there is no pap_m2v_emb key in the paper nodes' data, so how can I fix it?
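
    Not an official answer, but the traceback shows HGSL reading a precomputed metapath embedding from hg.nodes['paper'].data['pap_m2v_emb']. A rough sketch of attaching such a tensor yourself (random numbers below stand in for real metapath2vec embeddings, and the dimension 64 is arbitrary) would be:

    import torch

    # Placeholder metapath embedding for the paper-author-paper (PAP) metapath.
    num_paper = hg.num_nodes('paper')
    hg.nodes['paper'].data['pap_m2v_emb'] = torch.randn(num_paper, 64)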


    More errors: when I just set mp_emb_dim = 0 to skip this line, more errors are raised, such as hidden_dim and mini_batch_flag not being defined in the config. Besides, when I managed to run the model further, another exception was raised:

    [image: exception screenshot]

    Do you have an updated version of the model?

    Sincere thanks.

    To Reproduce

    Steps to reproduce the behavior:

    1. cd OpenHGNN
    2. python main.py -m HGSL -d acm4GTN -t node_classification -g 0 --use_best_config

    Expected behavior

    Environment

    • OpenHGNN Version (e.g., 1.0):
    • PyTorch latest, DGL latest
    • Linux
    • python main.py -m HGSL -d acm4GTN -t node_classification -g 0 --use_best_config
    • best_config for recommend
    opened by vchopin 1
  • How to train a model using my own dataset?

    ❓ Questions and Help

    I want to train on my own heterogeneous graph data. Could you tell me how to edit the code for this? The data in ./openhgnn/dataset are downloaded from https://s3.cn-north-1.amazonaws.com.cn/dgl-data/ and are .bin files. So how can I swap in my own dataset? Please help!

    opened by Fino2020 1
  • [DHNE]

    Description

    Checklist

    Please feel free to remove inapplicable items for your PR.

    • [x] The PR title starts with [$CATEGORY] (such as [NN], [Model], [Doc], [Feature])
    • [x] Changes are complete (i.e. I finished coding on this PR)
    • [x] All changes have test coverage
    • [x] Code is well-documented
    • [x] To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
    • [x] Related issue is referred in this PR
    • [x] If the PR is for a new model/paper, I've updated the example index here.

    Changes

    opened by Vera-200 0
  • [Model]Mg2vec

    Description

    Add the Mg2vec Model and add the EdgeClassification Task

    Checklist

    Please feel free to remove inapplicable items for your PR.

    • [ ] The PR title starts with [$CATEGORY] (such as [NN], [Model], [Doc], [Feature])
    • [ ] Changes are complete (i.e. I finished coding on this PR)
    • [ ] All changes have test coverage
    • [ ] Code is well-documented
    • [ ] To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
    • [ ] Related issue is referred in this PR
    • [ ] If the PR is for a new model/paper, I've updated the example index here.

    Changes

    • [ ] Add configs for Mg2vec in config.ini and config.py
    • [ ] Add Mg2vec.py, which contains the model part
    • [ ] Add mg2vec_sampler.py for reading data
    • [ ] Add mg2vec_trainer.py for training
    • [ ] Add EdgeClassificationDataset.py for EdgeClassification Task, which is a modified version of NodeClassificationDataset.py
    • [ ] Add mg2vec_dataset.py for download/read mg2vec dataset
    • [ ] Add edge_classification.py, which is a modified version of node_classification.py
    • [ ] Add ec_with_SVC function in evaluator.py for edge_classification task
    • [ ] Add readme.md for Mg2vec model
    • [ ] Modify the corresponding __init__.py files and experiment.py
    opened by null-xyj 0
Releases (v0.3.0)
Owner
BUPT GAMMA Lab
Graph dAta Mining and MAchine learning Lab at Beijing University of Posts and Telecommunications