Multivariate Time Series Forecasting with Efficient Transformers. Code for the paper "Long-Range Transformers for Dynamic Spatiotemporal Forecasting."

Overview

Spacetimeformer Multivariate Forecasting

This repository contains the code for the paper "Long-Range Transformers for Dynamic Spatiotemporal Forecasting," Grigsby, Wang, and Qi, 2021. (arXiv)

[Figure: spatiotemporal embedding diagram]

Transformers are a high-performance approach to sequence-to-sequence time series forecasting. However, embedding every variable into each token only allows the model to learn temporal relationships across time; this can ignore important spatial relationships between variables. Our model (nicknamed "Spacetimeformer") flattens multivariate time series into extended sequences where each token represents the value of one variable at a given timestep. Long-Range Transformers can then learn relationships over both time and space. A minimal sketch of this flattening appears below. For much more information, please refer to our paper.
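
A hedged illustration of the flattened token layout (toy shapes and variable names here are hypothetical; this is a sketch of the idea, not the repository's exact preprocessing code):

    import numpy as np

    # Toy multivariate series: T timesteps, N variables.
    T, N = 4, 3
    y = np.arange(T * N, dtype=np.float32).reshape(T, N)  # y[t, n] = variable n at time t
    t = np.arange(T, dtype=np.float32)                    # time index of each step

    # Standard "temporal" tokens: one token per timestep holding all N variables.
    temporal_tokens = y                                    # shape (T, N)

    # Spacetimeformer-style flattening: one token per (timestep, variable) pair,
    # so the sequence grows from T tokens to T * N tokens.
    flat_value = y.reshape(T * N, 1)                       # the token's value
    flat_time = np.repeat(t, N).reshape(T * N, 1)          # which timestep it belongs to
    flat_var = np.tile(np.arange(N), T).reshape(T * N, 1)  # which variable it belongs to

    spatiotemporal_tokens = np.concatenate([flat_time, flat_var, flat_value], axis=-1)
    print(spatiotemporal_tokens.shape)  # (12, 3): (time, variable id, value) per token

Attention over this longer sequence can relate any variable at any timestep to any other, which is what lets the model learn spatial as well as temporal relationships.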

We will be adding additional instructions, example commands and dataset links in the coming days.

Installation

This repository was written and tested for Python 3.8 and PyTorch 1.9.0.

git clone https://github.com/UVA-MachineLearningBioinformatics/spacetimeformer.git
cd spacetimeformer
conda create -n spacetimeformer python==3.8
source activate spacetimeformer
pip install -r requirements.txt
pip install -e .

This installs a Python package called spacetimeformer.
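
A quick, optional sanity check that the editable install worked (our suggestion, not an official step):

python -c "import spacetimeformer"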

Dataset Setup

CSV datasets like AL Solar, NY-TX Weather, Exchange Rates, and the toy example are included with the source code of this repo.

Larger datasets should be downloaded and their folders placed in the data/ directory. You can find them with this Google Drive link. Note that the metr-la and pems-bay data are taken directly from this repo; all we've done is skip a step for you and provide the raw train, val, test *.npz files our dataset code expects.

Recreating Experiments with Our Training Script

The main training functionality for spacetimeformer and most baselines considered in the paper can be found in the train.py script. The training loop is based on the pytorch_lightning framework.

Command-line instructions for each experiment can be found using the format: python train.py *model* *dataset* -h.
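
For example, to list the options for the spacetimeformer model on the metr-la dataset:

python train.py spacetimeformer metr-la -h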

Model Names:

  • linear: a basic autoregressive linear model.
  • lstnet: a more typical RNN/Conv1D model for multivariate forecasting. Based on the attention-free implementation of LSTNet.
  • lstm: a typical encoder-decoder LSTM without attention. We use scheduled sampling to anneal teacher forcing throughout training (see the sketch after this list).
  • mtgnn: a hybrid GNN that learns its graph structure from data. For more information, refer to the MTGNN paper. We use the implementation from pytorch_geometric_temporal.
  • spacetimeformer: the multivariate long-range transformer architecture discussed in our paper.
    • note that the "Temporal" ablation discussed in the paper is a special case of the spacetimeformer model: set --embed_method temporal. Spacetimeformer has many configurable options, and we try to provide a thorough explanation with the command-line -h instructions.
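
Below is a minimal sketch of the scheduled-sampling idea used for the lstm baseline (all names and the annealing schedule are hypothetical; see the repository's lstm code for the actual implementation):

    import random
    import torch
    import torch.nn as nn

    def decode_with_scheduled_sampling(cell: nn.LSTMCell, out_proj: nn.Linear,
                                       h, c, first_input, targets, tf_prob):
        # Roll out the decoder, feeding the ground truth as the next input with
        # probability tf_prob (teacher forcing) and the model's own prediction
        # otherwise. Annealing tf_prob from 1.0 toward 0.0 over training turns
        # teacher forcing off gradually.
        inp, preds = first_input, []
        for step in range(targets.size(1)):
            h, c = cell(inp, (h, c))
            pred = out_proj(h)
            preds.append(pred)
            use_truth = random.random() < tf_prob
            inp = targets[:, step] if use_truth else pred.detach()
        return torch.stack(preds, dim=1)

    # e.g., linear annealing across training: tf_prob = max(0.0, 1.0 - epoch / total_epochs)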

Dataset Names:

  • metr-la and pems-bay: traffic forecasting datasets. We use a very similar setup to DCRNN.
  • toy2: the toy dataset mentioned at the beginning of our experiments section. It is heavily based on the toy dataset in TPA-LSTM.
  • asos: the codebase's name for what the paper calls "NY-TX Weather."
  • solar_energy: the codebase's name for what is more commonly called "AL Solar."
  • exchange: a dataset of exchange rates. Spacetimeformer performs relatively well, but this is a tiny dataset of highly non-stationary data where linear is already a SOTA model.
  • precip: a challenging spatial message-passing task that we have not yet been able to solve. We collected daily precipitation data from a latitude-longitude grid over the continental United States. The multivariate sequences are sampled from a ringed "radar" configuration as shown below in green. We expand the size of the dataset by randomly moving this radar around the country.

Logging with Weights and Biases

We used wandb to track all of our results during development, and you can do the same by hardcoding the correct organization/username and project name in the train.py file. Comments indicate the location near the top of the main method. wandb logging can then be enabled with the --wandb flag.
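
The edit looks something like the following (variable names here are placeholders, not the exact names in train.py; the comments in that file mark the real location):

    # Near the top of main() in train.py:
    project = "spacetimeformer"      # your wandb project name
    entity = "your-username-or-org"  # your wandb organization/username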

There are two automated figures that can be saved to wandb between epochs: attention diagrams (e.g., Figure 4 of our paper) and prediction plots (e.g., Figure 6 of our paper). Enable attention diagrams with --attn_plot and prediction curves with --plot.

Example Spacetimeformer Training Commands

Toy Dataset

python train.py spacetimeformer toy2 --run_name spatiotemporal_toy2 \
--d_model 100 --d_ff 400 --enc_layers 4 --dec_layers 4 \
--gpus 0 1 2 3 --batch_size 32 --start_token_len 4 --n_heads 4 \
--grad_clip_norm 1 --early_stopping --trials 1

Metr-LA

python train.py spacetimeformer metr-la --start_token_len 3 --batch_size 32 \
--gpus 0 1 2 3 --grad_clip_norm 1 --d_model 128 --d_ff 512 --enc_layers 5 \
--dec_layers 4 --dropout_emb .3 --dropout_ff .3 --dropout_qkv 0 \
--run_name spatiotemporal_metr-la --base_lr 1e-3 --l2_coeff 1e-2

Temporal Attention Ablation with Negative Log Likelihood Loss on NY-TX Weather ("asos") with WandB Logging and Figures

python train.py spacetimeformer asos --context_points 160 --target_points 40 \
--start_token_len 8 --grad_clip_norm 1 --gpus 0 --batch_size 128 \
--d_model 200 --d_ff 800 --enc_layers 3 --dec_layers 3 \
--local_self_attn none --local_cross_attn none --l2_coeff .01 \
--dropout_emb .1 --run_name temporal_asos_160-40-nll --loss nll \
--time_resolution 1 --dropout_ff .2 --n_heads 8 --trials 3 \
--embed_method temporal --early_stopping --wandb --attn_plot --plot

Using Spacetimeformer in Other Applications

If you want to use our model in the context of other datasets or training loops, you will probably want to go a step lower than the spacetimeformer_model.Spacetimeformer_Forecaster pytorch-lightning wrapper. Please see spacetimeformer_model.nn.Spacetimeformer.

[Figure: Spacetimeformer architecture]
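
As a hedged sketch of the call pattern (the shapes and setup below are hypothetical; the forward signature and output fields follow the usage discussed in the issues at the end of this page):

    import torch

    # Assume `model` is an instance of spacetimeformer_model.nn.Spacetimeformer
    # built for 6 time features (x) and 3 measured variables (y).
    x_c = torch.randn(8, 32, 6)   # context time features: (batch, context_len, d_x)
    y_c = torch.randn(8, 32, 3)   # context values:        (batch, context_len, d_y)
    x_t = torch.randn(8, 8, 6)    # target time features:  (batch, target_len, d_x)
    y_t = torch.zeros(8, 8, 3)    # unknown at inference time; zeros of the right shape

    output, (logits, labels), attn = model(x_c, y_c, x_t, y_t)
    forecast = output.loc  # point forecast; output.scale is the spread under the NLL loss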

Citation

If you use this model in academic work, please feel free to cite our paper:

@misc{grigsby2021longrange,
      title={Long-Range Transformers for Dynamic Spatiotemporal Forecasting}, 
      author={Jake Grigsby and Zhe Wang and Yanjun Qi},
      year={2021},
      eprint={2109.12218},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}

[Figure: spatiotemporal embedding]

Comments
  • training_epoch_loop.py

    training_epoch_loop.py", line 486, in _update_learning_rates
        raise MisconfigurationException(
    pytorch_lightning.utilities.exceptions.MisconfigurationException: ReduceLROnPlateau conditioned on metric val/forecast_loss which is not available. Available metrics are: ['train/mape', 'train/mae', 'train/mse', 'train/rse', 'train/forecast_loss', 'train/class_loss', 'train/loss', 'train/acc']. Condition can be set using `monitor` key in lr scheduler dict

    opened by jeevanu 6
  • "CUDA extension for cauchy multiplication not found" when running toy project

    Hi!

    I get the error "CUDA extension for cauchy multiplication not found" when running the command below. I have tried the suggested "python setup.py install" in the cauchy directory without success.

    python train.py spacetimeformer toy2 --run_name spatiotemporal_toy2 --d_model 100 --d_ff 400 --enc_layers 2 --dec_layers 2 --gpus 0 --batch_size 32 --start_token_len 4 --n_heads 4 --grad_clip_norm 1 --early_stopping --trials 1
    
    CUDA extension for cauchy multiplication not found. go to ./extensions/cauchy and try `python setup.py install`
    

    Any tips on how to fix this error?

    opened by christofer-f 4
  • How to get inference result?

    Thanks for your great work.

    I want to run inference with a trained spacetimeformer model. The model needs four inputs: (x_c, y_c, x_t, y_t). We have only x_c, y_c, and x_t ready.

    • y_t must be zeros with the right shape, right?

    • output, (logits, labels), attn = spacetimeformer(x_c, y_c, x_t, y_t). The output has two tensors: output.loc and output.scale. Which is the real output for y_t here: output.loc or output.scale?

      Thanks in advance.

    opened by TNA8 4
  • Using the model for inference

    Hi,

    Thank you for this module. Is there an example of running inference? I tried adding self.save_hyperparameters() to the Spacetimeformer_Forecaster init and then using the checkpoint as below, but this requires x_c, y_c, x_t, and y_t. Shouldn't inference only need the context and not the target? Is there some other step I need to take to produce a model that can be used for inference?

        model = Spacetimeformer_Forecaster.load_from_checkpoint(checkpoint_path=checkpoint_path)
    

    Thank you for your help.

    opened by threadthreasher 4
  • Question about embedding

    Hi, great job!

    I just have a quick question about the embedding part. Although positional embedding is standard in Transformer-based models, it seems you embed many different kinds of information into the original sequence additively. I am not sure if my understanding is correct. The concatenation form is easier for the network to learn, at the cost of more parameters, and vice versa for the addition form. I believe the neural network has a certain capacity to extract different information from the raw input and project it into a higher-order latent space, but could performance deteriorate if we add too much? And would a larger hidden dimension mitigate this?

    BTW, I have also heard the explanation that (a+b)' = a' + b', which means the addition and concatenation operations are equivalent from a gradient perspective. That contradicts my intuition.

    Thanks very much!

    opened by kpmokpmo 4
  • Issue while loading a model

    Hi, after training I'm trying to save the model and load it from the checkpoint, but I get a size-mismatch error. I tried loading it as soon as training was over, but I still have the issue. It looks like the same issue as #13.

    The code :

        # Test from the code
        trainer.test(datamodule=data_module, ckpt_path="best")
        # Added only these two lines
        trainer.save_checkpoint("best.ckpt")
        forecaster = forecaster.load_from_checkpoint(checkpoint_path="best.ckpt")
    

    And I got a LOT of size mismatch errors:

    RuntimeError: Error(s) in loading state_dict for Spacetimeformer_Forecaster:
    	size mismatch for spacetimeformer.enc_embedding.x_emb.embed_weight: copying a param with shape torch.Size([7, 6]) from checkpoint, the shape in current model is torch.Size([5, 6]).
    	size mismatch for spacetimeformer.enc_embedding.y_emb.weight: copying a param with shape torch.Size([512, 43]) from checkpoint, the shape in current model is torch.Size([512, 31]).
    	size mismatch for spacetimeformer.enc_embedding.var_emb.weight: copying a param with shape torch.Size([18, 512]) from checkpoint, the shape in current model is torch.Size([1, 512]).
    	size mismatch for spacetimeformer.dec_embedding.x_emb.embed_weight: copying a param with shape torch.Size([7, 6]) from checkpoint, the shape in current model is torch.Size([5, 6]).
    	size mismatch for spacetimeformer.dec_embedding.y_emb.weight: copying a param with shape torch.Size([512, 43]) from checkpoint, the shape in current model is torch.Size([512, 31]).
    	size mismatch for spacetimeformer.dec_embedding.var_emb.weight: copying a param with shape torch.Size([18, 512]) from checkpoint, the shape in current model is torch.Size([1, 512]).
    	size mismatch for spacetimeformer.classifier.weight: copying a param with shape torch.Size([18, 512]) from checkpoint, the shape in current model is torch.Size([1, 512]).
    	size mismatch for spacetimeformer.classifier.bias: copying a param with shape torch.Size([18]) from checkpoint, the shape in current model is torch.Size([1]).
    

    What am I doing wrong?

    Also, I didn't understand what --start_token_len does:

    Length of decoder start token. Adds this many of the final context points to the start of the target sequence.
    

    For example, if I have one data point every hour and start_token_len = 3, will it predict the 3 hours from now? And train to predict those values?

    Best regards! And thanks for the model!

    opened by jgcb00 3
  • GPU memory leak

    After 80 epochs (8 hours), I got this error:

    Epoch 80:  87%|████████████████▌  | 750/858 [04:56<00:42,  2.53it/s, loss=0.258]Traceback (most recent call last):
      File "train.py", line 442, in <module>
        main(args)
      File "train.py", line 424, in main
        trainer.fit(forecaster, datamodule=data_module)
      File "/home/u7701783/.local/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 460, in fit
        self._run(model)
      File "/home/u7701783/.local/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 758, in _run
        self.dispatch()
      File "/home/u7701783/.local/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 799, in dispatch
        self.accelerator.start_training(self)
      File "/home/u7701783/.local/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py", line 96, in start_training
        self.training_type_plugin.start_training(trainer)
      File "/home/u7701783/.local/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 144, in start_training
        self._results = trainer.run_stage()
      File "/home/u7701783/.local/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 809, in run_stage
        return self.run_train()
      File "/home/u7701783/.local/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 871, in run_train
        self.train_loop.run_training_epoch()
      File "/home/u7701783/.local/lib/python3.8/site-packages/pytorch_lightning/trainer/training_loop.py", line 569, in run_training_epoch
        self.trainer.logger_connector.log_train_epoch_end_metrics(epoch_output)
      File "/home/u7701783/.local/lib/python3.8/site-packages/pytorch_lightning/trainer/connectors/logger_connector/logger_connector.py", line 325, in log_train_epoch_end_metrics
        self.log_metrics(epoch_log_metrics, {})
      File "/home/u7701783/.local/lib/python3.8/site-packages/pytorch_lightning/trainer/connectors/logger_connector/logger_connector.py", line 208, in log_metrics
        mem_map = memory.get_memory_profile(self.log_gpu_memory)
      File "/home/u7701783/.local/lib/python3.8/site-packages/pytorch_lightning/core/memory.py", line 365, in get_memory_profile
        memory_map = get_gpu_memory_map()
      File "/home/u7701783/.local/lib/python3.8/site-packages/pytorch_lightning/core/memory.py", line 384, in get_gpu_memory_map
        result = subprocess.run(
      File "/opt/conda/lib/python3.8/subprocess.py", line 516, in run
        raise CalledProcessError(retcode, process.args,
    subprocess.CalledProcessError: Command '['/usr/bin/nvidia-smi', '--query-gpu=memory.used', '--format=csv,nounits,noheader']' returned non-zero exit status 255.
    Epoch 80:  87%|████████▋ | 750/858 [04:57<00:42,  2.52it/s, loss=0.258]         
    
    opened by Suppersine 3
  • load_from_checkpoint error

        model = spacetimeformer_model.Spacetimeformer_Forecaster(d_x=6, d_y=6)
        model.load_from_checkpoint(check_point)
        data_module, inv_scaler, null_val = create_dset()
        trainer = pl.Trainer()

        trainer.test(model=model, datamodule=data_module)

    There is an error: RuntimeError: Error(s) in loading state_dict for Spacetimeformer_Forecaster: Unexpected key(s) in state_dict:

    Could you tell me how to solve it? Thank you very much!

    opened by SingleXuu 3
  • Question on dimensions of Datasets

    Hello -

    I am looking to try new datasets with your model, but I'm having a little trouble understanding the x_dim and y_dim values hard-coded into train.py.

    What exactly do each of these mean?

    For example, for the solar_energy dataset, I see that y_dim is 137 because it has 137 features, but where does the 6 come from?

    opened by ghost 3
  • Spacetimeformer invalid accelerator name

    I executed the following example from the README:

    python train.py spacetimeformer exchange --batch_size 32 --warmup_steps 1000 --d_model 200 --d_ff 700 --enc_layers 5 --dec_layers 6 --dropout_emb .1 --dropout_ff .3 --run_name pems-bay-spatiotemporal --base_lr 1e-3 --l2_coeff 1e-3 --loss mae --d_qk 30 --d_v 30 --n_heads 10 --patience 10 --decay_factor .8 --workers 4 --gpus 0
    

    I get the following error:

    Traceback (most recent call last):
      File "train.py", line 851, in <module>
        main(args)
      File "train.py", line 816, in main
        trainer = pl.Trainer(
      File "/opt/miniconda3/envs/spacetimeformer/lib/python3.8/site-packages/pytorch_lightning/utilities/argparse.py", line 345, in insert_env_defaults
        return fn(self, **kwargs)
      File "/opt/miniconda3/envs/spacetimeformer/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 433, in __init__
        self._accelerator_connector = AcceleratorConnector(
      File "/opt/miniconda3/envs/spacetimeformer/lib/python3.8/site-packages/pytorch_lightning/trainer/connectors/accelerator_connector.py", line 193, in __init__
        self._check_config_and_set_final_flags(
      File "/opt/miniconda3/envs/spacetimeformer/lib/python3.8/site-packages/pytorch_lightning/trainer/connectors/accelerator_connector.py", line 292, in _check_config_and_set_final_flags
        raise ValueError(
    ValueError: You selected an invalid accelerator name: `accelerator='dp'`. Available names are: cpu, cuda, hpu, ipu, mps, tpu.
    

    I see in the train.py file that accelerator is set to "dp". I don't see "dp" as a valid accelerator in the PyTorch Lightning documentation; "dp" is a data-parallel strategy, and the accelerator should be "gpu".

    opened by alre5639 2
  • Questions about Global-Attention and Decoder Fast-Attention

    Thanks for uploading the code. This is a very interesting project!

    I have a few questions surrounding some implementation details.

    1) Global-Attention: What is the purpose of the WindowTime function, and how does it help facilitate the concept of "global attention"?

    2) Decoder Fast-Attention: When the Performer option is selected, the FastAttention module for decoder self-attention is not instantiated with the causal flag set to True. This lets the attention mechanism of the forecast sequence attend to future tokens. Is this issue being addressed somewhere else?

    opened by ShuAiii 2
  • Time Emb Dim

    The default time embedding dimension is 6; however, the paper states that the value used was 12. The example training commands don't specify this hyperparameter. To recreate the results in the paper, should 12 be used? Would this hyperparameter lead to drastically different results?

    I also had to update a line in the classification_loss function in spacetimeformer_model.py: acc = torchmetrics.functional.accuracy(torch.softmax(logits, dim=1), labels, task='multiclass', num_classes=self.d_yc). I'm assuming this is due to versioning differences.

    opened by angmc 0
  • Getting error on PEMS-BAY dataset

    I downloaded the pems-bay.h5 file from https://zenodo.org/record/4263971 and used https://github.com/liyaguang/DCRNN/blob/master/scripts/generate_training_data.py to generate test.npz, train.npz, val.npz files in ./data/pems_bay/.

    When I run the command from the README.md python train.py spacetimeformer pems-bay --batch_size 32 --warmup_steps 1000 --d_model 200 --d_ff 700 --enc_layers 5 --dec_layers 6 --dropout_emb .1 --dropout_ff .3 --run_name pems-bay-spatiotemporal --base_lr 1e-3 --l2_coeff 1e-3 --loss mae --data_path ./data/pems_bay/ --d_qk 30 --d_v 30 --n_heads 10 --patience 10 --decay_factor .8

    I get the following error

    Traceback (most recent call last):
      File "/spacetimeformer-main/spacetimeformer/train.py", line 854, in <module>
        main(args)
      File "/spacetimeformer-main/spacetimeformer/train.py", line 758, in main
        ) = create_dset(args)
      File "/spacetimeformer-main/spacetimeformer/train.py", line 394, in create_dset
        data = stf.data.metr_la.METR_LA_Data(config.data_path)
      File "/spacetimeformer-main/spacetimeformer/data/metr_la/metr_la.py", line 43, in __init__
        x_c_train, y_c_train = self._split_set(context_train)
      File "/spacetimeformer-main/spacetimeformer/data/metr_la/metr_la.py", line 21, in _split_set
        time = 2.0 * x[:, :, 0] - 1.0
    IndexError: too many indices for array: array is 2-dimensional, but 3 were indexed

    opened by kadattack 2
  • Purpose of ignore_col Parameter?

    Hello -

    Thank you for all the improvements in the model, this is truly a very robust implementation.

    Quick question: what is the purpose of the ignore_columns parameter within the CSVTimeSeries class? Specifically, if you were targeting specific columns in a CSV dataset, how would ignoring specific columns change yc_dim and yt_dim?

    Thank you for answering a possibly simple question.

    Best

    opened by ghost 0
  • All forecast values close to mean value

    Hello Jake

    I have a model which uses spatial attention with GAT and temporal attention with Transformers. I am working on the PEMS-BAY dataset for my project.

    During training, the decoder is one-shot, meaning all the timesteps are fed into the decoder at the same time. When I print out the predictions, they are all near the mean value and do not capture the trends in the data.

    Are there any obvious issues that you see?

    I checked the position encoding and the attention weights from the graphs; I normalize with Z-score normalization and use inverse scaling before comparison.

    opened by ghost 1
  • How to interpret the forecast results of Toy2, exchange, NY-TX datasets

    Hello dear author! Recently, after my modifications, I tested the model on toy2, exchange, and NY-TX with the commands from the original text (used without modification) and reproduced the figures in the article. What are the prediction targets on each of these three datasets?

    (1) On the toy2 dataset, wandb records the test-set predictions at different time steps, with eight plots per time step. In the toy2 test set, how do we determine which part of the time period corresponds to the multivariate prediction output?

    (2) The output on exchange is also eight plots per time step. Do the eight plots correspond one-to-one to the exchange rates of eight different countries (using the other countries' data as multivariate input to predict each country's output)?

    (3) What is the prediction goal on the NY-TX dataset: is it to combine the data of these six stations to predict one of the stations? The article states that the experiment forecasts the temperature at three weather stations in Texas and three in New York, and Appendix C.1 shows the forecast plot for one of these stations (Figure 6). With so many plots, how do we determine which station is being predicted, and what is the approximate time period corresponding to the prediction?

    opened by GA12WAINM 1