Multivariate Time Series Forecasting with Efficient Transformers. Code for the paper "Long-Range Transformers for Dynamic Spatiotemporal Forecasting."

Overview

Spacetimeformer Multivariate Forecasting

This repository contains the code for the paper "Long-Range Transformers for Dynamic Spatiotemporal Forecasting" (Grigsby, Wang, and Qi, 2021). (arXiv)

[Figure: spatiotemporal embedding]

Transformers are a high-performance approach to sequence-to-sequence time series forecasting. However, stacking the values of every variable into each input token only allows the model to learn relationships across time, and can ignore important spatial relationships between variables. Our model (nicknamed "Spacetimeformer") flattens multivariate time series into extended sequences where each token represents the value of one variable at a given timestep. Long-range Transformers can then learn relationships over both time and space. For much more information, please refer to our paper.
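As a quick illustration of the flattening idea (a toy sketch; the shapes and names here are illustrative, not the repo's exact implementation):

import torch

# Standard multivariate format: one token per timestep, N variables stacked
T, N = 4, 3  # timesteps, variables
y = torch.randn(T, N)

# Spacetimeformer format: one token per (timestep, variable) pair
flat = y.reshape(T * N, 1)
time_idx = torch.arange(T).repeat_interleave(N)  # each token keeps its timestep...
var_idx = torch.arange(N).repeat(T)              # ...and its variable id

# Attention over this length-(T * N) sequence can relate any variable at any
# timestep to any other variable at any other timestep.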

We will be adding additional instructions, example commands and dataset links in the coming days.

Installation

This repository was written and tested for Python 3.8 and PyTorch 1.9.0.

git clone https://github.com/UVA-MachineLearningBioinformatics/spacetimeformer.git
cd spacetimeformer
conda create -n spacetimeformer python==3.8
source activate spacetimeformer
pip install -r requirements.txt
pip install -e .

This installs a Python package called spacetimeformer.
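A quick sanity check that the install worked (this assumes nothing beyond the package importing cleanly):

import spacetimeformer as stf
print(stf.__file__)  # should point into this cloned repo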

Dataset Setup

CSV datasets like AL Solar, NY-TX Weather, Exchange Rates, and the Toy example are included with the source code of this repo.

Larger datasets should be downloaded and their folders placed in the data/ directory. You can find them with this Google Drive link. Note that the metr-la and pems-bay data are taken directly from this repo - all we've done is skip a step for you and provide the raw train, val, test *.npz files our dataset code expects.

Recreating Experiments with Our Training Script

The main training functionality for spacetimeformer and most baselines considered in the paper can be found in the train.py script. The training loop is based on the pytorch_lightning framework.

Commandline instructions for each experiment can be found using the format: python train.py *model* *dataset* -h (e.g., python train.py spacetimeformer metr-la -h).

Model Names:

  • linear: a basic autoregressive linear model.
  • lstnet: a more typical RNN/Conv1D model for multivariate forecasting. Based on the attention-free implementation of LSTNet.
  • lstm: a typical encoder-decoder LSTM without attention. We use scheduled sampling to anneal teacher forcing throughout training.
  • mtgnn: a hybrid GNN that learns its graph structure from data. For more information, refer to the MTGNN paper. We use the implementation from pytorch_geometric_temporal.
  • spacetimeformer: the multivariate long-range transformer architecture discussed in our paper.
    • note that the "Temporal" ablation discussed in the paper is a special case of the spacetimeformer model: set --embed_method temporal. Spacetimeformer has many configurable options, and we try to provide a thorough explanation with the commandline -h instructions.

Dataset Names:

  • metr-la and pems-bay: traffic forecasting datasets. We use a very similar setup to DCRNN.
  • toy2: the toy dataset mentioned at the beginning of our experiments section. It is heavily based on the toy dataset in TPA-LSTM.
  • asos: the codebase's name for what the paper calls "NY-TX Weather."
  • solar_energy: the codebase's name for what is more commonly called "AL Solar."
  • exchange: a dataset of exchange rates. Spacetimeformer performs relatively well, but this is a tiny dataset of highly non-stationary data where linear is already a SOTA model.
  • precip: A challenging spatial message-passing task that we have not yet been able to solve. We collected daily precipitation data from a latitude-longitude grid over the Continental United States. The multivariate sequences are sampled from a ringed "radar" configuration as shown below in green. We expand the size of the dataset by randomly moving this radar around the country.

Logging with Weights and Biases

We used wandb to track all of our results during development, and you can do the same by hardcoding the correct organization/username and project name in the train.py file. Comments indicate the location near the top of the main method. wandb logging can then be enabled with the --wandb flag.

There are two automated figures that can be saved to wandb between epochs. These include the attention diagrams (e.g., Figure 4 of our paper) and prediction plots (e.g., Figure 6 of our paper). Enable attention diagrams with --attn_plot and prediction curves with --plot.

Example Spacetimeformer Training Commands

Toy Dataset

python train.py spacetimeformer toy2 --run_name spatiotemporal_toy2 \
--d_model 100 --d_ff 400 --enc_layers 4 --dec_layers 4 \
--gpus 0 1 2 3 --batch_size 32 --start_token_len 4 --n_heads 4 \
--grad_clip_norm 1 --early_stopping --trials 1

Metr-LA

python train.py spacetimeformer metr-la --start_token_len 3 --batch_size 32 \
--gpus 0 1 2 3 --grad_clip_norm 1 --d_model 128 --d_ff 512 --enc_layers 5 \
--dec_layers 4 --dropout_emb .3 --dropout_ff .3 --dropout_qkv 0 \
--run_name spatiotemporal_metr-la --base_lr 1e-3 --l2_coeff 1e-2

Temporal Attention Ablation with Negative Log Likelihood Loss on NY-TX Weather ("asos") with WandB Logging and Figures

python train.py spacetimeformer asos --context_points 160 --target_points 40 \
--start_token_len 8 --grad_clip_norm 1 --gpus 0 --batch_size 128 \
--d_model 200 --d_ff 800 --enc_layers 3 --dec_layers 3 \
--local_self_attn none --local_cross_attn none --l2_coeff .01 \
--dropout_emb .1 --run_name temporal_asos_160-40-nll --loss nll \
--time_resolution 1 --dropout_ff .2 --n_heads 8 --trials 3 \
--embed_method temporal --early_stopping --wandb --attn_plot --plot

Using Spacetimeformer in Other Applications

If you want to use our model in the context of other datasets or training loops, you will probably want to go a step lower than the spacetimeformer_model.Spacetimeformer_Forecaster pytorch_lightning wrapper. Please see spacetimeformer_model.nn.Spacetimeformer.

[Figure: architecture diagram]
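As a rough sketch of the forward-pass shapes (inferred from the issues below; constructor options are omitted, so see the class itself for the exact arguments):

import torch

# x_c / x_t hold time features; y_c / y_t hold variable values
batch, context_len, target_len = 8, 96, 24
d_x, d_y = 6, 7  # illustrative: 6 time features, 7 variables
x_c = torch.randn(batch, context_len, d_x)
y_c = torch.randn(batch, context_len, d_y)
x_t = torch.randn(batch, target_len, d_x)
y_t = torch.zeros(batch, target_len, d_y)  # zero-filled at inference time
# With a constructed model:
# output, (logits, labels), attn = model(x_c, y_c, x_t, y_t)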

Citation

If you use this model in academic work, please feel free to cite our paper:

@misc{grigsby2021longrange,
      title={Long-Range Transformers for Dynamic Spatiotemporal Forecasting}, 
      author={Jake Grigsby and Zhe Wang and Yanjun Qi},
      year={2021},
      eprint={2109.12218},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}

[Figure: spatiotemporal embedding diagram]

Comments
  • training_epoch_loop.py

    training_epoch_loop.py", line 486, in _update_learning_rates raise MisconfigurationException( pytorch_lightning.utilities.exceptions.MisconfigurationException: ReduceLROnPlateau conditioned on metric val/forecast_loss which is not available. Available metrics are: ['train/mape', 'train/mae', 'train/mse', 'train/rse', 'train/forecast_loss', 'train/class_loss', 'train/loss', 'train/acc']. Condition can be set using `monitor` key in lr scheduler dict

    training_epoch_loop.py", line 486, in _update_learning_rates raise MisconfigurationException( pytorch_lightning.utilities.exceptions.MisconfigurationException: ReduceLROnPlateau conditioned on metric val/forecast_loss which is not available. Available metrics are: ['train/mape', 'train/mae', 'train/mse', 'train/rse', 'train/forecast_loss', 'train/class_loss', 'train/loss', 'train/acc']. Condition can be set using monitor key in lr scheduler dict

    opened by jeevanu 6
  • "CUDA extension for cauchy multiplication not found" when running toy project

    Hi!

    I get the error "CUDA extension for cauchy multiplication not found" when running the command below. I have tried the suggested python setup.py install in the cauchy directory without success.

    python train.py spacetimeformer toy2 --run_name spatiotemporal_toy2 --d_model 100 --d_ff 400 --enc_layers 2 --dec_layers 2 --gpus 0 --batch_size 32 --start_token_len 4 --n_heads 4 --grad_clip_norm 1 --early_stopping --trials 1
    
    CUDA extension for cauchy multiplication not found. go to ./extensions/cauchy and try `python setup.py install`
    

    Any tips on how to fix this error?

    opened by christofer-f 4
  • How to get inference result?

    Thanks for your great work.

    I want to run inference with a trained spacetimeformer model. The model needs 4 inputs: (x_c, y_c, x_t, y_t). We only have x_c, y_c, and x_t ready.

    • y_t must be zeros with the right shape, right?

    • output, (logits, labels), attn = spacetimeformer(x_c, y_c, x_t, y_t). The output has two tensors: output.loc and output.scale. Which is the real output for y_t here, output.loc or output.scale?

      Thanks in advance.

    opened by TNA8 4
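    A minimal sketch of the inference pattern described in the question above (assuming a trained model and a probabilistic loss, where the returned distribution's loc is the point forecast and scale its predicted spread):

        import torch

        def forecast(model, x_c, y_c, x_t):
            # y_t is unknown at inference time, so pass zeros of the right shape
            y_t = torch.zeros(x_t.shape[0], x_t.shape[1], y_c.shape[-1])
            with torch.no_grad():
                output, (logits, labels), attn = model(x_c, y_c, x_t, y_t)
            # output.loc is the point forecast (mean); output.scale is the spread
            return output.loc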
  • Using the model to inference

    Hi,

    Thank you for this module. Is there an example for inference? I tried adding self.save_hyperparameters() to the Spacetimeformer_Forecaster init and then using the checkpoint as below, but this requires x_c, y_c, x_t, and y_t. Shouldn't inference only need the context and not the target? Is there some other step I need to take to produce a model that can be used for inference?

        model = Spacetimeformer_Forecaster.load_from_checkpoint(checkpoint_path=checkpoint_path)
    

    Thank you for your help.

    opened by threadthreasher 4
  • Question about embedding

    Hi, great job!

    I just have a quick question about the embedding part. Although positional embedding is a tradition in Transformer-based models, it seems you embed many different kinds of information into the original sequence additively. I am not sure if my understanding is correct: the concatenation form is easier for the network to learn at the cost of more parameters, and vice versa for the addition form. I believe a neural network has a certain capability to extract different information from the raw input and project it into a higher-order latent space, but could performance deteriorate if we add too much? And would a larger hidden dim mitigate this?

    BTW, I have also heard the explanation that (a+b)' = a' + b', meaning the addition and concatenation operations are equal from a gradient perspective. That contradicts my gut feeling.

    Thanks very much!

    opened by kpmokpmo 4
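    For intuition on the addition-vs-concatenation question above: concatenating two embeddings and applying one linear layer is algebraically identical to projecting each embedding separately and summing, as this small generic check shows (not the repo's embedding code):

        import torch
        import torch.nn as nn

        torch.manual_seed(0)
        a, b = torch.randn(1, 8), torch.randn(1, 8)

        proj = nn.Linear(16, 4, bias=False)
        concat_out = proj(torch.cat([a, b], dim=-1))

        # Split the same weight matrix into per-input blocks and sum the projections
        W_a, W_b = proj.weight[:, :8], proj.weight[:, 8:]
        add_out = a @ W_a.T + b @ W_b.T

        print(torch.allclose(concat_out, add_out, atol=1e-6))  # True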
  • Issue while loading a model

    Hi, after training I'm trying to save the model and load it from the checkpoint, but I get size-mismatch errors. I also tried loading the model as soon as training was over, but I still have the issue. It looks like the same issue as #13.

    The code:

        # Test from the code
        trainer.test(datamodule=data_module, ckpt_path="best")
        # Added only these two lines
        trainer.save_checkpoint("best.ckpt")
        forecaster = forecaster.load_from_checkpoint(checkpoint_path="best.ckpt")
    

    And I got a lot of size-mismatch errors:

    RuntimeError: Error(s) in loading state_dict for Spacetimeformer_Forecaster:
    	size mismatch for spacetimeformer.enc_embedding.x_emb.embed_weight: copying a param with shape torch.Size([7, 6]) from checkpoint, the shape in current model is torch.Size([5, 6]).
    	size mismatch for spacetimeformer.enc_embedding.y_emb.weight: copying a param with shape torch.Size([512, 43]) from checkpoint, the shape in current model is torch.Size([512, 31]).
    	size mismatch for spacetimeformer.enc_embedding.var_emb.weight: copying a param with shape torch.Size([18, 512]) from checkpoint, the shape in current model is torch.Size([1, 512]).
    	size mismatch for spacetimeformer.dec_embedding.x_emb.embed_weight: copying a param with shape torch.Size([7, 6]) from checkpoint, the shape in current model is torch.Size([5, 6]).
    	size mismatch for spacetimeformer.dec_embedding.y_emb.weight: copying a param with shape torch.Size([512, 43]) from checkpoint, the shape in current model is torch.Size([512, 31]).
    	size mismatch for spacetimeformer.dec_embedding.var_emb.weight: copying a param with shape torch.Size([18, 512]) from checkpoint, the shape in current model is torch.Size([1, 512]).
    	size mismatch for spacetimeformer.classifier.weight: copying a param with shape torch.Size([18, 512]) from checkpoint, the shape in current model is torch.Size([1, 512]).
    	size mismatch for spacetimeformer.classifier.bias: copying a param with shape torch.Size([18]) from checkpoint, the shape in current model is torch.Size([1]).
    

    What am I doing wrong?

    Also, I didn't get what --start_token_len does:

    Length of decoder start token. Adds this many of the final context points to the start of the target sequence.
    

    For example, if I have one data point every hour and start_token_len = 3, will it predict for the 3 hours from now, and train to predict this value?

    Best Regards ! And thanks for the model !

    opened by jgcb00 3
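    Regarding the --start_token_len question above: the quoted help text implies the start token is a warm-up segment copied from the end of the context, rather than an extra forecast horizon. A toy sketch of that reading (illustrative only):

        import torch

        context = torch.arange(10).float()  # e.g., 10 hourly observations
        start_token_len = 3

        # Per the help text, the final `start_token_len` context points are
        # prepended to the decoder's target sequence as a "start token":
        decoder_prefix = context[-start_token_len:]
        print(decoder_prefix)  # tensor([7., 8., 9.])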
  • GPU memory leak

    After 80 epochs (8 hours), I got this error

    Epoch 80:  87%|████████████████▌  | 750/858 [04:56<00:42,  2.53it/s, loss=0.258]Traceback (most recent call last):
      File "train.py", line 442, in <module>
        main(args)
      File "train.py", line 424, in main
        trainer.fit(forecaster, datamodule=data_module)
      File "/home/u7701783/.local/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 460, in fit
        self._run(model)
      File "/home/u7701783/.local/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 758, in _run
        self.dispatch()
      File "/home/u7701783/.local/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 799, in dispatch
        self.accelerator.start_training(self)
      File "/home/u7701783/.local/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py", line 96, in start_training
        self.training_type_plugin.start_training(trainer)
      File "/home/u7701783/.local/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 144, in start_training
        self._results = trainer.run_stage()
      File "/home/u7701783/.local/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 809, in run_stage
        return self.run_train()
      File "/home/u7701783/.local/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 871, in run_train
        self.train_loop.run_training_epoch()
      File "/home/u7701783/.local/lib/python3.8/site-packages/pytorch_lightning/trainer/training_loop.py", line 569, in run_training_epoch
        self.trainer.logger_connector.log_train_epoch_end_metrics(epoch_output)
      File "/home/u7701783/.local/lib/python3.8/site-packages/pytorch_lightning/trainer/connectors/logger_connector/logger_connector.py", line 325, in log_train_epoch_end_metrics
        self.log_metrics(epoch_log_metrics, {})
      File "/home/u7701783/.local/lib/python3.8/site-packages/pytorch_lightning/trainer/connectors/logger_connector/logger_connector.py", line 208, in log_metrics
        mem_map = memory.get_memory_profile(self.log_gpu_memory)
      File "/home/u7701783/.local/lib/python3.8/site-packages/pytorch_lightning/core/memory.py", line 365, in get_memory_profile
        memory_map = get_gpu_memory_map()
      File "/home/u7701783/.local/lib/python3.8/site-packages/pytorch_lightning/core/memory.py", line 384, in get_gpu_memory_map
        result = subprocess.run(
      File "/opt/conda/lib/python3.8/subprocess.py", line 516, in run
        raise CalledProcessError(retcode, process.args,
    subprocess.CalledProcessError: Command '['/usr/bin/nvidia-smi', '--query-gpu=memory.used', '--format=csv,nounits,noheader']' returned non-zero exit status 255.
    Epoch 80:  87%|████████▋ | 750/858 [04:57<00:42,  2.52it/s, loss=0.258]         
    
    opened by Suppersine 3
  • load_from_checkpoint error

        model = spacetimeformer_model.Spacetimeformer_Forecaster(d_x=6, d_y=6)
        model.load_from_checkpoint(check_point)
        data_module, inv_scaler, null_val = create_dset()
        trainer = pl.Trainer()
        trainer.test(model=model, datamodule=data_module)

    There is an error: RuntimeError: Error(s) in loading state_dict for Spacetimeformer_Forecaster: Unexpected key(s) in state_dict:

    Could you tell me how to solve it? Thank you very much!

    opened by SingleXuu 3
  • Question on dimensions of Datasets

    Hello -

    I am looking to try new datasets with your model, but I'm having a little trouble understanding the x_dim and y_dim hard-coded into train.py.

    What exactly do each of these mean?

    For example, for the solar_energy dataset, I see that the y_dim is 137 because it has 137 features, but where does the 6 come from?

    opened by ghost 3
  • Spacetimeformer invalid accelerator name

    I executed the following example from the README:

    python train.py spacetimeformer exchange --batch_size 32 --warmup_steps 1000 --d_model 200 --d_ff 700 --enc_layers 5 --dec_layers 6 --dropout_emb .1 --dropout_ff .3 --run_name pems-bay-spatiotemporal --base_lr 1e-3 --l2_coeff 1e-3 --loss mae --d_qk 30 --d_v 30 --n_heads 10 --patience 10 --decay_factor .8 --workers 4 --gpus 0
    

    I get the following error:

    Traceback (most recent call last):
      File "train.py", line 851, in <module>
        main(args)
      File "train.py", line 816, in main
        trainer = pl.Trainer(
      File "/opt/miniconda3/envs/spacetimeformer/lib/python3.8/site-packages/pytorch_lightning/utilities/argparse.py", line 345, in insert_env_defaults
        return fn(self, **kwargs)
      File "/opt/miniconda3/envs/spacetimeformer/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 433, in __init__
        self._accelerator_connector = AcceleratorConnector(
      File "/opt/miniconda3/envs/spacetimeformer/lib/python3.8/site-packages/pytorch_lightning/trainer/connectors/accelerator_connector.py", line 193, in __init__
        self._check_config_and_set_final_flags(
      File "/opt/miniconda3/envs/spacetimeformer/lib/python3.8/site-packages/pytorch_lightning/trainer/connectors/accelerator_connector.py", line 292, in _check_config_and_set_final_flags
        raise ValueError(
    ValueError: You selected an invalid accelerator name: `accelerator='dp'`. Available names are: cpu, cuda, hpu, ipu, mps, tpu.
    

    I see that in the train.py file the accelerator is set to "dp". I don't see "dp" as a valid accelerator name in the PyTorch Lightning documentation; "dp" is a data-parallel strategy, and the accelerator should be "gpu".

    opened by alre5639 2
  • Questions about Global-Attention and Decoder Fast-Attention

    Thanks for uploading the code. This is a very interesting project!

    I have a few questions surrounding some implementation details.

    1) Global-Attention: What is the purpose of the WindowTime function, and how does it help facilitate the concept of "global attention"?

    2) Decoder Fast-Attention: When the Performer option is selected, the FastAttention module for decoder self-attention is not instantiated with the causal flag set to True. This lets the attention mechanism of the forecast sequence attend to future tokens. Is this issue being addressed somewhere else?

    opened by ShuAiii 2
  • Time Emb Dim

    The default time emb dim is 6; however, the paper states that the value used was 12. The example training commands don't specify this hyperparameter. To recreate the results in the paper, should 12 be used? Would results differ drastically due to this hyperparameter?

    I also had to update a line in the classification_loss function in spacetimeformer_model.py, presumably due to versioning differences:

        acc = torchmetrics.functional.accuracy(
            torch.softmax(logits, dim=1), labels, task="multiclass", num_classes=self.d_yc
        )

    opened by angmc 0
  • Getting error on PEMS-BAY dataset

    I downloaded the pems-bay.h5 file from https://zenodo.org/record/4263971 and used https://github.com/liyaguang/DCRNN/blob/master/scripts/generate_training_data.py to generate test.npz, train.npz, val.npz files in ./data/pems_bay/.

    When I run the command from the README.md:

        python train.py spacetimeformer pems-bay --batch_size 32 --warmup_steps 1000 --d_model 200 --d_ff 700 --enc_layers 5 --dec_layers 6 --dropout_emb .1 --dropout_ff .3 --run_name pems-bay-spatiotemporal --base_lr 1e-3 --l2_coeff 1e-3 --loss mae --data_path ./data/pems_bay/ --d_qk 30 --d_v 30 --n_heads 10 --patience 10 --decay_factor .8

    I get the following error:

    Traceback (most recent call last):
      File "/spacetimeformer-main/spacetimeformer/train.py", line 854, in <module>
        main(args)
      File "/spacetimeformer-main/spacetimeformer/train.py", line 758, in main
        ) = create_dset(args)
      File "/spacetimeformer-main/spacetimeformer/train.py", line 394, in create_dset
        data = stf.data.metr_la.METR_LA_Data(config.data_path)
      File "/spacetimeformer-main/spacetimeformer/data/metr_la/metr_la.py", line 43, in __init__
        x_c_train, y_c_train = self._split_set(context_train)
      File "/spacetimeformer-main/spacetimeformer/data/metr_la/metr_la.py", line 21, in _split_set
        time = 2.0 * x[:, :, 0] - 1.0
    IndexError: too many indices for array: array is 2-dimensional, but 3 were indexed

    opened by kadattack 2
  • Purpose of ignore_col Parameter?

    Hello -

    Thank you for all the improvements in the model, this is truly a very robust implementation.

    Quick question: what is the purpose of the ignore_columns parameter within the CSVTimeSeries class? Specifically, if you were targeting specific columns in a CSV dataset, how would ignoring specific columns change yc_dim and yt_dim?

    Thank you for answering a possible simple question.

    Best

    opened by ghost 0
  • All forecast values close to mean value

    Hello Jake

    I have a model which uses spatial attention with GAT and temporal attention with Transformers. I am working on the PEMS-BAY dataset for my project.

    During training, the decoder is one-shot, meaning all the timesteps are fed into the decoder at the same time. When I print out the predictions, they are all near the mean value and do not capture the trends in the data.

    Are there any obvious issues that you see?

    I checked the positional encoding and the attention weights from the graphs, normalized with z-score normalization, and used inverse scaling before comparison.

    opened by ghost 1
  • How to interpret the forecast results of Toy2, exchange, NY-TX datasets

    Hello dear author! After making my modifications, I tested the model on toy2, exchange, and NY-TX following the original text and reproduced the figures from the article. What are the respective prediction targets on these three datasets (using the specified commands without modification)?

    (1) In the toy2 test, wandb records the prediction results of the test set at different time steps, with eight plots per time step. In the toy2 test set, how do I determine which part of the time period corresponds to the multivariate prediction output?

    (2) The output on exchange is also eight plots per time step. Do the eight plots correspond one-to-one to the exchange rates of eight different countries (using the other countries' data as multivariate input to predict one country's output)?

    (3) What is the prediction goal on the NY-TX dataset: is it to combine the data of the six stations to predict one of them? The article states that the experiment forecasts the temperature of three weather stations in Texas and three in New York, and Appendix C.1 shows the forecast plot for one of these stations (Figure 6). With so many plots, how do we determine which station is being predicted, and what is the approximate time period corresponding to the prediction?

    opened by GA12WAINM 1