Provides baselines and evaluation metrics for the task of traffic flow prediction

Overview

Note: This repo is adapted from https://github.com/UNIMIBInside/Smart-Mobility-Prediction.

Due to technical reasons, I did not fork their code.

Introduction

This repo provides implementations of baselines in the field of traffic flow prediction. Most of the published code in this field is too outdated to run, so I use Docker to save you from installing tedious frameworks and provide one-line commands to run the models end to end. Before running, make sure to copy the TaxiBJ dataset into the data folder. Check out the QuickStart, where I provide an out-of-the-box tutorial for using this repo!

Install the required frameworks with a few lines of code

git clone https://github.com/pengzhangzhi/Benchmark-Traffic-flow-prediction-.git
cd Benchmark-Traffic-flow-prediction-
docker pull tensorflow/tensorflow:2.4.3-gpu
# Mount the repository into the container and expose the GPUs
# (requires the NVIDIA Container Toolkit on the host).
docker run -it --gpus all -v "$PWD":/workspace -w /workspace tensorflow/tensorflow:2.4.3-gpu
# Inside the container:
pip install -r requirements.txt

Run Baselines

bash train_TaxiBJ.sh
bash train_TaxiNYC.sh
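Each training script reports evaluation metrics on the test set. For reference, the sketch below shows how RMSE and MAE are commonly computed on de-normalized inflow/outflow grids for this task; the array shapes and the random example data are illustrative assumptions, not the exact evaluation code of this repository.

import numpy as np

def rmse(y_true, y_pred):
    # Root mean squared error over all time steps and grid cells.
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def mae(y_true, y_pred):
    # Mean absolute error over all time steps and grid cells.
    return np.mean(np.abs(y_true - y_pred))

# Illustrative usage: arrays of shape (timesteps, 2, height, width), where
# channel 0 is inflow and channel 1 is outflow, already scaled back to
# real flow counts.
y_true = np.random.randint(0, 500, size=(100, 2, 32, 32)).astype(float)
y_pred = y_true + np.random.normal(0, 10, size=y_true.shape)
print("RMSE:", rmse(y_true, y_pred))
print("MAE: ", mae(y_true, y_pred))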

Repository structure

Each of the main folders is dedicated to a specific deep learning network. Some of them were taken and modified from other repositories associated with the source papers, while others are our original implementations. Here is an exhaustive list:

  • ST-ResNet. Folder for [1]. The original source code is here.
  • MST3D. Folder with our original implementation of the model described in [2].
  • Pred-CNN. Folder for [3]. The original repository is here.
  • ST3DNet. Folder for [4]. The starting-point code can be found here.
  • STAR. Folder for [5]. Source code was taken from here.
  • 3D-CLoST. Folder dedicated to a model created during another research project at Università Bicocca.
  • STDN. Folder referring to [6]. This folder is actually a copy of this repository, since it was never used in our experiments.
  • Autoencoder. Refers to the paper: Listening to the city, attentively: A Spatio-Temporal Attention Boosted Autoencoder for the Short-Term Flow Prediction Problem.

The contents of these folders can differ slightly from one another, according to the structure of the source repositories. Nevertheless, each of them contains all the code used to create the input flow volumes, to train and test the models for single-step prediction, and to evaluate performance on multi-step prediction and transfer learning experiments.
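For orientation, the sketch below illustrates the usual way these codebases build input flow volumes, following the closeness/period/trend scheme of [1]; the function name build_volumes, its default arguments, and the tensor shapes are illustrative assumptions rather than the exact preprocessing code of any specific folder.

import numpy as np

def build_volumes(flows, len_closeness=3, len_period=1, len_trend=1, slots_per_day=48):
    # flows: array of shape (T, 2, H, W); channel 0 = inflow, channel 1 = outflow.
    # Returns closeness (recent), period (daily), and trend (weekly) input
    # volumes plus the target frame for each predictable time step.
    period, trend = slots_per_day, slots_per_day * 7
    X_c, X_p, X_t, y = [], [], [], []
    start = max(len_closeness, len_period * period, len_trend * trend)
    for i in range(start, len(flows)):
        X_c.append(flows[i - len_closeness:i])
        X_p.append(np.stack([flows[i - j * period] for j in range(1, len_period + 1)]))
        X_t.append(np.stack([flows[i - j * trend] for j in range(1, len_trend + 1)]))
        y.append(flows[i])
    return np.asarray(X_c), np.asarray(X_p), np.asarray(X_t), np.asarray(y)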

The remaining folders are:

  • baselines. Contains the code implementing the Historical Average and ARIMA approaches to the traffic flow prediction problem (a minimal sketch of the Historical Average idea follows this list).
  • data. Folder where the source data should be placed.
  • helpers. Contains some helper code used for data visualization and for retrieving weather info through an external API.
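As a reference for the simplest of these baselines, here is a minimal sketch of the Historical Average idea: each grid cell is predicted as the mean of the training frames that share its weekly time slot. The function name and its assumption that the training data starts at the beginning of a week are illustrative; the repository's implementation may differ.

import numpy as np

def historical_average(train_flows, num_test_steps, slots_per_week=48 * 7):
    # train_flows: array of shape (T_train, 2, H, W); the test period is
    # assumed to start immediately after the training period.
    preds = []
    for t in range(num_test_steps):
        slot = (len(train_flows) + t) % slots_per_week
        same_slot_frames = train_flows[slot::slots_per_week]
        preds.append(same_slot_frames.mean(axis=0))
    return np.stack(preds)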

References

[1] Zhang, Junbo, Yu Zheng, and Dekang Qi. "Deep spatio-temporal residual networks for citywide crowd flows prediction." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 31. No. 1. 2017.

[2] Chen, Cen, et al. "Exploiting spatio-temporal correlations with multiple 3d convolutional neural networks for citywide vehicle flow prediction." 2018 IEEE International Conference on Data Mining (ICDM). IEEE, 2018.

[3] Xu, Ziru, et al. "PredCNN: Predictive Learning with Cascade Convolutions." IJCAI. 2018.

[4] Guo, Shengnan, et al. "Deep spatial–temporal 3D convolutional neural networks for traffic data forecasting." IEEE Transactions on Intelligent Transportation Systems 20.10 (2019): 3913-3926.

[5] Wang, Hongnian, and Han Su. "STAR: A concise deep learning framework for citywide human mobility prediction." 2019 20th IEEE International Conference on Mobile Data Management (MDM). IEEE, 2019.

[6] Yao, Huaxiu, et al. "Revisiting spatial-temporal similarity: A deep learning framework for traffic prediction." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 33. No. 01. 2019.

[7] Liu, Yang, et al. "Attention-based deep ensemble net for large-scale online taxi-hailing demand prediction." IEEE Transactions on Intelligent Transportation Systems 21.11 (2019): 4798-4807.

[8] Woo, Sanghyun, et al. "CBAM: Convolutional block attention module." Proceedings of the European Conference on Computer Vision (ECCV). 2018.
