Kaggle Lyft Motion Prediction for Autonomous Vehicles 4th place solution

Overview

Lyft Motion Prediction for Autonomous Vehicles: https://www.kaggle.com/c/lyft-motion-prediction-autonomous-vehicles

Code for the 4th place solution of Lyft Motion Prediction for Autonomous Vehicles on Kaggle.

Directory structure

input               --- Place the competition data here
src
|-ensemble          --- Scripts for 4. Ensemble
|-lib               --- Library code
|-modeling          --- Scripts for 1. Training, 2. Prediction and 3. Evaluation
  |-results         --- Training, prediction and evaluation results will be stored here
README.md           --- This instruction file
requirements.txt    --- Python library versions

Hardware (The following specs were used to create the original solution)

  • Ubuntu 18.04 LTS
  • 32 CPUs
  • 128GB RAM
  • 8 x NVIDIA Tesla V100 GPUs

Software (Python packages are detailed separately in requirements.txt):

  • Python 3.8.5
  • CUDA 10.1.243
  • cuDNN 7.6.5
  • NVIDIA drivers v.55.23.0

Equivalent Dockerfile for the GPU installs: use nvidia/cuda:10.1-cudnn7-devel-ubuntu18.04 as the base image.

We also installed OpenMPI==4.0.4 for running PyTorch distributed training.

Python Library

Deep learning framework, base library

  • torch==1.6.0+cu101
  • torchvision==0.7.0
  • l5kit==1.1.0
  • cupy-cuda101==7.0.0
  • pytorch-ignite==0.4.1
  • pytorch-pfn-extras==0.3.1

CNN models

Data processing/augmentation

  • albumentations==0.4.3
  • scikit-learn==0.22.2.post1

We also installed NVIDIA apex (https://github.com/nvidia/apex).

Please refer to requirements.txt for more details.

Environment Variables

We recommend setting the following environment variables for better performance.

export MKL_NUM_THREADS=1
export OMP_NUM_THREADS=1
export NUMEXPR_NUM_THREADS=1

Data setup

Please download the competition data:

For the lyft-motion-prediction-autonomous-vehicles dataset, extract it under the input/lyft-motion-prediction-autonomous-vehicles directory.

For the lyft-full-training-set data, which contains only train_full.zarr, place it under input/lyft-motion-prediction-autonomous-vehicles/scenes as follows:

input
|-lyft-motion-prediction-autonomous-vehicles
  |-scenes
    |-train_full.zarr (Place here!)
    |-train.zarr
    |-validate.zarr
    |-test.zarr
    |-... (other data)
  |-... (other data)
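
To check that the data is laid out correctly, you can open one of the zarr datasets with l5kit. This is just a quick sanity check, not part of the pipeline; the path below assumes the directory layout shown above.

from l5kit.data import ChunkedDataset, LocalDataManager

# Point l5kit at the competition data directory and open train.zarr.
dm = LocalDataManager("./input/lyft-motion-prediction-autonomous-vehicles")
zarr_dataset = ChunkedDataset(dm.require("scenes/train.zarr")).open()
print(len(zarr_dataset.scenes), "scenes,", len(zarr_dataset.agents), "agents")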

Pipeline

Our submission pipeline consists of 1. Training, 2. Prediction, 3. Evaluation (optional), and 4. Ensemble.

Training with training/validation dataset

The training script is located under src/modeling.

train_lyft.py is the training script, and the training configuration is specified by a flags yaml file.

[Note] If you want to run training from scratch, please remove the results folder first. The training script tries to resume from the results folder when resume_if_possible=True is set.

[Note] The first training run creates a cache so that subsequent data loading is efficient. This cache creation should be done in a single process, so please start with single-GPU training until the training loop begins. The cache is created directly under the input directory.

Once the cache is created, we can run multi-GPU training with the same train_lyft.py script via the mpiexec command.

$ cd src/modeling

# Single GPU training (Please run this the first time, to create the input data cache)
$ python train_lyft.py --yaml_filepath ./flags/20201104_cosine_aug.yaml

# Multi GPU training (-n 8 for 8 GPU training)
$ mpiexec -x MASTER_ADDR=localhost -x MASTER_PORT=8899 -n 8 \
  python train_lyft.py --yaml_filepath ./flags/20201104_cosine_aug.yaml
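
For reference, the following sketch shows how the MPI environment variables set by mpiexec can be mapped to torch.distributed initialization. This is an illustrative assumption about the distributed setup, not necessarily what train_lyft.py does internally.

import os
import torch
import torch.distributed as dist

# OpenMPI exposes process ranks via the OMPI_COMM_WORLD_* variables;
# MASTER_ADDR / MASTER_PORT are forwarded by the -x flags of mpiexec.
rank = int(os.environ.get("OMPI_COMM_WORLD_RANK", "0"))
world_size = int(os.environ.get("OMPI_COMM_WORLD_SIZE", "1"))
local_rank = int(os.environ.get("OMPI_COMM_WORLD_LOCAL_RANK", "0"))

torch.cuda.set_device(local_rank)
if world_size > 1:
    dist.init_process_group("nccl", init_method="env://",
                            rank=rank, world_size=world_size)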

We trained 9 different models for the final submission. Each training configuration can be found in src/modeling/flags, and the training results are stored in src/modeling/results.

Prediction for test dataset

predict_lyft.py under src/modeling executes the prediction for test data.

Specify --out as the trained model's results directory; the script uses the trained model in that directory for inference. Please set --convert_world_from_agent true when using l5kit==1.1.0 or later.

$ cd src/modeling
$ python predict_lyft.py --out results/20201104_cosine_aug --use_ema true --convert_world_from_agent true

Predicted results are stored under the --out directory. For example, results/20201104_cosine_aug/prediction_ema/submission.csv is created with the above setting.

We executed this prediction for all 9 trained models. This submission.csv file can be submitted as a single-model prediction.
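
The --convert_world_from_agent flag concerns converting agent-frame trajectories into world coordinates before the csv is written. Below is a rough sketch of that kind of conversion with l5kit; the function name to_world and the array shapes are assumptions for illustration, not the actual predict_lyft.py code.

import numpy as np
from l5kit.geometry import transform_points

def to_world(preds_agent, world_from_agent):
    # preds_agent: (n_modes, future_len, 2) trajectories in the agent frame.
    # world_from_agent: 3x3 agent-to-world transform taken from the l5kit sample.
    return np.stack([transform_points(p, world_from_agent) for p in preds_agent])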

(Optional) Evaluation with validation dataset

eval_lyft.py under src/modeling executes evaluation on the validation data (the chopped dataset).

$ python eval_lyft.py --out results/20201104_cosine_aug --use_ema true

The script shows validation error, which is useful for local evaluation of model performance.
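
For context, l5kit itself provides utilities for scoring a prediction csv against the chopped ground truth. The snippet below is a minimal sketch of that kind of evaluation; the file paths are placeholders, and this is not the exact logic of eval_lyft.py.

from l5kit.evaluation import compute_metrics_csv
from l5kit.evaluation.metrics import neg_multi_log_likelihood, time_displace

# compute_metrics_csv compares the prediction csv against the chopped
# ground-truth csv and returns a dict mapping metric name to its averaged value.
metrics = compute_metrics_csv("gt.csv", "submission.csv",
                              [neg_multi_log_likelihood, time_displace])
for name, value in metrics.items():
    print(name, value)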

Ensemble

Finally, all trained models' predictions are ensembled using GMM (Gaussian Mixture Model) fitting; see the sketch after the command below.

The ensemble script is located under src/ensemble.

# Please execute from root of this repository.
$ python src/ensemble/ensemble_test.py --yaml_filepath src/ensemble/flags/20201126_ensemble.yaml
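
The GMM fitting idea, roughly: each model contributes 3 trajectory modes with confidences per agent, and the pooled modes are re-fit into 3 ensemble modes. Below is a minimal per-agent sketch under those assumptions; the function ensemble_gmm is hypothetical and illustrative, not the actual ensemble_test.py implementation.

import numpy as np
from sklearn.mixture import GaussianMixture

def ensemble_gmm(trajs, confs, n_out_modes=3, n_samples=3000, seed=0):
    """Pool the trajectory modes of several models for one agent and re-fit a GMM.

    trajs: (n_models, n_modes, T, 2) predicted trajectories
    confs: (n_models, n_modes) per-mode confidences (each row sums to 1)
    Returns (n_out_modes, T, 2) trajectories and (n_out_modes,) confidences.
    """
    rng = np.random.default_rng(seed)
    n_models, n_modes, T, _ = trajs.shape
    flat = trajs.reshape(n_models * n_modes, T * 2)
    weights = confs.reshape(-1).astype(float)
    weights /= weights.sum()  # mixture weights over all pooled modes
    # Resample pooled modes proportionally to confidence, then fit the ensemble GMM.
    idx = rng.choice(len(flat), size=n_samples, p=weights)
    gmm = GaussianMixture(n_components=n_out_modes, covariance_type="diag",
                          random_state=seed)
    gmm.fit(flat[idx])
    return gmm.means_.reshape(n_out_modes, T, 2), gmm.weights_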

The location of the final ensembled submission.csv is specified in the yaml file. You can submit this submission.csv by uploading it as a Kaggle dataset and submitting via a Kaggle kernel. Please follow the discussion "Save your time, submit without kernel inference" for the submission procedure.
