
🌈 ERASOR (RA-L'21 with ICRA Option)

Official page of "ERASOR: Egocentric Ratio of Pseudo Occupancy-based Dynamic Object Removal for Static 3D Point Cloud Map Building", which has been accepted to RA-L with the ICRA'21 option [Demo Video].

(Figure: overview)

We provide all contents, including

  • Source code of ERASOR
  • All outputs of the state-of-the-art methods
  • Visualization
  • Calculation code for Preservation Rate (PR) / Rejection Rate (RR)

So enjoy our code! :)

Contact: Hyungtae Lim ([email protected])

Advisor: Hyun Myung ([email protected])

Contents

  1. Test Env.
  2. Requirements
  3. How to Run ERASOR
  4. Calculate PR/RR
  5. Benchmark
  6. Run Your Own Code
  7. Visualization of All the State-of-the-arts
  8. Citation

Test Env.

The code was tested successfully on

  • Ubuntu 18.04 LTS
  • ROS Melodic

Requirements

ROS Setting

  • Install ROS on a machine.
  • Also, jsk-visualization is required to visualize Scan Ratio Test (SRT) status.
sudo apt-get install ros-melodic-jsk-recognition
sudo apt-get install ros-melodic-jsk-common-msgs
sudo apt-get install ros-melodic-jsk-rviz-plugins

Building Our Package

mkdir -p ~/catkin_ws/src
cd ~/catkin_ws/src
git clone https://github.com/LimHyungTae/ERASOR.Official.git
cd .. && catkin build erasor 

Python Setting

  • Our metric calculation code for PR/RR is implemented in Python 2.7.
  • To run the Python code, the following packages are necessary: pypcd, tqdm, scikit-learn, and tabulate.
pip install pypcd
pip install tqdm	
pip install scikit-learn
pip install tabulate

Prepared dataset

  • Download the preprocessed KITTI data encoded into rosbag.
  • The downloading process might take five minutes or so. All rosbags require about 2.3 GB of storage space in total.
wget https://urserver.kaist.ac.kr/publicdata/erasor/rosbag/00_4390_to_4530_w_interval_2_node.bag
wget https://urserver.kaist.ac.kr/publicdata/erasor/rosbag/01_150_to_250_w_interval_1_node.bag
wget https://urserver.kaist.ac.kr/publicdata/erasor/rosbag/02_860_to_950_w_interval_2_node.bag
wget https://urserver.kaist.ac.kr/publicdata/erasor/rosbag/05_2350_to_2670_w_interval_2_node.bag
wget https://urserver.kaist.ac.kr/publicdata/erasor/rosbag/07_630_to_820_w_interval_2_node.bag

Description of Preprocessed Rosbag Files

  • Please note that each rosbag consists of node messages. Refer to msg/node.msg.
  • Note that the label of each point is stored in the intensity field for the sake of convenience.
  • We set the following classes as dynamic classes:
# 252: "moving-car"
# 253: "moving-bicyclist"
# 254: "moving-person"
# 255: "moving-motorcyclist"
# 256: "moving-on-rails"
# 257: "moving-bus"
# 258: "moving-truck"
# 259: "moving-other-vehicle"
  • Please refer to std::vector DYNAMIC_CLASSES in our code :). A minimal label-checking sketch is given below.
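
For illustration only, here is a minimal Python sketch (not part of the repository) that counts dynamic points in a labeled pcd file by checking the intensity field against the dynamic classes above. The file name is hypothetical, and the x/y/z/intensity field layout is an assumption.

```python
# Hypothetical sketch: count dynamic points in a labeled pcd file.
# Assumes labels are stored in the "intensity" field; the file name is made up.
import numpy as np
import pypcd

DYNAMIC_CLASSES = [252, 253, 254, 255, 256, 257, 258, 259]

pc = pypcd.PointCloud.from_path("05_dense_gt.pcd")  # hypothetical file name
labels = pc.pc_data["intensity"].astype(np.int32)
num_dynamic = np.in1d(labels, DYNAMIC_CLASSES).sum()
print("Dynamic points: %d / %d" % (num_dynamic, labels.size))
```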

How to Run ERASOR

We will explain how to run our code on seq 05 of the KITTI dataset as an example.

Step 1. Build naive map

(Figure: kittimapgen)

  • Set the following parameters in launch/mapgen.launch.
    • target_rosbag: The name of target rosbag, e.g. 05_2350_to_2670_w_interval_2_node.bag
    • save_path: The path where the naively accumulated map is saved.
  • Launch mapgen.launch and play corresponding rosbag on the other bash as follows:
roscore # (Optional)
roslaunch erasor mapgen.launch
rosbag play 05_2350_to_2670_w_interval_2_node.bag
  • Then, the dense map and the voxelized map are automatically saved at the save path. Note that the dense map is used to fill in the corresponding labels (HERE). The voxelized map will be the input of Step 2 as the naively accumulated map.

Step 2. Run ERASOR

(Figure: erasor)

  • Set the following parameters in config/seq_05.yaml.

    • initial_map_path: The path of naively accumulated map
    • save_path: The path where the filtered static map is saved.
  • Run each of the following commands in a separate bash terminal.

roscore # (Optional)
roslaunch erasor run_erasor.launch target_seq:="05"
rosbag play 05_2350_to_2670_w_interval_2_node.bag
  • IMPORTANT: After ERASOR finishes running, run the following command in another bash terminal to save the static map as a pcd file.
  • "0.2" denotes the voxelization size.
rostopic pub /saveflag std_msgs/Float32 "data: 0.2"
  • Then, you can see the printed command as follows:

(Figure: fig_command)

  • The results will be saved under the save_path folder, i.e., $save_path$/05_result.pcd. A quick sanity check of the saved map is sketched below.
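
As a quick sanity check, the following minimal sketch (not part of the repository) compares the point counts of the naively accumulated map and the filtered result with pypcd. Both paths and the voxelized map's file name are placeholders; substitute the files in your own save_path.

```python
# Hypothetical sanity check: how many points did ERASOR remove?
# Both paths are placeholders for files under your own save_path.
import pypcd

naive = pypcd.PointCloud.from_path("/path/to/save_path/05_voxelized.pcd")  # placeholder name
result = pypcd.PointCloud.from_path("/path/to/save_path/05_result.pcd")

print("Naive map:  %d points" % len(naive.pc_data))
print("Static map: %d points" % len(result.pc_data))
print("Removed:    %d points" % (len(naive.pc_data) - len(result.pc_data)))
```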

Calculate PR/RR

You can check our results directly.

  • First, download all pcd materials.
wget https://urserver.kaist.ac.kr/publicdata/erasor/erasor_paper_pcds.zip
unzip erasor_paper_pcds.zip

Then, run the analysis code as follows:

python analysis.py --gt $GT_PCD_PATH$ --est $EST_PCD_PATH$

E.g.,

python analysis.py --gt /home/shapelim/erasor_paper_pcds/gt/05_voxel_0_2.pcd --est /home/shapelim/erasor_paper_pcds/estimate/05_ERASOR.pcd

NOTE: To estimate PR/RR precisely, it is better to use the denser pcd file, which is generated in the mapgen.launch procedure.
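
For reference, the sketch below illustrates the idea behind PR/RR: PR is the fraction of ground-truth static points that survive in the estimated map, and RR is the fraction of ground-truth dynamic points that were removed. This is a simplified stand-in, not the actual analysis.py implementation; the nearest-neighbor matching, the distance threshold, and the assumption that the ground-truth pcd stores labels in its intensity field are all guesses.

```python
# Simplified PR/RR sketch (not the repository's analysis.py).
# Assumes the ground-truth labels live in the "intensity" field, dynamic labels
# are 252-259, and a ground-truth point counts as "preserved" if the estimated
# map contains a point within dist_thr of it (the threshold value is a guess).
import numpy as np
import pypcd
from sklearn.neighbors import KDTree

DYNAMIC_CLASSES = [252, 253, 254, 255, 256, 257, 258, 259]

def load_cloud(path):
    pc = pypcd.PointCloud.from_path(path)
    xyz = np.stack([pc.pc_data["x"], pc.pc_data["y"], pc.pc_data["z"]], axis=1)
    return pc, xyz

def pr_rr(gt_path, est_path, dist_thr=0.2):
    gt_pc, gt_xyz = load_cloud(gt_path)
    _, est_xyz = load_cloud(est_path)
    gt_label = gt_pc.pc_data["intensity"].astype(np.int32)

    dist, _ = KDTree(est_xyz).query(gt_xyz, k=1)
    preserved = dist[:, 0] < dist_thr               # ground-truth point still exists in the estimate
    is_dynamic = np.in1d(gt_label, DYNAMIC_CLASSES)

    pr = preserved[~is_dynamic].mean()              # static points kept
    rr = 1.0 - preserved[is_dynamic].mean()         # dynamic points removed
    return 100.0 * pr, 100.0 * rr

print(pr_rr("gt/05_voxel_0_2.pcd", "estimate/05_ERASOR.pcd"))
```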

Benchmark

  • The PR/RR values below are slightly different from those reported in the paper:

    Seq. PR [%] RR [%]
    00 91.72 97.00
    01 91.93 94.63
    02 81.08 99.11
    05 86.98 97.88
    07 92.00 98.33
  • But don't worry, we provide all pcd files! See the Visualization of All the State-of-the-arts section.

Run Your Own Code

⚠️ TBU: The code is already in this repository, yet the explanation is incomplete.

Visualization of All the State-of-the-arts

  • First, download all pcd materials.
wget https://urserver.kaist.ac.kr/publicdata/erasor/erasor_paper_pcds.zip
unzip erasor_paper_pcds.zip
  • Set the parameters in config/viz_params.yaml correctly.

    • abs_dir: The absolute path of the pcd directory
    • seq: Target sequence (00, 01, 02, 05, or 07)
  • After setting the parameters, run the following command:

roslaunch erasor compare_results.launch

Citation

If you use our code or method in your work, please consider citing the following:

@article{lim2021erasor,
  title={ERASOR: Egocentric Ratio of Pseudo Occupancy-Based Dynamic Object Removal for Static 3D Point Cloud Map Building},
  author={Lim, Hyungtae and Hwang, Sungwon and Myung, Hyun},
  journal={IEEE Robotics and Automation Letters},
  volume={6},
  number={2},
  pages={2272--2279},
  year={2021},
  publisher={IEEE}
}