Multi-View Radar Semantic Segmentation

Overview

Paper

[Teaser schema figure from the paper]

Multi-View Radar Semantic Segmentation, ICCV 2021.

Arthur Ouaknine, Alasdair Newson, Patrick Pérez, Florence Tupin, Julien Rebut

This repository groups the implementations of the MV-Net and TMVA-Net architectures proposed in the paper of Ouaknine et al.

The models are trained and tested on the CARRADA dataset.

The CARRADA dataset is available on Arthur Ouaknine's personal web page at this link: https://arthurouaknine.github.io/codeanddata/carrada.

If you find this code useful for your research, please cite our paper:

@misc{ouaknine2021multiview,
      title={Multi-View Radar Semantic Segmentation},
      author={Arthur Ouaknine and Alasdair Newson and Patrick Pérez and Florence Tupin and Julien Rebut},
      year={2021},
      eprint={2103.16214},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

Installation with Docker

It is strongly recommended to use Docker with the provided Dockerfile, which contains all the dependencies.

  1. Clone the repo:
$ git clone https://github.com/ArthurOuaknine/MVRSS.git
  2. Create the Docker image:
$ cd MVRSS/
$ docker build . -t "mvrss:Dockerfile"

Note: by default, the CARRADA dataset used for training and testing is assumed to be already downloaded. If it is not, you can uncomment the corresponding command lines in the Dockerfile or follow the guidelines of the dedicated repository.

  3. Run a container and start an interactive session. Note that the option -v /host_path:/local_path mounts a volume (a shared memory space) between the host machine and the Docker container, avoiding the need to copy data (logs and datasets). You will be able to run the code in this session:
$ docker run -d --ipc=host -it -v /host_machine_path/datasets:/home/datasets_local -v /host_machine_path/logs:/home/logs --name mvrss --gpus all mvrss:Dockerfile sleep infinity
$ docker exec -it mvrss bash

Installation without Docker

You can either use Docker with the provided Dockerfile, which contains all the dependencies, or follow these steps.

  1. Clone the repo:
$ git clone https://github.com/ArthurOuaknine/MVRSS.git
  2. Install this repository using pip:
$ cd MVRSS/
$ pip install -e .

This way, you can edit the MVRSS code on the fly and import functions and classes of MVRSS in other projects as well (see the import sketch at the end of this section).

  3. Install all the dependencies using pip and conda; please take a look at the Dockerfile for the list and versions of the dependencies.

  4. Optional: to uninstall this package, run:

$ pip uninstall MVRSS

You can take a look at the Dockerfile if you are uncertain about any of these installation steps.
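
As an illustration of the editable install, here is a minimal sketch of importing MVRSS from another project. The module path and constructor arguments below are assumptions made for illustration only; check the mvrss/ package for the actual API.

# Hedged sketch: the import path and constructor arguments below are
# assumptions for illustration, not the verified MVRSS API.
from mvrss.models import TMVANet  # assumed location of the model class

net = TMVANet(n_classes=4, n_frames=5)  # illustrative arguments only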

Running the code

In any case, you must first specify both the path where the CARRADA dataset is located and the path where logs and models will be stored. For example, if the Carrada folder is placed in /home/datasets_local, the path to specify is /home/datasets_local; likewise, specify /home/logs if that is where your logs are stored. Please run the following command lines, adapting the paths to your settings:

$ cd MVRSS/mvrss/utils/
$ python set_paths.py --carrada /home/datasets_local --logs /home/logs

Training

To train a model, a JSON configuration file must be provided. The configuration file with the parameters used to train the TMVA-Net architecture is provided here: MVRSS/mvrss/config_files/tmvanet.json. To train the TMVA-Net architecture, please run the following command lines:

$ cd MVRSS/mvrss/
$ python train.py --cfg config_files/tmvanet.json

If you want to train the MV-Net architecture (baseline), please use the corresponding configuration file: mvnet.json.
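
If you want to experiment with other parameters, one option is to load the provided JSON file, modify it, and save a copy before training on it. A minimal sketch, run from MVRSS/mvrss/ (the key name below is a hypothetical example, not a documented field; open tmvanet.json to see the real schema):

# Hedged sketch: inspect and tweak a training configuration.
import json

with open("config_files/tmvanet.json") as f:
    cfg = json.load(f)

print(sorted(cfg.keys()))  # list the exposed parameters
# cfg["nb_epochs"] = 100   # hypothetical key, shown for illustration only

with open("config_files/my_tmvanet.json", "w") as f:
    json.dump(cfg, f, indent=4)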

Testing

To test a saved model, specify the path to the configuration file recorded in your log folder during training. For example, if your log path has been set to /home/logs, the path to specify is /home/logs/carrada/tmvanet/name_of_the_model/config.json. Then execute the following command lines:

$ cd MVRSS/mvrss/
$ python test.py --cfg /home/logs/carrada/tmvanet/name_of_the_model/config.json

Note: the current implementation of this script will generate qualitative results in your log folder. You can disable this behavior by setting get_quali=False in the parameters of the predict() method of the Tester() class.
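
For reference, a minimal sketch of what such a call could look like; the import path and the other arguments are assumptions, and only the get_quali flag is documented in the note above:

# Hedged sketch: disable qualitative outputs at test time.
from mvrss.learners.tester import Tester  # assumed module path

tester = Tester(cfg)  # cfg: the configuration loaded from config.json
metrics = tester.predict(net, test_loader, get_quali=False)  # skip saving plots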

Acknowledgements

License

The MVRSS repo is released under the Apache 2.0 license.

Comments
  • Sensor set up

    Hi, in the paper, Section 2.1 (Automotive radar sensing), you say:

    With conventional FMCW radars the RAD tensor is usually not available as it is too computing intensive to estimate.

    So what is the difference between conventional FMCW radars and other FMCW radars?

    In addition, what are the camera and radar sensor setups of the CARRADA dataset? And is the network's runtime (ms) fast enough for on-road online use?

    Thank you, I hope you can give me some advice.

    opened by enting8696 1
  • metrics calculation on some frames without foreground pixels

    Hi, I have a question about the calculation of some metrics, including IoU, Dice, precision, and recall. In your code, I think you sum all frames' confusion matrices together to obtain the metrics you want. But I found that the dataset contains some frames without any foreground pixels, for example:

    [screenshot of a frame without foreground pixels]

    A frame without foreground pixels will give a value of 0 for the above metrics, so I am afraid the performance of the model is actually underestimated. I wonder if it would be more reasonable to exclude frames without foreground pixels? (See the confusion-matrix sketch after these comments.)

    opened by james20141606 1
  • test results

    Thanks for your great work. When I use your pretrained weights in test.py, I can only get an mIoU of 58.2 in the test_result.json file, which is 12 percentage points worse than the metrics in the result.json file. Can you help me clear up this confusion?

    opened by sutiankang 0
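
Regarding the metrics comment above: summing per-frame confusion matrices into a single global matrix sidesteps the zero-score issue, since an empty frame simply contributes nothing to the foreground rows rather than a score of 0. A hedged sketch of this aggregation, independent of the repository's actual implementation (the class count is an assumption):

# Hedged sketch: IoU from an accumulated confusion matrix (not the repo's code).
import numpy as np

def iou_per_class(conf):
    """Per-class IoU from a (num_classes x num_classes) confusion matrix."""
    tp = np.diag(conf).astype(float)
    fp = conf.sum(axis=0) - tp  # predicted as class c but labeled otherwise
    fn = conf.sum(axis=1) - tp  # labeled class c but predicted otherwise
    denom = tp + fp + fn
    return np.where(denom > 0, tp / np.maximum(denom, 1), 0.0)

global_conf = np.zeros((4, 4), dtype=np.int64)  # 4 classes assumed for CARRADA
# for each frame: global_conf += frame_confusion_matrix
# print(iou_per_class(global_conf))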
Releases(v0.1)
Owner

valeo.ai

We are an international team based in Paris, conducting AI research for Valeo automotive applications, in collaboration with world-class academics.