MemSeg: Memory-based semantic segmentation for off-road unstructured natural environments

Introduction

This repository is a PyTorch implementation of Memory-based semantic segmentation for off-road unstructured natural environments. This work is based on semseg.

The codebase mainly uses ResNet18, ResNet50, and MobileNet-V2 backbones with an ASPP module, and can easily be adapted to other basic semantic segmentation architectures.
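For orientation, here is a minimal sketch of that layout assembled from torchvision building blocks. It is an illustration of a backbone feeding an ASPP-style head, not the repository's actual model; the class name, the default class count, and the simplified ASPP (no image-pooling branch or batch norm) are all assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision


class BackboneASPPSketch(nn.Module):
    """Illustrative only: ResNet-50 features feeding a simplified ASPP head."""

    def __init__(self, num_classes=24, atrous_rates=(6, 12, 18)):
        super().__init__()
        resnet = torchvision.models.resnet50()  # resnet18 / mobilenet_v2 can be swapped in similarly
        # Keep everything up to the last residual stage as the feature extractor.
        self.backbone = nn.Sequential(*list(resnet.children())[:-2])  # -> 2048 channels
        # One 1x1 branch plus parallel atrous 3x3 branches (simplified ASPP).
        self.branches = nn.ModuleList(
            [nn.Conv2d(2048, 256, 1, bias=False)]
            + [nn.Conv2d(2048, 256, 3, padding=r, dilation=r, bias=False) for r in atrous_rates]
        )
        self.project = nn.Conv2d(256 * (1 + len(atrous_rates)), 256, 1)
        self.classifier = nn.Conv2d(256, num_classes, 1)

    def forward(self, x):
        size = x.shape[-2:]
        feats = self.backbone(x)
        feats = torch.cat([branch(feats) for branch in self.branches], dim=1)
        logits = self.classifier(F.relu(self.project(feats)))
        # Upsample per-pixel class logits back to the input resolution.
        return F.interpolate(logits, size=size, mode="bilinear", align_corners=False)


# Example: torch.randn(1, 3, 512, 512) in -> logits of shape (1, num_classes, 512, 512) out.
```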

The dataset used in the example experiments is RUGD.

Requirements

Hardware: a GPU with >= 11 GB of memory

Software: PyTorch >= 1.0.0, Python 3
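An optional sanity check for these requirements (plain PyTorch, not part of this repository):

```python
import sys
import torch

# Check interpreter and framework versions against the stated requirements.
assert sys.version_info[0] >= 3, "Python 3 is required"
print("PyTorch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())

if torch.cuda.is_available():
    # Roughly 11 GB of GPU memory is recommended for training.
    total_gb = torch.cuda.get_device_properties(0).total_memory / 1024 ** 3
    print(f"GPU 0: {torch.cuda.get_device_name(0)}, {total_gb:.1f} GB (>= 11 GB recommended)")
```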

Usage

For installation, follow the steps below, or refer to the instructions described here.

The pretrained ResNet50 backbone model can be downloaded from URL.

Getting Started

Installation

1. Clone this repository.

   git clone https://github.com/youngsjjn/MemSeg.git

2. Install Python dependencies.

   pip install -r requirements.txt

Implementation

1. Download the dataset (e.g. RUGD) and set the data root path in the corresponding config file.

Download the data list of RUGD here.
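After editing the config, a quick path check such as the following can save a failed run. It is only a sketch: the config file name and the key names (data_root, train_list, val_list) follow the semseg-style YAML layout this code is based on and may differ in your copy.

```python
import os
import yaml

CONFIG = "config/rugd/rugd_deeplab50.yaml"  # assumed name; use your actual config file

with open(CONFIG) as f:
    cfg = yaml.safe_load(f)

# semseg-style configs group keys under sections (e.g. DATA, TRAIN); flatten one level.
flat = dict(cfg)
for value in cfg.values():
    if isinstance(value, dict):
        flat.update(value)

for key in ("data_root", "train_list", "val_list"):
    path = flat.get(key)
    status = "ok" if isinstance(path, str) and os.path.exists(path) else "missing or not set"
    print(f"{key}: {path} ({status})")
```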

2. Inference. To run inference with the pretrained models, download the pretrained networks from my drive and save them in ./exp/rugd/.

Inference "ResNet50 + Deeplabv3" without the memory module

sh tool/test.sh rugd deeplab50

Inference "ResNet50 + Deeplabv3" with the memory module

sh tool/test_mem.sh rugd deeplab50mem
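The test scripts handle checkpoint loading themselves. If you want to inspect a downloaded checkpoint manually, here is a hedged sketch: the file name is hypothetical, and the "module." prefix handling assumes the checkpoint was saved from a DataParallel-wrapped model.

```python
import torch

CKPT = "exp/rugd/deeplab50/model/checkpoint.pth"  # hypothetical path; use the file you downloaded

ckpt = torch.load(CKPT, map_location="cpu")
# Some checkpoints wrap the weights in a dict under "state_dict"; others are the state dict itself.
state = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt
# Checkpoints saved from nn.DataParallel models prefix parameter names with "module.".
state = {k[len("module."):] if k.startswith("module.") else k: v for k, v in state.items()}
print(len(state), "parameter tensors; first key:", next(iter(state)))
```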
| Network | mIoU |
| --- | --- |
| ResNet18 + PSPNet | 33.42 |
| ResNet18 + PSPNet (Memory) | 34.13 |
| ResNet18 + Deeplabv3 | 33.48 |
| ResNet18 + Deeplabv3 (Memory) | 35.07 |
| ResNet50 + Deeplabv3 | 36.77 |
| ResNet50 + Deeplabv3 (Memory) | 37.71 |
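For reference, the mIoU values above are the mean intersection-over-union over classes, accumulated over the whole test split. A standard confusion-matrix implementation of the metric (generic, not code from this repository) looks like this:

```python
import numpy as np

def confusion_matrix(pred, gt, num_classes):
    """conf[i, j] = number of pixels with ground-truth class i predicted as class j."""
    pred, gt = pred.reshape(-1), gt.reshape(-1)
    valid = gt < num_classes                      # drop ignore/void labels
    idx = num_classes * gt[valid] + pred[valid]
    return np.bincount(idx, minlength=num_classes ** 2).reshape(num_classes, num_classes)

def mean_iou(conf):
    tp = np.diag(conf).astype(np.float64)
    fp = conf.sum(axis=0) - tp
    fn = conf.sum(axis=1) - tp
    denom = tp + fp + fn
    iou = np.where(denom > 0, tp / np.maximum(denom, 1), np.nan)  # skip classes absent from the split
    return float(np.nanmean(iou))
```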
3. Train (evaluation is run automatically at the end of training).

Train "ResNet50 + Deeplabv3" without the memory module

sh tool/train.sh rugd deeplab50

Train "ResNet50 + Deeplabv3" without the memory module

sh tool/train_mem.sh rugd deeplab50mem

The examples above train and test "ResNet50 + Deeplabv3". To use another network, replace "deeplab50" or "deeplab50mem" with the postfix of the corresponding config file name.

For example, train "ResNet18 + PSPNet" with the memory module:

sh tool/train_mem.sh rugd pspnet18mem

Citation

If you find our work useful and use the code or models in your research, please cite it as follows.

@article{DBLP:journals/corr/abs-2108-05635,
  author    = {Youngsaeng Jin and
               David K. Han and
               Hanseok Ko},
  title     = {Memory-based Semantic Segmentation for Off-road Unstructured Natural
               Environments},
  journal   = {CoRR},
  volume    = {abs/2108.05635},
  year      = {2021},
  url       = {https://arxiv.org/abs/2108.05635},
  eprinttype = {arXiv},
  eprint    = {2108.05635},
  timestamp = {Wed, 18 Aug 2021 19:45:42 +0200},
  biburl    = {https://dblp.org/rec/journals/corr/abs-2108-05635.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}