Bonnet: An Open-Source Training and Deployment Framework for Semantic Segmentation in Robotics.

Overview



By Andres Milioto @ University of Bonn.

(for the new PyTorch version, go here)

[Image: Cityscapes urban scene understanding]
[Image: Person segmentation]
[Image: Crop vs. weed semantic segmentation]

Description

This code provides a framework to easily add architectures and datasets in order to train and deploy CNNs for a robot. It contains a full training pipeline in Python using TensorFlow and OpenCV, as well as C++ apps to deploy a frozen protobuf in ROS and standalone. The C++ library is designed so that other backends (such as TensorRT and MvNCS) can be added, but only TensorFlow and TensorRT are implemented at the moment. We will keep it this way for now because we are mostly interested in deployment for the Jetson and Drive platforms, but if you have a specific need, we accept pull requests!

The included networks are based on many other architectures (see below), but they are not an exact copy of any of them. As seen in the videos, they run very fast on both GPU and CPU, and they are designed with performance in mind, at the cost of a slight accuracy loss. Feel free to use them as a model to implement your own architecture.

All scripts have been tested on the following configurations:

  • x86 Ubuntu 16.04 with an NVIDIA GeForce 940MX GPU (nvidia-384, CUDA9, CUDNN7, TF 1.7, TensorRT3)
  • x86 Ubuntu 16.04 with an NVIDIA GTX1080Ti GPU (nvidia-375, CUDA9, CUDNN7, TF 1.7, TensorRT3)
  • x86 Ubuntu 16.04 and 14.04 with no GPU (TF 1.7, running on CPU in NHWC mode, no TensorRT support)
  • Jetson TX2 (full Jetpack 3.2)

We also provide a Dockerfile to make it easy to run without worrying about the dependencies. It is based on the official nvidia/cuda image containing CUDA 9 and CUDNN 7. In order to build and run this image with support for X11 (to display the results), you can run the following in the repo root directory (nvidia-docker should be used instead of vanilla docker):

  $ docker pull tano297/bonnet:cuda9-cudnn7-tf17-trt304
  $ nvidia-docker build -t bonnet .
  $ nvidia-docker run -ti --rm -e DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix -v $HOME/.Xauthority:/home/developer/.Xauthority -v /home/$USER/data:/shared --net=host --pid=host --ipc=host bonnet /bin/bash

-v /home/$USER/data:/shared can be replaced to point to wherever you store your data and trained models, in order to make the data available inside the container for inference/training.

Deployment

  • /deploy_cpp contains the C++ code for on-robot deployment of the full pipeline, which takes an image as input and outputs the pixel-wise predictions and the color masks (which depend on the problem). It includes both a standalone app, meant as an example of usage and build, and a ROS node that subscribes to an image topic and publishes two topics with the labeled mask and the colored labeled mask (see the sketch after this list).

  • Readme here
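
As a rough illustration of how the ROS outputs might be consumed, the sketch below subscribes to the two published masks from Python. The topic names (/bonnet/label_mask, /bonnet/color_mask) and image encodings are assumptions made up for this example; check the deploy_cpp readme and launch files for the actual ones.

  #!/usr/bin/env python
  # Minimal sketch: subscribe to the masks published by the Bonnet ROS node.
  # NOTE: the topic names and encodings below are hypothetical; check the
  # deploy_cpp readme and launch files for the real ones.
  import rospy
  from cv_bridge import CvBridge
  from sensor_msgs.msg import Image

  bridge = CvBridge()

  def label_cb(msg):
      # Per-pixel class ids, assumed to be published as a single-channel image.
      labels = bridge.imgmsg_to_cv2(msg, desired_encoding="mono8")
      rospy.loginfo("label mask %dx%d" % (labels.shape[1], labels.shape[0]))

  def color_cb(msg):
      # Colored mask, assumed bgr8, ready to display or overlay on the input.
      color = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
      rospy.loginfo("color mask %dx%d" % (color.shape[1], color.shape[0]))

  if __name__ == "__main__":
      rospy.init_node("bonnet_mask_listener")
      rospy.Subscriber("/bonnet/label_mask", Image, label_cb)  # hypothetical topic
      rospy.Subscriber("/bonnet/color_mask", Image, color_cb)  # hypothetical topic
      rospy.spin()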

Training

  • /train_py contains the Python code to easily build CNN graphs in TensorFlow, train them, and generate the trained models used for deployment. This way the interface with TensorFlow can use the more complete Python API, and we can easily work with files to augment datasets and so on. It also contains some apps for using the models, including the ability to save and use a frozen protobuf and to run the network with TensorRT, which reduces inference time on NVIDIA GPUs (see the sketch after this list).

  • Readme here
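
For orientation only, the sketch below shows how a frozen protobuf produced by the training pipeline could be loaded and run with the TensorFlow 1.x Python API. The graph path, the tensor names (input:0, logits:0), and the input resolution are placeholders made up for this example; the actual names are defined by the /train_py apps and the network configuration.

  # Minimal sketch: run inference with a frozen protobuf using the TF 1.x API.
  # The graph path and tensor names are hypothetical placeholders.
  import cv2
  import numpy as np
  import tensorflow as tf

  GRAPH_PB = "frozen_model.pb"   # placeholder path to the frozen graph
  INPUT_TENSOR = "input:0"       # placeholder input tensor name
  OUTPUT_TENSOR = "logits:0"     # placeholder output tensor name

  # Load the frozen graph definition.
  graph = tf.Graph()
  with tf.gfile.GFile(GRAPH_PB, "rb") as f:
      graph_def = tf.GraphDef()
      graph_def.ParseFromString(f.read())
  with graph.as_default():
      tf.import_graph_def(graph_def, name="")

  # Preprocess one image with OpenCV (resolution must match the trained model).
  img = cv2.imread("example.png")
  img = cv2.resize(img, (512, 256), interpolation=cv2.INTER_LINEAR)
  batch = img[np.newaxis, ...].astype(np.float32)

  # Run the network and take the argmax over the class channel (assuming NHWC).
  with tf.Session(graph=graph) as sess:
      logits = sess.run(graph.get_tensor_by_name(OUTPUT_TENSOR),
                        feed_dict={graph.get_tensor_by_name(INPUT_TENSOR): batch})
  mask = np.argmax(logits[0], axis=-1).astype(np.uint8)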

Pre-trained models

These are some models trained on sample datasets that you can use with the trainer and the deployer. If you want to take the time to write the parsers for another dataset (a yaml file with classes and colors, plus a Python script to put the data into the standard dataset format), feel free to create a pull request.
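
As a rough, hedged illustration of what such a dataset definition could look like, the snippet below parses a hypothetical yaml file listing class ids, names, and colors. The keys and values are assumptions made up for this example; the actual schema and the standard dataset format are defined by the existing parsers in /train_py.

  # Sketch of a classes/colors yaml for a new dataset parser, parsed in Python.
  # The keys ("labels", "colors") and the values are illustrative assumptions;
  # follow the yaml files of the existing datasets in train_py for the real schema.
  import yaml

  EXAMPLE_YAML = """
  labels:
    0: background
    1: crop
    2: weed
  colors:            # BGR color used to render each class in the colored mask
    0: [0, 0, 0]
    1: [0, 255, 0]
    2: [0, 0, 255]
  """

  cfg = yaml.safe_load(EXAMPLE_YAML)
  for class_id, name in sorted(cfg["labels"].items()):
      print(class_id, name, cfg["colors"][class_id])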

If you don't have GPUs and the task is interesting for robots, I will gladly train it whenever I have some free GPU time on our servers.

  • Cityscapes:

    • 512x256 Link
    • 768x384 Link (inception-like model)
    • 768x384 Link (mobilenets-like model)
    • 1024x512 Link
  • Synthia:

  • Persons (+coco people):

  • Crop-Weed (CWC):

License

This software

Bonnet is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

Bonnet is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

Pretrained models

The pretrained models trained with a specific dataset keep the copyright of that dataset.

Citation

If you use our framework for any academic work, please cite the following paper.

@InProceedings{milioto2019icra,
  author    = {A. Milioto and C. Stachniss},
  title     = {{Bonnet: An Open-Source Training and Deployment Framework for Semantic Segmentation in Robotics using CNNs}},
  booktitle = {Proc. of the IEEE Intl. Conf. on Robotics \& Automation (ICRA)},
  year      = 2019,
  codeurl   = {https://github.com/Photogrammetry-Robotics-Bonn/bonnet},
  videourl  = {https://www.youtube.com/watch?v=tfeFHCq6YJs},
}

Our networks are strongly based on the following architectures, so if you use them for any academic work, please have a look at their papers and cite them if appropriate:

Other useful GitHub repositories:

  • OpenAI Checkpointed Gradients. Useful implementation of checkpointed gradients to be able to fit big models in GPU memory without sacrificing runtime.
  • Queueing tool: Very nice queueing tool to share GPU, CPU, and memory resources in a multi-GPU environment.
  • Tensorflow_cc: Very useful repo to compile TensorFlow either as a shared or static library using CMake, in order to be able to compile our C++ apps against it.

Contributors

Milioto, Andres

Special thanks to Philipp Lottes for all the work shared during the last year, and to Olga Vysotka and Susanne Wenzel for beta testing the framework :)

Acknowledgements

This work has partly been supported by the German Research Foundation under Germany's Excellence Strategy, EXC-2070 - 390732324 (PhenoRob). We also thank NVIDIA Corporation for providing a Quadro P6000 GPU partially used to develop this framework.

TODOs

  • Merge the crop-weed CNN with background knowledge into this repo.
  • Make a multi-camera ROS node that exploits batching to make inference faster than sequential processing.
  • Movidius Neural Compute Stick C++ backend (plus others as they become available).
  • Inference node to show the classes selectively (e.g. with a Qt GUI).