E2VID_ROS: E2VID extended to a real-time system

Overview

Introduction

We extend E2VID into a real-time system. Because a Python ROS subscriber callback incurs a large delay, we use dvs_event_server to transport the "/dvs/events" ROS topic to Python.
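
Given the pyzmq and protobuf dependencies below, the transport appears to be protobuf-serialized events over ZeroMQ. A rough sketch of the receiving side follows; the socket pattern is an assumption and EventPacket is a hypothetical stand-in for the protobuf class that dvs_event_server actually generates:

import zmq

# Connect to dvs_event_server; address and port must match dvs_event_server.launch.
context = zmq.Context()
socket = context.socket(zmq.SUB)          # assumption: publish/subscribe pattern
socket.setsockopt(zmq.SUBSCRIBE, b'')
socket.connect('tcp://127.0.0.1:10001')

while True:
    payload = socket.recv()               # one serialized packet of events
    # events = EventPacket.FromString(payload)   # hypothetical protobuf class
    # ...feed the decoded (t, x, y, polarity) events to the reconstruction loop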

Install with Anaconda

In addition to the dependencies E2VID needs, a few extra packages are required. Activate the E2VID conda environment, and then:

pip install rospkg
pip install pyzmq
pip install protobuf
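
To sanity-check the installation, note that the import names differ from the pip package names (zmq for pyzmq, google.protobuf for protobuf):

python -c "import rospkg, zmq, google.protobuf; print('ok')"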

Usage

Adjust the event camera parameters in run_reconstruction_ros.py according to your camera and surroundings:

self.width = 346
self.height = 260
self.event_window_size = 30000

Adjust the event topic name and the event window size in dvs_event_server.launch:

<param name="/event_topic_name" type="str" value="/dvs/events"/>
<param name="/event_window_size" type="int" value="30000" />

Make sure the IP address and port in run_reconstruction_ros.py match those in dvs_event_server.launch:

self.socket.connect('tcp://127.0.0.1:10001')
<param name="/ip_address" type="str" value="127.0.0.1" />
<param name="/port" type="str" value="10001" />

First, run dvs_event_server:

roslaunch dvs_event_server dvs_event_server.launch

Then run E2VID_ROS (when shutting down, close run_reconstruction_ros.py before dvs_event_server):

python run_reconstruction_ros.py

You can play a rosbag or use your own camera:

rosbag play example.bag

You can use dvs_renderer or dv_ros to compare the reconstructed frames with the event frames.

You can use rqt_image_view or rviz to visualize the /e2vid/image topic.
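
To consume the reconstruction in your own node instead, a minimal subscriber could look like the following sketch (it assumes /e2vid/image is published as sensor_msgs/Image and that mono8 is an acceptable target encoding):

import rospy
import cv2
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

bridge = CvBridge()

def on_image(msg):
    # Convert the reconstructed frame to an OpenCV image and display it.
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding='mono8')  # mono8 is an assumption
    cv2.imshow('e2vid', frame)
    cv2.waitKey(1)

rospy.init_node('e2vid_viewer')
rospy.Subscriber('/e2vid/image', Image, on_image)
rospy.spin()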




High Speed and High Dynamic Range Video with an Event Camera

This is the code for the paper High Speed and High Dynamic Range Video with an Event Camera by Henri Rebecq, René Ranftl, Vladlen Koltun and Davide Scaramuzza.

You can find a PDF of the paper here. If you use any of this code, please cite the following publications:

@Article{Rebecq19pami,
  author        = {Henri Rebecq and Ren{\'{e}} Ranftl and Vladlen Koltun and Davide Scaramuzza},
  title         = {High Speed and High Dynamic Range Video with an Event Camera},
  journal       = {{IEEE} Trans. Pattern Anal. Mach. Intell. (T-PAMI)},
  url           = {http://rpg.ifi.uzh.ch/docs/TPAMI19_Rebecq.pdf},
  year          = 2019
}
@Article{Rebecq19cvpr,
  author        = {Henri Rebecq and Ren{\'{e}} Ranftl and Vladlen Koltun and Davide Scaramuzza},
  title         = {Events-to-Video: Bringing Modern Computer Vision to Event Cameras},
  journal       = {{IEEE} Conf. Comput. Vis. Pattern Recog. (CVPR)},
  year          = 2019
}

Install

Dependencies: PyTorch, torchvision, pandas, and OpenCV (installed below via Anaconda).

Install with Anaconda

The installation requires Anaconda3. You can create a new Anaconda environment with the required dependencies as follows (make sure to adapt the CUDA toolkit version according to your setup):

conda create -n E2VID
conda activate E2VID
conda install pytorch torchvision cudatoolkit=10.0 -c pytorch
conda install pandas
conda install -c conda-forge opencv

Run

  • Download the pretrained model:
wget "http://rpg.ifi.uzh.ch/data/E2VID/models/E2VID_lightweight.pth.tar" -O pretrained/E2VID_lightweight.pth.tar
  • Download an example file with event data:
wget "http://rpg.ifi.uzh.ch/data/E2VID/datasets/ECD_IJRR17/dynamic_6dof.zip" -O data/dynamic_6dof.zip

Before running the reconstruction, make sure the conda environment is sourced:

conda activate E2VID
  • Run reconstruction:
python run_reconstruction.py \
  -c pretrained/E2VID_lightweight.pth.tar \
  -i data/dynamic_6dof.zip \
  --auto_hdr \
  --display \
  --show_events

Parameters

Below is a description of the most important parameters:

Main parameters

  • --window_size / -N (default: None) Number of events per window. This is the parameter with the most influence on the image reconstruction quality. If set to None, this number will be automatically computed based on the sensor size, as N = width * height * num_events_per_pixel (see the description of that parameter below, and the sketch after this list). Ignored if --fixed_duration is set.
  • --fixed_duration (default: False) If True, will use windows of events with a fixed duration (i.e. a fixed output frame rate).
  • --window_duration / -T (default: 33 ms) Duration of each event window, in milliseconds. The value of this parameter has a strong influence on the image reconstruction quality, and may need to be adapted to the dynamics of the scene. Ignored if --fixed_duration is not set.
  • --Imin (default: 0.0), --Imax (default: 1.0): linear tone mapping is performed by normalizing the output image as follows: I = (I - Imin) / (Imax - Imin). If --auto_hdr is set to True, --Imin and --Imax will be automatically computed as the min (resp. max) intensity values.
  • --auto_hdr (default: False) Automatically compute --Imin and --Imax. Disabled when --color is set.
  • --color (default: False): if True, will perform color reconstruction as described in the paper. Only use this with a color event camera such as the Color DAVIS346.
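
As a worked example of the automatic window size and the linear tone mapping described above (plain NumPy; the clipping to [0, 1] is our own safety assumption, not necessarily the project's exact behavior):

import numpy as np

# Automatic window size when --window_size is None:
# N = width * height * num_events_per_pixel.
# With the default num_events_per_pixel = 0.35 on a 240x180 sensor,
# N = 240 * 180 * 0.35 = 15120, i.e. roughly 15,000 events per window.
width, height, num_events_per_pixel = 240, 180, 0.35
N = int(width * height * num_events_per_pixel)

# Linear tone mapping: I = (I - Imin) / (Imax - Imin).
def tonemap(I, Imin=0.0, Imax=1.0):
    return np.clip((I - Imin) / (Imax - Imin), 0.0, 1.0)

# --auto_hdr: Imin and Imax are taken as the min/max intensity of the image.
def tonemap_auto(I):
    return tonemap(I, I.min(), I.max())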

Output parameters

  • --output_folder: path of the output folder. If not set, the image reconstructions will not be saved to disk.
  • --dataset_name: name of the output directory inside --output_folder (default: 'reconstruction').

Display parameters

  • --display (default: False): display the video reconstruction in real-time in an OpenCV window.
  • --show_events (default: False): show the input events side-by-side with the reconstruction. If --output_folder is set, the previews will also be saved to disk in /path/to/output/folder/events.

Additional parameters

  • --num_events_per_pixel (default: 0.35): Parameter used to automatically estimate the window size based on the sensor size. The value of 0.35 was chosen to correspond to ~ 15,000 events on a 240x180 sensor such as the DAVIS240C.
  • --no-normalize (default: False): Disable event tensor normalization: this will slightly improve speed, but might slightly degrade image quality.
  • --no-recurrent (default: False): Disable the recurrent connection (i.e. do not maintain a state). For experimenting only; the results will flicker a lot.
  • --hot_pixels_file (default: None): Path to a file specifying the locations of hot pixels (such a file can be obtained with this tool for example). These pixels will be ignored (i.e. zeroed out in the event tensors, as sketched below).
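
A sketch of what hot-pixel masking amounts to (the one-"x y"-pair-per-line file format and the (bins, height, width) tensor layout are assumptions):

import numpy as np

def zero_hot_pixels(event_tensor, hot_pixels_file):
    # Assumed file format: one "x y" pair per line.
    hot_pixels = np.loadtxt(hot_pixels_file, dtype=int).reshape(-1, 2)
    # Assumed tensor layout: (num_bins, height, width).
    for x, y in hot_pixels:
        event_tensor[:, y, x] = 0.0
    return event_tensor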

Example datasets

We provide a list of example (publicly available) event datasets to get started with E2VID.

Working with ROS

Because PyTorch recommends Python 3 and ROS is only compatible with Python 2, it is not straightforward to have the PyTorch reconstruction code and ROS code running in the same environment. To make things easy, the reconstruction code we provide has no dependency on ROS, and simply reads events from a text file or ZIP file. We provide convenience functions to convert ROS bags (a popular format for event datasets) into event text files. In addition, we also provide scripts to convert a folder containing image reconstructions back into a rosbag (or to append image reconstructions to an existing rosbag).

Note: it is not necessary to have a sourced conda environment to run the following scripts. However, ROS needs to be installed and sourced.

rosbag -> events.txt

To extract the events from a rosbag to a zip file containing the event data:

python scripts/extract_events_from_rosbag.py /path/to/rosbag.bag \
  --output_folder=/path/to/output/folder \
  --event_topic=/dvs/events
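
The script above handles this for you; conceptually, the extraction boils down to something like the following sketch (it assumes dvs_msgs/EventArray messages and a plain "t x y p" line format; check the script for the exact header and archive layout it produces):

import rosbag

with rosbag.Bag('/path/to/rosbag.bag') as bag, open('events.txt', 'w') as f:
    for topic, msg, t in bag.read_messages(topics=['/dvs/events']):
        for e in msg.events:
            # One event per line: timestamp [s], x, y, polarity (0 or 1).
            f.write('%.9f %d %d %d\n' % (e.ts.to_sec(), e.x, e.y, int(e.polarity)))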

image reconstruction folder -> rosbag

python scripts/image_folder_to_rosbag.py \
  --datasets dynamic_6dof \
  --image_folder /path/to/image/folder \
  --output_folder /path/to/output_folder \
  --image_topic /dvs/image_reconstructed

Append image_reconstruction_folder to an existing rosbag

cd scripts
python embed_reconstructed_images_in_rosbag.py \
  --rosbag_folder /path/to/rosbag/folder \
  --datasets dynamic_6dof \
  --image_folder /path/to/image/folder \
  --output_folder /path/to/output_folder \
  --image_topic /dvs/image_reconstructed

Generating a video reconstruction (with a fixed framerate)

It can be convenient to convert an image folder to a video with a fixed framerate (for example for use in a video editing tool). You can proceed as follows:

export FRAMERATE=30
python resample_reconstructions.py -i /path/to/input_folder -o /tmp/resampled -r $FRAMERATE
ffmpeg -framerate $FRAMERATE -i /tmp/resampled/frame_%010d.png video_"$FRAMERATE"Hz.mp4

Acknowledgements

This code borrows from several open source projects, which we would like to thank.
