DeFMO: Deblurring and Shape Recovery of Fast Moving Objects (CVPR 2021)

Overview

This repository contains the evaluation, training, demo, and inference code for DeFMO.


Denys Rozumnyi, Martin R. Oswald, Vittorio Ferrari, Jiri Matas, Marc Pollefeys

Qualitative results: https://www.youtube.com/watch?v=pmAynZvaaQ4

Pre-trained models

The pre-trained DeFMO model, as reported in the paper, is available here: https://polybox.ethz.ch/index.php/s/M06QR8jHog9GAcF. Download the checkpoint files and put them in the ./saved_models sub-folder.
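
As a quick sanity check, the downloaded checkpoints can be loaded with PyTorch. This is only a sketch: it assumes the files in ./saved_models are standard PyTorch checkpoint files and makes no assumption about their exact names.

# Sketch: verify that the downloaded checkpoints load as PyTorch objects.
# Assumes standard .pt/.pth files in ./saved_models; adjust if the archive differs.
import os
import torch

saved_dir = "./saved_models"
for fname in sorted(os.listdir(saved_dir)):
    if fname.endswith((".pt", ".pth")):
        ckpt = torch.load(os.path.join(saved_dir, fname), map_location="cpu")  # CPU load, no GPU needed
        print(f"{fname}: loaded {type(ckpt).__name__}")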

Inference

To generate temporal super-resolution of a video:

python run.py --video example/falling_pen.avi

To generate temporal super-resolution of a single frame with a given background image:

python run.py --im example/im.png --bgr example/bgr.png
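
To process several clips in one go, the documented command line can be driven from a small wrapper script. This is a sketch only; the input folder and file pattern are assumptions, and the only interface taken from this README is run.py --video.

# Sketch: batch-run the documented CLI over a folder of clips.
# "my_clips" and the *.avi pattern are hypothetical; only the --video flag comes from the README.
import glob
import subprocess

for video_path in sorted(glob.glob("my_clips/*.avi")):
    print("Processing", video_path)
    subprocess.run(["python", "run.py", "--video", video_path], check=True)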

Evaluation

After downloading the pre-trained models and the evaluation datasets, you can run

python eval_dataset.py
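
For a quick standalone check of a single reconstruction against its ground truth, standard image-quality metrics such as PSNR and SSIM can be computed with scikit-image. This is an illustrative sketch with hypothetical file names, not the evaluation protocol implemented in eval_dataset.py.

# Illustrative sketch: PSNR/SSIM between one reconstructed frame and its ground truth.
# File names are hypothetical and assumed to be RGB images; eval_dataset.py implements the actual benchmark protocol.
import numpy as np
from skimage import io
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

gt = io.imread("gt_frame.png").astype(np.float32) / 255.0
pred = io.imread("pred_frame.png").astype(np.float32) / 255.0

print("PSNR:", peak_signal_noise_ratio(gt, pred, data_range=1.0))
print("SSIM:", structural_similarity(gt, pred, channel_axis=-1, data_range=1.0))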

Synthetic dataset generation

For the dataset generation, please download the required assets, which include the ShapeNet object models referenced in the note below.

Then, insert your paths in the renderer/settings.py file. To generate the dataset, run the following in the renderer sub-folder:

python run_render.py

Note that the full training dataset with 50 object categories, 1000 objects per category, and 24 timestamps requires up to 1 TB of storage. Due to this and the ShapeNet licence, we cannot make the pre-generated dataset public - please generate it yourself using the steps above.
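
As a back-of-the-envelope check of the 1 TB figure, the numbers above (50 categories, 1000 objects per category, 24 timestamps) imply roughly the following average storage per rendered timestamp; the per-render size is only an implied average, not a measured value.

# Rough arithmetic behind the ~1 TB estimate quoted above.
categories = 50
objects_per_category = 1000
timestamps = 24

total_renders = categories * objects_per_category * timestamps   # 1,200,000 rendered timestamps
total_bytes = 1024**4                                             # "up to 1 TB"

avg_mb = total_bytes / total_renders / 1024**2
print(f"{total_renders:,} renders -> ~{avg_mb:.2f} MB per render on average")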

Training

Set up all paths in main_settings.py and run

python train.py
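
The variable names inside main_settings.py are not reproduced here; the sketch below only illustrates the kind of paths the training step expects (generated dataset location, output folder for checkpoints), using hypothetical names.

# Hypothetical illustration of the paths train.py relies on.
# The real variable names live in main_settings.py and may differ.
import os

DATASET_ROOT = "/data/defmo_synthetic"   # hypothetical: where the rendered training data lives
MODELS_DIR = "./saved_models"            # hypothetical: where trained checkpoints are written

os.makedirs(MODELS_DIR, exist_ok=True)
assert os.path.isdir(DATASET_ROOT), "Generate the synthetic dataset first (see above)."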

Evaluation on real-world datasets

All evaluation datasets can be found at http://cmp.felk.cvut.cz/fmo/. We provide a download_datasets.sh script to download the Falling Objects, the TbD-3D, and the TbD datasets.

Reference

If you use this repository, please cite the following publication (https://arxiv.org/abs/2012.00595):

@inproceedings{defmo,
  author = {Denys Rozumnyi and Martin R. Oswald and Vittorio Ferrari and Jiri Matas and Marc Pollefeys},
  title = {DeFMO: Deblurring and Shape Recovery of Fast Moving Objects},
  booktitle = {CVPR},
  address = {Nashville, Tennessee, USA},
  month = jun,
  year = {2021}
}
Comments
  • Question about training set


    Hi, thanks for your generous sharing.

I have a question about the training set generation in your work. I generated a training set following your code. Its size is about 100 GB, far less than 1 TB. Is there anything wrong?

    Thanks.

    opened by fan-hd 11
  • Apply your model on custom longer video clips


Hi, thank you for releasing your code.

Can your model be applied to custom videos of a high-speed train crossing? The video clips last from 3 to 10 seconds; my idea was to preprocess them with your code in order to keep the same frame rate and obtain better video quality for later object detection. This is an example frame from the original video clip:

[screenshot: vlcsnap-2021-05-25-15h27m32s030]

I tried to run your code on a video of about 6 seconds, and the result was a much longer video (about 13 min) with a lower level of detail; I'm probably doing something wrong. This is an example frame from the output video clip:

[screenshot: vlcsnap-2021-05-25-15h26m22s237]

How can I correctly reconstruct the quality of single frames using all the information contained in the video?

    opened by fabiozappo 4
  • Question about comparison with Jin et al.'s work (CVPR2018)


    Hi, thank you for your interesting work! I have a question about the comparison of methods in your work. When making comparisons, did you retrain Jin et al.'s model ("Learning to Extract a Video Sequence from a Single Motion-Blurred Image" from CVPR 2018), or did you just use their pre-trained checkpoints? I couldn't find the training code on their github page.

    opened by zzh-tech 2
  • Padding in Time-Consistency Loss


    Hi,

    Congratulations!

I found that the normalized cross-correlation uses "padding = tuple(side // 10 for side in sh[:2]) + (0,)". Does it only pad the height axis, since the padding tuple will be (4//10, H//10, 0)?

    Thanks a lot.

    opened by JLiu-Edinburgh 1
  • run on google colab!


I'm confused and need help running the code on Google Colab, or more explanation of how to run it in VS Code or another environment. If someone knows how, please help me.

    opened by ganikas 3
Releases: v1.0
Owner: Denys Rozumnyi, PhD student at ETH Zurich.