Free View Synthesis

Code repository for "Free View Synthesis", ECCV 2020.

Setup

Install the following packages in your Python environment:

- numpy (1.19.1)
- scikit-image (0.15.0)
- pillow (7.2.0)
- pytorch (1.6.0)
- torchvision (0.7.0)
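
For example, using pip with the versions listed above (other package managers such as conda work as well):

pip install numpy==1.19.1 scikit-image==0.15.0 pillow==7.2.0 torch==1.6.0 torchvision==0.7.0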

Clone the repository and initialize the submodule

git clone https://github.com/intel-isl/FreeViewSynthesis.git
cd FreeViewSynthesis
git submodule update --init --recursive

Finally, build the Python extension needed for preprocessing

cd ext/preprocess
cmake -DCMAKE_BUILD_TYPE=Release .
make 

Tested on Ubuntu 18.04 and macOS Catalina. If you do not have a C++17-compatible compiler, you can change the code as described here.

Run Free View Synthesis

Make sure you adapt the paths in config.py to point to the downloaded data!

You can download the pre-trained models here

# in FreeViewSynthesis directory
wget https://storage.googleapis.com/isl-datasets/FreeViewSynthesis/experiments.tar.gz
tar xvzf experiments.tar.gz
# there should now be net*params files in exp/experiments/*/

Then run the evaluation via

python exp.py --net rnn_vgg16unet3_gruunet4.64.3 --cmd eval --iter last --eval-dsets tat-subseq --eval-scale 0.5

This will run the pretrained network on the four Tanks and Temples sequences.

To train the network from scratch, you can run

python exp.py --net rnn_vgg16unet3_gruunet4.64.3 --cmd retrain

Data

We provide the preprocessed Tanks and Temples dataset as we used it for training and evaluation here. Our new recordings can be downloaded in a preprocessed version from here.

We used COLMAP for camera registration, multi-view stereo, and surface reconstruction at full resolution. The packages above contain the already undistorted and registered images. In addition, we provide the estimated camera calibrations, the rendered depth maps used for warping, and the closest source image information.

In more detail, a single folder ibr3d_*_scale (where scale is the scale factor with respect to the original images) contains:

  • im_XXXXXXXX.[png|jpg] the downsampled images, used as source or as target images.
  • dm_XXXXXXXX.npy the rendered depthmaps based on the COLMAP surface reconstruction.
  • Ks.npy contains the 3x3 intrinsic camera matrices, where Ks[idx] corresponds to the depth map dm_{idx:08d}.npy.
  • Rs.npy contains the 3x3 rotation matrices from the world coordinate system to the camera coordinate system.
  • ts.npy contains the 3D translation vectors from the world coordinate system to the camera coordinate system.
  • count_XXXXXXXX.npy contains the overlap information from target images to source images, i.e., the number of pixels that can be mapped from the target image to each individual source image. np.argsort(np.load('count_00000000.npy'))[::-1] gives you the indices of the source images sorted from most to least overlapping.

Use np.load to load the numpy files.
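
A minimal NumPy sketch of how these files fit together (the scene path below is a placeholder, and it assumes the conventions suggested above: x_cam = R @ x_world + t, with depth maps storing z-depth in the camera frame):

import numpy as np

scene = "/path/to/ibr3d_scene_scale"  # placeholder: one ibr3d_*_scale folder

Ks = np.load(f"{scene}/Ks.npy")  # (N, 3, 3) intrinsic matrices
Rs = np.load(f"{scene}/Rs.npy")  # (N, 3, 3) world-to-camera rotations
ts = np.load(f"{scene}/ts.npy")  # (N, 3) world-to-camera translations

idx = 0
dm = np.load(f"{scene}/dm_{idx:08d}.npy")  # rendered depth map of image idx

# Source images sorted from most to least overlap with target image idx.
count = np.load(f"{scene}/count_{idx:08d}.npy")
src_order = np.argsort(count)[::-1]

# Lift the depth map of image idx to world coordinates:
# x_world = R^T @ (d * K^-1 @ [u, v, 1]^T - t)
# (pixels without a rendered surface, e.g. zero depth, are not filtered here)
h, w = dm.shape
u, v = np.meshgrid(np.arange(w), np.arange(h))
pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # (3, H*W)
cam = np.linalg.inv(Ks[idx]) @ pix * dm.reshape(1, -1)             # camera coordinates
world = Rs[idx].T @ (cam - ts[idx].reshape(3, 1))                  # world coordinates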

We use the Tanks and Temples dataset for training, except for the following scenes, which are used for evaluation.

  • train/Truck [172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196]
  • intermediate/M60 [94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129]
  • intermediate/Playground [221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252]
  • intermediate/Train [174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248]

The numbers listed for each scene indicate the indices of the target images that we used for evaluation.
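
For illustration, a small sketch of how such a split could be reproduced for one scene (the total image count is a placeholder; the shipped training code handles the split itself):

# Evaluation target indices for train/Truck, as listed above.
eval_targets = set(range(172, 197))  # 172..196
num_images = 251                     # placeholder: total number of images in the scene
train_targets = [i for i in range(num_images) if i not in eval_targets]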

Citation

Please cite our paper if you find this work useful.

@inproceedings{Riegler2020FVS,
  title={Free View Synthesis},
  author={Riegler, Gernot and Koltun, Vladlen},
  booktitle={European Conference on Computer Vision},
  year={2020}
}

Video

Free View Synthesis Video
