Source code for "OmniPhotos: Casual 360° VR Photography"

Overview

OmniPhotos: Casual 360° VR Photography

Project Page | Video | Paper | Demo | Data

This repository contains the source code for creating and viewing OmniPhotos – a new approach for casual 360° VR photography using a consumer 360° video camera.

OmniPhotos: Casual 360° VR Photography
Tobias Bertel, Mingze Yuan, Reuben Lindroos, Christian Richardt
ACM Transactions on Graphics (SIGGRAPH Asia 2020)

Demo

The quickest way to try out OmniPhotos is via our precompiled demo (610 MB). Download and unzip to get started. Documentation for the precompiled binaries, which can also be downloaded separately (25 MB), can be found in the downloaded demo directory.

For the demo to run smoothly, we recommend a recently updated Windows 10 machine with a discrete GPU.

Additional OmniPhotos

We provide 31 OmniPhotos for download:

  • 9 preprocessed datasets that are ready for viewing (3.2 GB zipped, 12.8 GB uncompressed)
  • 31 unprocessed datasets with their input videos, camera poses etc.; this includes the 9 preprocessed datasets (17.4 GB zipped, 17.9 GB uncompressed)

Note: A few of the .insv files are missing for the 5.7k datasets. If you need to process these datasets from scratch, the missing .insv files can be found here.

How to view OmniPhotos

OmniPhotos are viewed using the "Viewer" executable, either in windowed mode (default) or in a compatible VR headset (see below). To view the preprocessed datasets above, run the command:

Viewer.exe path-to-datasets/Preprocessed/

with paths adjusted for your machine. The viewer automatically loads the first dataset in the directory (in alphabetical order) and gives you the option to switch to any of the other datasets it finds there.

If you would like to run the viewer with VR enabled, please make sure that your HMD firmware is up to date and that SteamVR is installed on your machine, then run the command:

Viewer.exe --vr path-to-datasets/Preprocessed/

The OmniPhotos viewer can also load a specific single dataset directly:

Viewer.exe [--vr] path-to-datasets/Preprocessed/Temple3/Config/config-viewer.yaml

How to preprocess datasets

If you would like to preprocess additional datasets, for example "Ship" in the "Unpreprocessed" directory, run the command:

Preprocessing.exe path-to-datasets/Unpreprocessed/Ship/Config/config-viewer.yaml

This will preprocess the dataset according to the options specified in the config file. Once the preprocessing is finished, the dataset can be opened in the Viewer.
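
For example, assuming the Viewer reads the same config file as the preprocessing step (an assumption; adjust the path for your machine):

Viewer.exe path-to-datasets/Unpreprocessed/Ship/Config/config-viewer.yaml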

For processing new datasets from scratch, please follow the detailed documentation at Python/preprocessing/readme.md.

Compiling from source

The OmniPhotos Preprocessing and Viewer applications are written in C++11, with some Python used for preparing datasets.

Both main applications and the included libraries use CMake as the build-system generator. We recommend CMake 3.16 or newer, but older 3.x versions might also work.

Our code has been developed and tested with Microsoft Visual Studio 2015 and 2019 (both 64 bit).
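
The following is a rough sketch of an out-of-source build with Visual Studio 2019, using standard git/CMake commands rather than project-specific instructions (it assumes the top-level CMakeLists.txt sits in the repository root; --recursive fetches the Dear ImGui submodule listed below; adjust the generator for your Visual Studio version):

git clone --recursive https://github.com/cr333/OmniPhotos.git
cd OmniPhotos
mkdir build
cd build
cmake -G "Visual Studio 16 2019" -A x64 ..
cmake --build . --config Release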

Required dependencies

  1. GLFW 3.3 (version 3.3.1 works)
  2. Eigen 3.3 (version 3.3.2 works)
    • Please note: Ceres (an optional dependency) requires Eigen version "3.3.90" (~Eigen master branch).
  3. OpenCV 4.2
    • OpenCV 4.2 includes DIS flow in the main distribution, so precompiled OpenCV can be used.
    • OpenCV 4.1.1 needs to be compiled from source with the optflow contrib package (for DIS flow).
    • We also support the CUDA Brox flow from the cudaoptflow module, if it is compiled in. In this case, tick USE_CUDA_IN_OPENCV in CMake.
  4. OpenGL 4.1: provided by the operating system
  5. glog (a master build newer than 0.4.0 works)
  6. gflags (version 2.2.2 works)
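
If CMake does not find these libraries automatically, one generic option (standard CMake, not a project-specific mechanism; the paths below are placeholders) is to pass their install locations via CMAKE_PREFIX_PATH when configuring:

cmake -G "Visual Studio 16 2019" -A x64 -DCMAKE_PREFIX_PATH="C:/libs/glfw;C:/libs/eigen3;C:/libs/opencv;C:/libs/glog;C:/libs/gflags" ..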

Included dependencies (in /src/3rdParty/)

  1. DearImGui 1.79: included automatically as a git submodule.
  2. GL3W
  3. JsonCpp 1.8.0: amalgamated version
  4. nlohmann/json 3.6.1
  5. OpenVR 1.10.30: enable with WITH_OPENVR in CMake.
  6. TCLAP
  7. tinyfiledialogs 3.3.8

Optional dependencies

  1. Ceres (with SuiteSparse) is required for the scene-adaptive proxy geometry fitting. Enable with USE_CERES in CMake.
  2. googletest (master): automatically added when WITH_TEST is enabled in CMake.
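
The CMake options mentioned in this section (USE_CUDA_IN_OPENCV, WITH_OPENVR, USE_CERES, WITH_TEST) can be set when configuring; as a sketch, assuming they are plain ON/OFF options:

cmake -DUSE_CERES=ON -DWITH_OPENVR=ON -DUSE_CUDA_IN_OPENCV=ON -DWITH_TEST=ON ..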

Citation

Please cite our paper if you use this code or any of our datasets:

@article{OmniPhotos,
  author    = {Tobias Bertel and Mingze Yuan and Reuben Lindroos and Christian Richardt},
  title     = {{OmniPhotos}: Casual 360° {VR} Photography},
  journal   = {ACM Transactions on Graphics},
  year      = {2020},
  volume    = {39},
  number    = {6},
  pages     = {266:1--12},
  month     = dec,
  issn      = {0730-0301},
  doi       = {10.1145/3414685.3417770},
  url       = {https://richardt.name/omniphotos/},
}

Acknowledgements

We thank the reviewers for their thorough feedback that has helped to improve our paper. We also thank Peter Hedman, Ana Serrano and Brian Cabral for helpful discussions, and Benjamin Attal for his layered mesh rendering code.

This work was supported by EU Horizon 2020 MSCA grant FIRE (665992), the EPSRC Centre for Doctoral Training in Digital Entertainment (EP/L016540/1), RCUK grant CAMERA (EP/M023281/1), an EPSRC-UKRI Innovation Fellowship (EP/S001050/1), a Rabin Ezra Scholarship and an NVIDIA Corporation GPU Grant.

Comments
  • Documentation pipeline update

    Pipeline to automatically create documentation on Read the Docs using Doxygen.

    • [ ] link to github page on index.html (mainpage.hpp)
    • [ ] add documentation convention to docs\README.md
    • [ ] fork branch from cr333/main and apply changes to that
    • [ ] create new PR from forked branch
    opened by reubenlindroos 1
  • Adds a progress bar when circleselector is running

    Also improves the speed of the circleselector module by ~50%.

    Todo:

    • [x] add tqdm to requirements.txt
    • [x] np.diffs for find_path_length
    • [x] atomic lock for incrementing the progress bar?
    opened by reubenlindroos 0
  • Circle Selector

    • [x] clean up requirements.txt
    • [x] save plot of heatmaps to the cache/dataset directory.
    • [x] move json file to capture directory
    • [x] update the template with option to switch off circlefitting
    • [x] update template to remove some of the options (e.g op_filename_expression)
    • [x] Update README.md with automatic circle selection (section 2.2)
    • [x] Update documentation for installation?
    • [x] Linting (spacing), comment convention, Pep convention
    • [x] sort imports
    • [x] replace op_filename_expression with original_filename_expression

    cv_utils

    • [x] remove extra copy of computeColor
    • [x] change pjoin to os.path.join
    • [x] add more documentation for parameters in cv_utils (change lookatang to look_at_angle)
    • [x] 'nxt' to 'next'
    • [x] comment on line 100 (slice_equirect)

    datatypes

    • [x] more comments on some of the methods in PointDict
    opened by reubenlindroos 0
  • Documentation pipeline update

    Pipeline to automatically create documentation on Read the Docs using Doxygen.

    • [x] link to github page on index.html (mainpage.hpp)
    • [x] add documentation convention to docs\README.md
    • [x] fork branch from cr333/main and apply changes to that
    • [x] create new PR from forked branch
    • [x] remove documentation for header comment block in docs/README.md
    • [x] cleanup index.rst (try removing, see if sphinx can build anyway)
    • [ ] mainpage.hpp cleanup (capitalise, centralise)
    • [x] clarify line 48 in README.md
    opened by reubenlindroos 0
  • Demo updated

    Converts the demo documentation files from Sphinx-based rst so they can be hosted on GitHub. The site's Markdown API now renders the documentation files rather than Sphinx + Read the Docs.

    opened by reubenlindroos 0
  • Adds build test to master branch on push and PR

    build

    • [ ] change actions to not send email for every build
    • [x] fix requested changes
    • [x] make into squash merge to not mess with main branch history
    • [x] group build steps (building dependencies which have been left separate for debugging purposes)
    • [x] check glog build variables in cmake (e.g. BUILD_TEST should not be enabled)
    • [x] check eigen warnings in build log
    • [x] check if precompiled headers might speed up build
    • [x] check if multithread build could be used
    • [x] remove verbose flag from extraction of opencv

    test

    • [x] add test data download
    • [x] reduce size of test dataset
    • [x] check what happens on failure
    • [ ] check if we can "publish" test results (xml?)
    opened by reubenlindroos 0
  • Get problems while preprocessing

    I downloaded the binary files from https://github.com/cr333/OmniPhotos/releases/download/v1.1/OmniPhotos-v1.1-win10-x64.zip

    I also put ffmpeg.exe on the system Path. However, I'm getting the errors below:

    $ ./preproc/preproc.exe -c preproc-config-template.yaml
    [23276] Failed to execute script 'main' due to unhandled exception!
    Traceback (most recent call last):
      File "main.py", line 24, in <module>
      File "preproc_app.py", line 39, in __init__
      File "data_preprocessor.py", line 32, in __init__
      File "abs_preprocessor.py", line 70, in __init__
      File "abs_preprocessor.py", line 225, in load_origin_data_info
      File "ffmpeg_probe.py", line 20, in probe
      File "subprocess.py", line 800, in __init__
      File "subprocess.py", line 1207, in _execute_child
    FileNotFoundError: [WinError 2]

    opened by BlairLeng 2