Inverse Rendering for Complex Indoor Scenes: Shape, Spatially-Varying Lighting and SVBRDF From a Single Image

Zhengqin Li, Mohammad Shafiei, Ravi Ramamoorthi, Kalyan Sunkavalli, Manmohan Chandraker

Useful links:

  • Project page
  • Results on our new dataset

Overview

This is the official code release for the paper Inverse Rendering for Complex Indoor Scenes: Shape, Spatially-Varying Lighting and SVBRDF From a Single Image. The original models were trained by extending the SUNCG dataset with an SVBRDF mapping. Since SUNCG is no longer available due to copyright issues, we cannot release the original models. Instead, we rebuilt a new high-quality synthetic indoor scene dataset and trained our models on it; we will release the new dataset in the near future. The geometry configurations of the new dataset are based on ScanNet [1], a large-scale repository of 3D scans of real indoor scenes. Some example images can be found below, and a video is at this link.

Inverse rendering results of the models trained on the new dataset are shown below, followed by scene editing results on real images, including object insertion and material editing. Models trained on the new dataset achieve performance comparable to our previous models. Quantitative comparisons are listed below, where [Li20] denotes our previous models trained on the extended SUNCG dataset.

Download the trained models

The trained models can be downloaded from this link. To test the models, please copy them to the same directory as the code and run the commands shown below.

Train and test on the synthetic dataset

To train the full models on the synthetic dataset, please run the following commands in order; a driver sketch that chains them is shown after the list.

  • python trainBRDF.py --cuda --cascadeLevel 0 --dataRoot DATA: Train the first cascade of MGNet.
  • python trainLight.py --cuda --cascadeLevel 0 --dataRoot DATA: Train the first cascade of LightNet.
  • python trainBRDFBilateral.py --cuda --cascadeLevel 0 --dataRoot DATA: Train the bilateral solvers.
  • python outputBRDFLight.py --cuda --dataRoot DATA: Output the intermediate predictions, which will be used to train the second cascade.
  • python trainBRDF.py --cuda --cascadeLevel 1 --dataRoot DATA: Train the second cascade of MGNet.
  • python trainLight.py --cuda --cascadeLevel 1 --dataRoot DATA: Train the second cascade of LightNet.
  • python trainBRDFBilateral.py --cuda --cascadeLevel 1 --dataRoot DATA: Train the bilateral solvers for the second cascade.
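Each command above corresponds to one training stage, and later stages depend on the outputs of earlier ones. As a convenience, the stages can be chained with a small driver script. The sketch below is hypothetical (it is not part of the repo) and only re-issues the documented commands in order; DATA is a placeholder for your dataset root, exactly as in the commands above.

    # Hypothetical driver: runs the documented training stages in order.
    import subprocess

    DATA = '/path/to/dataset'  # placeholder, set to your dataset root

    commands = [
        'python trainBRDF.py --cuda --cascadeLevel 0 --dataRoot {d}',
        'python trainLight.py --cuda --cascadeLevel 0 --dataRoot {d}',
        'python trainBRDFBilateral.py --cuda --cascadeLevel 0 --dataRoot {d}',
        'python outputBRDFLight.py --cuda --dataRoot {d}',
        'python trainBRDF.py --cuda --cascadeLevel 1 --dataRoot {d}',
        'python trainLight.py --cuda --cascadeLevel 1 --dataRoot {d}',
        'python trainBRDFBilateral.py --cuda --cascadeLevel 1 --dataRoot {d}',
    ]
    for cmd in commands:
        # check=True aborts the pipeline as soon as one stage fails,
        # so a broken cascade-0 run is not silently fed into cascade 1.
        subprocess.run(cmd.format(d=DATA).split(), check=True)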

To test the full models on the synthetic dataset, please run the commands

  • python testBRDFBilateral.py --cuda --dataRoot DATA: Test the BRDF and geometry predictions.
  • python testLight.py --cuda --cascadeLevel 0 --dataRoot DATA: Test the light predictions of the first cascade.
  • python testLight.py --cuda --cascadeLevel 1 --dataRoot DATA: Test the light predictions of the second cascade.

Train and test on IIW dataset for intrinsic decomposition

To train on the IIW dataset, please first train on the synthetic dataset and then run the command:

  • python trainFineTuneIIW.py --cuda --dataRoot DATA --IIWRoot IIW: Fine-tune the network on the IIW dataset.

To test the network on the IIW dataset, please run the commands

  • bash runIIW.sh: Output the predictions for the IIW dataset.
  • python CompareWHDR.py: Compute the WHDR on the predictions.

Please remember to fix the data paths in runIIW.sh and CompareWHDR.py.
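For reference, WHDR is the Weighted Human Disagreement Rate: the weighted fraction of IIW human judgements ("point 1 darker", "point 2 darker", or "roughly equal") that the predicted albedo contradicts. The sketch below is a hypothetical re-implementation of the standard metric assuming the IIW JSON annotation format; it is not the repo's CompareWHDR.py.

    import numpy as np

    def whdr(reflectance, judgements, delta=0.10):
        # reflectance: HxW (or HxWx3) predicted albedo; judgements: the
        # IIW JSON dict for one image; delta: the usual 10% ratio threshold.
        if reflectance.ndim == 3:
            reflectance = reflectance.mean(axis=2)
        h, w = reflectance.shape
        points = {p['id']: p for p in judgements['intrinsic_points']}
        err, total = 0.0, 0.0
        for c in judgements['intrinsic_comparisons']:
            darker, weight = c['darker'], c['darker_score']
            if darker not in ('1', '2', 'E') or weight is None or weight <= 0:
                continue
            p1, p2 = points[c['point1']], points[c['point2']]
            # point coordinates are normalized to [0, 1] in IIW
            l1 = max(reflectance[min(int(p1['y'] * h), h - 1),
                                 min(int(p1['x'] * w), w - 1)], 1e-10)
            l2 = max(reflectance[min(int(p2['y'] * h), h - 1),
                                 min(int(p2['x'] * w), w - 1)], 1e-10)
            if l2 / l1 > 1.0 + delta:
                pred = '1'      # point 1 predicted darker
            elif l1 / l2 > 1.0 + delta:
                pred = '2'      # point 2 predicted darker
            else:
                pred = 'E'      # predicted roughly equal
            if pred != darker:
                err += weight   # disagreement, weighted by annotator confidence
            total += weight
        return err / total if total > 0 else 0.0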

Train and test on NYU dataset for geometry prediction

To train on the NYU dataset, please first train on the synthetic dataset and then run the commands:

  • python trainFineTuneNYU.py --cuda --dataRoot DATA --NYURoot NYU: Fine-tune the first cascade of the network on the NYU dataset.
  • python trainFineTuneNYU_casacde1.py --cuda --dataRoot DATA --NYURoot NYU: Fine-tune the second cascade of the network on the NYU dataset.

To test the network on the NYU dataset, please run the commands

  • bash runNYU.sh: Output the predictions for the NYU dataset.
  • python CompareNormal.py: Compute the normal error on the predictions.
  • python CompareDepth.py: Compute the depth error on the predictions.

Please remember to fix the data paths in runNYU.sh, CompareNormal.py and CompareDepth.py.
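For reference, the usual forms of these metrics are the mean angular error between unit normal maps, and RMSE plus mean relative error for depth. The sketch below is a hypothetical illustration, not the repo's CompareNormal.py or CompareDepth.py; the actual scripts may differ in masking and scale handling.

    import numpy as np

    def mean_angular_error(pred_normals, gt_normals, mask):
        # pred_normals, gt_normals: HxWx3; mask: HxW boolean of valid pixels.
        pred = pred_normals / np.maximum(
            np.linalg.norm(pred_normals, axis=2, keepdims=True), 1e-10)
        gt = gt_normals / np.maximum(
            np.linalg.norm(gt_normals, axis=2, keepdims=True), 1e-10)
        cos = np.clip((pred * gt).sum(axis=2), -1.0, 1.0)
        return np.degrees(np.arccos(cos[mask])).mean()

    def depth_errors(pred_depth, gt_depth, mask):
        # Median scale alignment is common for scale-ambiguous monocular
        # predictions; the repo's evaluation protocol may differ.
        pred, gt = pred_depth[mask], gt_depth[mask]
        pred = pred * np.median(gt) / np.median(pred)
        rmse = np.sqrt(((pred - gt) ** 2).mean())
        rel = (np.abs(pred - gt) / gt).mean()
        return rmse, rel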

Train and test on Garon19 [2] dataset for object insertion

There is no fine-tuning for the Garon19 dataset. To test the network, download the images from this link and then run bash runReal20.sh. Please remember to fix the data paths in runReal20.sh.

All object insertion results and comparisons with prior works can be found at this link. The code to run object insertion can be found at this link.
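The key ingredient of lighting-aware object insertion is shading the inserted geometry with the predicted per-pixel incident lighting. The sketch below is only a hypothetical illustration of that idea for a single Lambertian surface point, via Monte Carlo integration over the hemisphere; env_map, albedo, and normal are assumed inputs, and the code linked above implements the full method.

    import numpy as np

    def lambertian_shading(env_map, albedo, normal, n_samples=1024, seed=0):
        # Monte Carlo estimate of outgoing radiance at a Lambertian point:
        #   L_o = (albedo / pi) * integral over hemisphere of L_i(w) cos(w) dw
        # env_map: callable mapping a unit direction (3,) to RGB radiance (3,);
        # it stands in for the predicted environment map at the insertion
        # point. All names and shapes here are hypothetical.
        rng = np.random.default_rng(seed)
        d = rng.normal(size=(n_samples, 3))
        d /= np.linalg.norm(d, axis=1, keepdims=True)   # uniform on sphere
        cos = d @ normal
        front = cos > 0                                 # keep hemisphere of n
        radiance = np.stack([env_map(w) for w in d[front]])         # Mx3
        # uniform-sphere pdf is 1/(4*pi); rejected samples contribute zero
        integral = 4.0 * np.pi * (radiance * cos[front][:, None]).sum(0) / n_samples
        return albedo / np.pi * integral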

Differences from the original paper

The current implementation has three major differences from the original CVPR 2020 implementation.

  • In the new models, we do not use spherical Gaussian parameters generated by optimization for supervision, mainly because the optimization process is time-consuming and we have not finished it yet. We will update the code once it is done. Performance with spherical Gaussian supervision is expected to be better; a sketch of the spherical Gaussian lighting representation follows this list.
  • The resolution of the second cascade is changed from 480x640 to 240x320. We find that the networks generate smoother results at the lower resolution.
  • We remove the light source segmentation mask as an input. It does not have a major impact on the final results.
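Regarding the first point, the lighting representation is a mixture of spherical Gaussians, where each lobe has the standard form G(w) = mu * exp(lambda * (dot(w, xi) - 1)) with unit axis xi, sharpness lambda, and RGB amplitude mu. The sketch below evaluates such a mixture; shapes and names are hypothetical, for illustration only.

    import numpy as np

    def eval_spherical_gaussians(directions, xi, lam, mu):
        # directions: Nx3 unit query directions; xi: Kx3 unit lobe axes;
        # lam: (K,) sharpnesses; mu: Kx3 RGB amplitudes.
        cos = directions @ xi.T                  # NxK lobe/query cosines
        g = np.exp(lam[None, :] * (cos - 1.0))   # NxK lobe responses
        return g @ mu                            # Nx3 summed RGB radiance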

References

[1] Dai, A., Chang, A. X., Savva, M., Halber, M., Funkhouser, T., & Nießner, M. (2017). ScanNet: Richly-annotated 3D reconstructions of indoor scenes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 5828-5839).

[2] Garon, M., Sunkavalli, K., Hadap, S., Carr, N., & Lalonde, J. F. (2019). Fast spatially-varying indoor lighting estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 6908-6917).
