Datasets for a new state-of-the-art challenge in disentanglement learning

Overview

High-resolution disentanglement datasets

This repository contains the Falcor3D and Isaac3D datasets, which present a state-of-the-art challenge for controllable generation in terms of image resolution, photorealism, and richness of style factors, as compared to existing disentanglement datasets.

Falcor3D

The Falcor3D dataset consists of 233,280 images based on the 3D scene of a living room, where each image has a resolution of 1024x1024. The meta code corresponds to all possible combinations of 7 factors of variation:

  • lighting_intensity (5)
  • lighting_x-dir (6)
  • lighting_y-dir (6)
  • lighting_z-dir (6)
  • camera_x-pos (6)
  • camera_y-pos (6)
  • camera_z-pos (6)

Note that the number m after each factor indicates that the factor takes m possible values, uniformly sampled in the normalized range of variation [0, 1].

Each image is named padded_index.png, where

index = lighting_intensity * 46656 + lighting_x-dir * 7776 + lighting_y-dir * 1296 + 
lighting_z-dir * 216 + camera_x-pos * 36 + camera_y-pos * 6 + camera_z-pos

padded_index = index padded with zeros such that it has 6 digits.
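
For illustration, the mapping from factor value indices to filenames can be reproduced with a small Python sketch like the one below. The helper name falcor3d_filename is an assumption for this example and is not part of the repository; the coefficients are exactly those of the index formula above, and each argument is the 0-based index of a factor value, not its normalized value in [0, 1].

def falcor3d_filename(lighting_intensity, lighting_x_dir, lighting_y_dir, lighting_z_dir,
                      camera_x_pos, camera_y_pos, camera_z_pos):
    # Mixed-radix index: each coefficient is the product of the sizes of all later factors.
    index = (lighting_intensity * 46656 + lighting_x_dir * 7776 + lighting_y_dir * 1296
             + lighting_z_dir * 216 + camera_x_pos * 36 + camera_y_pos * 6 + camera_z_pos)
    return f"{index:06d}.png"  # zero-padded to 6 digits

# Example: all factors at their first value -> "000000.png";
# incrementing camera_z-pos by one -> "000001.png".
print(falcor3d_filename(0, 0, 0, 0, 0, 0, 0))
print(falcor3d_filename(0, 0, 0, 0, 0, 0, 1))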

To see the Falcor3D images by varying each factor of variation individually, you can run

python dataset_demo.py --dataset Falcor3D

and the results are saved in the examples/falcor3d_samples folder.

You can also check out the Falcor3D images here: falcor3d_samples_demo, which includes all the ground-truth latent traversals.

Isaac3D

The Isaac3D dataset consists of 737,280 images, based on the 3D scene of a kitchen, where each image has a resolution of 512x512. The meta code corresponds to all possible combinations of 9 factors of variation:

  • object_shape (3)
  • object_scale (4)
  • camera_height (4)
  • robot_x-movement (8)
  • robot_y-movement (5)
  • lighting_intensity (4)
  • lighting_y-dir (6)
  • object_color (4)
  • wall_color (4)

Similarly, the number m after each factor indicates that the factor takes m possible values, uniformly sampled in the normalized range of variation [0, 1].

Each image is named padded_index.png, where

index = object_shape * 245760 + object_scale * 61440 + camera_height * 15360 + 
robot_x-movement * 1920 + robot_y-movement * 384 + lighting_intensity * 96 + 
lighting_y-dir * 16 + object_color * 4 + wall_color

padded_index = index padded with zeros such that it has 6 digits.
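
Conversely, an index parsed from a filename can be decoded back into its 9 factor value indices with a short mixed-radix sketch, shown below. The helper name decode_isaac3d_index is an assumption for this example; only the factor sizes and their ordering are taken from the list above (the last factor, wall_color, is the least significant digit).

ISAAC3D_FACTOR_SIZES = (3, 4, 4, 8, 5, 4, 6, 4, 4)  # sizes of the 9 factors, in the order listed above

def decode_isaac3d_index(index, factor_sizes=ISAAC3D_FACTOR_SIZES):
    # Peel off the least significant factor first, then reverse to the listed order.
    values = []
    for size in reversed(factor_sizes):
        values.append(index % size)
        index //= size
    return tuple(reversed(values))

# Example: index 0 is every factor at its first value; 737279 is the last image.
print(decode_isaac3d_index(0))       # (0, 0, 0, 0, 0, 0, 0, 0, 0)
print(decode_isaac3d_index(737279))  # (2, 3, 3, 7, 4, 3, 5, 3, 3)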

To see the Isaac3D images by varying each factor of variation individually, you can run

python dataset_demo.py --dataset Isaac3D

and the results are saved in the examples/isaac3d_samples folder.

You can also check out the Isaac3D images here: isaac3d_samples_demo, which includes all the ground-truth latent traversals.

Links to datasets

The two datasets can be downloaded from Google Drive:

  • Falcor3D (98 GB): link
  • Isaac3D (190 GB): link

We also provide downsampled versions (resolution 128x128) of the two datasets:

  • Falcor3D_128x128 (3.7 GB): link
  • Isaac3D_128x128 (13 GB): link

License

This work is licensed under a Creative Commons Attribution 4.0 International License by NVIDIA Corporation (https://creativecommons.org/licenses/by/4.0/).
