PyTorch implementation of MulMON

Overview


This repository contains a PyTorch implementation of the paper:
Learning Object-Centric Representations of Multi-object Scenes from Multiple Views

Li Nanbo, Cian Eastwood, Robert B. Fisher
NeurIPS 2020 (Spotlight)

Working examples

Check our video presentation for more: https://youtu.be/Og2ic2L77Pw.

Requirements

Hardware:

  • GPU: currently, at least one GPU device is required to run this code; we may add CPU demo code in the future.
  • Disk space: we impose no hard disk-space requirement; it is entirely data-dependent. Using all the datasets we provide requires ~9GB of disk space, but it is not necessary to use all of them (or even our datasets at all); see the Data section for more details.

Python Environment:

  1. We use Anaconda to manage our Python environment. See the conda installation guide here: https://docs.anaconda.com/anaconda/install/linux/.

  2. Open a new terminal and change to the MulMON directory:

cd <YOUR-PATH-TO-MulMON>/MulMON/

create a new conda environment called "mulmon" and then activate it:

conda env create -f ./conda-env-spec.yml  
conda activate mulmon
  3. Install a GPU-supported PyTorch (tested with PyTorch 1.1, 1.2 and 1.7). A PyTorch build compatible with both your CUDA version and this code very likely exists; find it on the official PyTorch site and install it with a single command. (A quick environment sanity check is sketched after this list.)

  4. Install additional packages:

pip install tensorboard  
pip install scikit-image

If PyTorch <= 1.2 is used, you will also need to run pip install tensorboardX and import it in the ./trainer/base_trainer.py file. This is done by commenting out the 4th line AND uncommenting the 5th line of that file.
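
As a quick environment sanity check (see step 3 above), one can run a short script like the sketch below. The file name check_env.py is ours, not part of this repository, and the try/except mirrors the tensorboard/tensorboardX switch just described:

# check_env.py -- illustrative environment sanity check, not part of the repo
import torch

print("PyTorch:", torch.__version__)
assert torch.cuda.is_available(), "a CUDA-enabled PyTorch build and a GPU are required"
print("CUDA device:", torch.cuda.get_device_name(0))

try:
    # PyTorch >= 1.1 with tensorboard installed (the 4th line of base_trainer.py)
    from torch.utils.tensorboard import SummaryWriter
except ImportError:
    # fallback for PyTorch <= 1.2 (the 5th line of base_trainer.py)
    from tensorboardX import SummaryWriter
print("SummaryWriter import OK")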

Data

  • Data structure (important):
    We use a data structure as follows:

    <YOUR-PATH>                                          
        ├── ...
        └── mulmon_datasets
              ├── clevr                                   # place your own CLEVR-MV under this directory if you go the fun way
              │    ├── ...
              │    ├── clevr_mv            
              │    │    └── ... (omit)                    # see clevr_<xxx> for subdirectory details
              │    ├── clevr_aug           
              │    │    └── ... (omit)                    # see clevr_<xxx> for subdirectory details
              │    └── clevr_<xxx>
              │         ├── ...
              │         ├── data                          # contains a list of scene files
              │         │    ├── CLEVR_new_#.npy          # one .npy --> one scene sample
              │         │    ├── CLEVR_new_#.npy       
              │         │    └── ...
              │         ├── clevr_<xxx>_train.json        # meta information of the training scenes
              │         └── clevr_<xxx>_test.json         # meta information of the testing scenes  
              └── GQN  
                   ├── ...
                   └── gqn-jaco                 
                        ├── gqn_jaco_train.h5
                        └── gqn_jaco_test.h5
    

    We recommend creating the necessary data folders before downloading/generating the data files:

    mkdir <YOUR-PATH>/mulmon_datasets  
    mkdir <YOUR-PATH>/mulmon_datasets/clevr  
    mkdir <YOUR-PATH>/mulmon_datasets/GQN
    
  • Get Datasets

    • Easy way:
      Download our datasets:

      • clevr_mv.tar.gz and place it under the <YOUR-PATH>/mulmon_datasets/clevr/ directory (~1.8GB when extracted).
      • clevr_aug.tar.gz and place it under the <YOUR-PATH>/mulmon_datasets/clevr/ directory (~3.8GB when extracted).
      • gqn_jaco.tar.gz and place it under the <YOUR-PATH>/mulmon_datasets/GQN/ directory (~3.2GB when extracted).

      and extract them in place. For example, the command for extracting clevr_mv.tar.gz is:

      tar -zxvf <YOUR-PATH>/mulmon_datasets/clevr/clevr_mv.tar.gz -C <YOUR-PATH>/mulmon_datasets/clevr/
      

      Note that: 1) we used only a subset of the DeepMind GQN-Jaco dataset (more is available at deepmind/gqn-datasets), and 2) the published clevr_aug dataset differs slightly from the CLE-Aug used in the paper: we added more shapes (such as dolphins) to make the dataset more interesting (and more complex). A quick way to peek inside the downloaded files is sketched at the end of this section.

    • Fun way:
      Customise your own multi-view CLEVR data. (available soon...)
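
    To sanity-check the downloaded files, one can peek inside them along the lines of the sketch below. This is a hedged example: CLEVR_new_000000.npy is a placeholder for one of the CLEVR_new_#.npy scene files, the internal layout of the files is not documented here, and h5py is our assumption for reading the .h5 files.

    # peek_data.py -- illustrative inspection of the downloaded data files
    import numpy as np
    import h5py  # assumption: a generic HDF5 reader suffices for the .h5 files

    # one CLEVR-MV scene (the file name is a placeholder for CLEVR_new_#.npy)
    scene = np.load("<YOUR-PATH>/mulmon_datasets/clevr/clevr_mv/data/CLEVR_new_000000.npy",
                    allow_pickle=True)
    print(type(scene), getattr(scene, "shape", None))
    if scene.dtype == object and scene.shape == ():
        obj = scene.item()             # unwrap a pickled object, if any
        if isinstance(obj, dict):
            print(sorted(obj.keys()))  # list the stored fields

    # the GQN-Jaco training set: print every group/dataset name in the file
    with h5py.File("<YOUR-PATH>/mulmon_datasets/GQN/gqn-jaco/gqn_jaco_train.h5", "r") as f:
        f.visit(print)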

Pre-trained models

Download the pretrained models (← click) and place the archive under MulMON/, i.e. the root directory of this repository; then extract it by executing tar -zxvf ./logs.tar.gz. Note that some of the models are slightly under-trained, so one could train them further to achieve better results (see the Train subsection below).

Usage

Configure data path
To run the code, the data path, i.e. the <YOUR-PATH> in a script, needs to be configured correctly. For example, suppose we store the MulMON dataset folder mulmon_datasets in ../myDatasets/; to train MulMON on the GQN-Jaco dataset using a single GPU, the 4th line of the ./scripts/train_jaco.sh script should then read: data_path=../myDatasets/mulmon_datasets/GQN.
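
A quick, hedged way to check the path from the MulMON root is a snippet like the one below (the ../myDatasets/ prefix is just the example location used above):

from pathlib import Path

# verify the configured data path before launching a training script (illustrative)
data_path = Path("../myDatasets/mulmon_datasets/GQN")
assert data_path.exists(), f"data path not found: {data_path.resolve()}"
print(sorted(p.name for p in data_path.iterdir()))  # expect e.g. gqn-jaco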

  • Demo (Environment Test)
    Before running the code below, make sure the pretrained models have been downloaded and saved (see Pre-trained models above):

    . scripts/demo.sh  
    

    Check ./logs folder for the generated demos.

    • Notes for disentanglement demos: we randomly pick one object per scene to create the disentanglement demo, so for scene samples where an empty object slot is picked, you won't see any object-manipulation effect in the corresponding GIFs (especially for the GQN-Jaco scenes). To create a demo like the one shown, one needs to specify (hard-code) an object slot of interest and traverse the informative latent dimensions (some dimensions are redundant and capture no object property); a sketch of such a traversal follows.
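      The sketch below shows the general shape of such a traversal. It is hypothetical, not MulMON's actual interface: model.decode, the [K, D] per-slot latent layout, and the traversal range are all assumptions.

      # illustrative latent-dimension traversal for a disentanglement demo
      import torch

      def traverse(model, z, slot, dim, low=-2.0, high=2.0, steps=8):
          # z: [K, D] per-slot latent codes for a single scene (assumed layout)
          frames = []
          for value in torch.linspace(low, high, steps):
              z_mod = z.clone()
              z_mod[slot, dim] = value            # overwrite the chosen dimension
              frames.append(model.decode(z_mod))  # hypothetical decoder call
          return frames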
  • Train

    • On a single GPU (e.g. using the GQN-Jaco dataset):
    . scripts/train_jaco.sh  
    
    • On multiple GPUs (e.g. using the GQN-Jaco dataset):
    . scripts/train_jaco_parallel.sh  
    
    • To resume training from a stopped session, i.e. from saved weights checkpoint-epoch<#number>.pth, simply append the flag --resume_epoch <#number> to the flags in the script file.
      For example, to resume a previous training run (saved as checkpoint-epoch2000.pth) on the GQN-Jaco data, reconfigure the 10th line of ./scripts/train_jaco.sh as:
      --input_dir ${data_path} --output_dir ${log_path} --resume_epoch 2000 \.
  • Evaluation

    • On a single GPU (e.g. using the CLEVR-MV dataset):
    . scripts/eval_clevr.sh  
    
    • Here is a list of important evaluation settings one might want to play with:
      --resume_epoch: specify which saved model checkpoint to evaluate.
      --test_batch: how many batches of test data to use for evaluation.
      --vis_batch: how many batches of output to visualise (save) during evaluation. (note: <= --test_batch)
      --analyse_batch: how many batches of latent codes to save for post-hoc analysis, e.g. disentanglement. (note: <= --test_batch)
      --eval_all: (boolean) set True to enable all of [--eval_recon, --eval_seg, --eval_qry_obs, --eval_qry_seg]; each of the four can also be used independently.
      --eval_dist: (boolean) save latent codes for disentanglement analysis. (note: not controlled by --eval_all)
    • For the disentanglement evaluation, run the scripts/eval_clevr.sh script with the --eval_dist flag set to True and the --analyse_batch variable (which controls how many scenes' latent codes to analyse) set greater than 0. This saves the output latent codes and the ground-truth information needed to conduct disentanglement quantification with the QEDR framework.
    • You might observe that the evaluation results on the CLE-Aug dataset differ from those in the original paper; this is because the CLE-Aug published here is slightly different from the one used for the paper (see the note in the Data section).

Contact

We regularly respond to issues raised on this repository that concern running the code. For further inquiries and discussions (e.g. questions about the paper), email: [email protected].

Cite

Please cite our paper if you find this code useful.

@inproceedings{nanbo2020mulmon,
  title={Learning Object-Centric Representations of Multi-Object Scenes from Multiple Views},
  author={Nanbo, Li and Eastwood, Cian and Fisher, Robert B},
  booktitle={Advances in Neural Information Processing Systems},
  year={2020}
}