Overview

This package contains a PyTorch implementation of IB-GAN, presented in the AAAI 2021 paper "IB-GAN: Disentangled Representation Learning with Information Bottleneck Generative Adversarial Networks".

You can reproduce the experiments on the dSprites, Color-dSprites, 3DChairs, and CelebA datasets with this code.

The current implementation is based on pytorch==1.4.0. Please refer to environments.yml for the environment settings.
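For example, the environment can typically be created with conda (assuming environments.yml describes a conda environment):

conda env create -f environments.yml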

Please refer to the Technical Appendix for more detailed information on the hyperparameter settings for each experiment.

Contents

  • Main code for dSprites (and Color-dSprites): "main.py"

  • IB-GAN model for dSprites (and Color-dSprites): "./model/model.py"

  • Disentanglement evaluation code for dSprites (and Color-dSprites): "evaluator.py", "checkout_scores.ipynb"

  • Main code for 3DChairs (and CelebA): "main2.py"

  • IB-GAN model for 3DChairs (and CelebA): "./model/model2.py"

Visdom for visualization

Since the default Visdom option for main.py is True, you should first run the Visdom server before executing the main program by typing

python -m visdom.server -p 8097

Then you can observe the convergence plots and generated samples for each training iteration at

localhost:8097

Reproducing dSprite experiment

  • dSprite dataset : "./data/dsprites-dataset/dsprites_ndarray_co1sh3sc6or40x32y32_64x64.npz"
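If the npz file is not present, it can be obtained from the public dsprites-dataset repository, for example:

git clone https://github.com/deepmind/dsprites-dataset.git ./data/dsprites-dataset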

You can reproduce the dSprite experiment by typing:

python -W ignore main.py --seed 7 --z_dim 16 --r_dim 10 --batch_size 64 --optim rmsprop --dataset dsprites --viz True --viz_port 8097 --z_bias 0 --viz_name dsprites --beta 0.141 --alpha 1 --gamma 1 --G_lr 5e-5 --D_lr 1e-6 --max_iter 150000 --logiter 500 --ptriter 2500 --ckptiter 2500 --load_ckpt -1 --init_type normal --save_img True

Note that all default parameter settings in "main.py" are optimally set up for the dSprite experiment. For more details on the parameter settings for other datasets, please refer to the Technical Appendix.

  • dSprite dataset for Kim's disentanglement score evaluation: the evaluation file is currently not available (it will be updated soon). The evaluation process and code are the same as for the Color-dSprite experiment.

Reproducing Color-dSprite experiment

  • Color-dSprite dataset : Color dSprite Dataset is currently not available.

However, you can create the Color-dSprites dataset yourself by modifying the RGB channels of the original dSprites dataset; a sketch is given below.

Each RGB channel takes one of 8 discrete values: [0.00, 36.42, 72.85, 109.28, 145.71, 182.14, 218.57, 255.00].

Then move the resulting Color-dSprites npz file (e.g. cdsprites_ndarray_co1sh3sc6or40x32y32_64x64.npz) to the folder ./data/dsprites-dataset/.
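A minimal colorization sketch is shown below (this is an illustrative assumption, not the authors' official preprocessing: the random color assignment, output file name, and npz keys may differ from what main.py expects):

import numpy as np

# Draw each RGB channel independently from the 8 discrete values above and
# apply one random color per binary dSprites image.
values = np.array([0.00, 36.42, 72.85, 109.28, 145.71, 182.14, 218.57, 255.00])

data = np.load('./data/dsprites-dataset/dsprites_ndarray_co1sh3sc6or40x32y32_64x64.npz',
               allow_pickle=True, encoding='latin1')
imgs = data['imgs'].astype(np.uint8)          # (N, 64, 64), binary {0, 1}

rng = np.random.default_rng(0)
colors = rng.choice(values, size=(imgs.shape[0], 3)).astype(np.uint8)   # one RGB color per image
color_imgs = imgs[..., None] * colors[:, None, None, :]                 # (N, 64, 64, 3), uint8

# Note: this materializes the full colored array in memory; process in chunks if needed.
np.savez_compressed('./data/dsprites-dataset/cdsprites_ndarray_co1sh3sc6or40x32y32_64x64.npz',
                    imgs=color_imgs)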

Run the code with the following arguments:

python -W ignore main.py --seed 7 --z_dim 16 --r_dim 10 --batch_size 64 --optim rmsprop --dataset cdsprites --viz True --viz_port 8097 --z_bias 0 --viz_name dsprites --beta 0.071 --alpha 1 --gamma 1 --G_lr 5e-5 --D_lr 1e-6 --max_iter 500000 --logiter 500 --ptriter 2500 --ckptiter 2500 --load_ckpt -1 --init_type normal --save_img True
  • Color-dSprite dataset for Kim's disentanglement score evaluation: "./data/imgs4eval_cdsprites.7z".

You first need to extract the "imgs4eval_cdsprites.7z" file using 7za. Place all the extracted files in the "./data/imgs4eval_cdsprites/" folder.
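For example (the archive layout is assumed; adjust the output path if the folder structure differs):

7za x ./data/imgs4eval_cdsprites.7z -o./data/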

To run the evaluation with Kim's disentanglement metric, type

python evaluator.py --dset_dir data/imgs4eval_cdsprites --logiter 5000 --lastiter 500000 --name main

After the evaluation for each checkpoint is done, you can see the overall disentanglement scores with the "checkout_scores.ipynb" (Jupyter notebook) file, or you can simply type

import torch

# load the disentanglement scores saved during evaluation
scores = torch.load('checkpoint/main/result.metric')
print(scores)

to see the scores in the Python console.

Reproducing CelebA experiment

  • CelebA dataset : please download the CelebA dataset and prepare 64x64 center-cropped image files in the folder ./data/CelebA/cropped_64 (a preprocessing sketch is given below)
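A minimal preprocessing sketch is shown below, assuming the aligned CelebA images are under ./data/CelebA/img_align_celeba (this source path, the square center crop, and bicubic resizing are assumptions; the exact preprocessing used in the paper may differ):

import os
from PIL import Image

src = './data/CelebA/img_align_celeba'   # assumed location of the raw aligned images
dst = './data/CelebA/cropped_64'
os.makedirs(dst, exist_ok=True)

for name in os.listdir(src):
    if not name.lower().endswith(('.jpg', '.jpeg', '.png')):
        continue
    img = Image.open(os.path.join(src, name))
    w, h = img.size
    side = min(w, h)
    left, top = (w - side) // 2, (h - side) // 2
    # square center crop, then resize to 64x64
    img = img.crop((left, top, left + side, top + side)).resize((64, 64), Image.BICUBIC)
    img.save(os.path.join(dst, name))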

Then run the code with the following arguments:

python -W ignore main2.py --seed 0 --z_dim 64 --r_dim 15 --batch_size 64 --optim rmsprop --dataset celeba --viz_port 8097 --z_bias 0 --r_weight 0 --viz_name celeba --beta 0.35 --alpha 1 --gamma 1 --max_iter 1000000 --G_lr 5e-5 --D_lr 2e-6 --R_lr 5e-5 --ckpt_dir checkpoint --output_dir output --logiter 500 --ptriter 20000 --ckptiter 20000 --ngf 64 --ndf 64 --label_smoothing True --instance_noise_start 0.5 --instance_noise_end 0.01 --init_type orthogonal

Reproducing 3dChairs experiment

  • 3dChairs dataset : please download the 3DChairs dataset and move the image files into the folder ./data/3DChairs/images

Then run the code with the following arguments:

python -W ignore main2.py --seed 0 --z_dim 64 --r_dim 10 --batch_size 64 --optim rmsprop --dataset 3dchairs --viz_port 8097 --z_bias 0 --r_weight 0 --viz_name 3dchairs --beta 0.325 --alpha 1 --gamma 1 --max_iter 700000 --G_lr 5e-5 --D_lr 2e-6 --R_lr 5e-5 --ckpt_dir checkpoint --output_dir output --logiter 500 --ptriter 20000 --ckptiter 20000 --ngf 32 --ndf 32 --label_smoothing True --instance_noise_start 0.5 --instance_noise_end 0.01 --init_type orthogonal

Citing IB-GAN

If you like this work and end up using IB-GAN for your research, please cite our paper with the following BibTeX entry:

@inproceedings{jeon2021ib,
  title={IB-GAN: Disentangled Representation Learning with Information Bottleneck Generative Adversarial Networks},
  author={Jeon, Insu and Lee, Wonkwang and Pyeon, Myeongjang and Kim, Gunhee},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={35},
  number={9},
  pages={7926--7934},
  year={2021}
}

Disclosure and use of the currently published code are limited to research purposes only.

Owner
Insu Jeon
Stay hungry, stay foolish.