Energy-based Conditional Generative Adversarial Network (ECGAN)

Overview

This is the code for the NeurIPS 2021 paper "A Unified View of cGANs with and without Classifiers". The repository is modified from StudioGAN. If you find our work useful, please consider citing the following paper:

@inproceedings{chen2021ECGAN,
  title   = {A Unified View of cGANs with and without Classifiers},
  author  = {Si-An Chen and Chun-Liang Li and Hsuan-Tien Lin},
  booktitle = {Advances in Neural Information Processing Systems},
  year    = {2021}
}

Please feel free to contact Si-An Chen if you have any questions about the code/paper.

Introduction

We propose a new Conditional Generative Adversarial Network (cGAN) framework called Energy-based Conditional Generative Adversarial Network (ECGAN) which provides a unified view of cGANs and achieves state-of-the-art results. We use the decomposition of the joint probability distribution to connect the goals of cGANs and classification as a unified framework. The framework, along with a classic energy model to parameterize distributions, justifies the use of classifiers for cGANs in a principled manner. It explains several popular cGAN variants, such as ACGAN, ProjGAN, and ContraGAN, as special cases with different levels of approximations. An illustration of the framework is shown below.
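To make the decomposition concrete, here is a minimal, illustrative PyTorch sketch (not the repository's API; all names are placeholders) of how a single set of per-class discriminator logits f(x) can serve three roles at once under log p(x, y) = log p(y | x) + log p(x): the selected logit acts as a joint score, the log-sum-exp over classes acts as an unconditional score, and the softmax over classes is the classifier.

import torch
import torch.nn.functional as F

def scores_from_logits(logits: torch.Tensor, labels: torch.Tensor):
    """Illustrative only: derive the three quantities of the unified view from
    per-class logits f(x) with shape (batch, num_classes).

    - joint score:     f(x)[y]              ~ log p(x, y) up to a constant
    - marginal score:  logsumexp_y f(x)[y]  ~ log p(x)    up to a constant
    - classifier:      softmax_y f(x)[y]    =  p(y | x)
    """
    joint = logits.gather(1, labels.unsqueeze(1)).squeeze(1)   # f(x)[y]
    marginal = torch.logsumexp(logits, dim=1)                  # log sum_y exp f(x)[y]
    log_p_y_given_x = F.log_softmax(logits, dim=1)             # equals joint - marginal, per class
    return joint, marginal, log_p_y_given_x

# Toy usage with random logits standing in for a discriminator's output.
logits = torch.randn(4, 10)            # batch of 4, 10 classes
labels = torch.randint(0, 10, (4,))
joint, marginal, log_cls = scores_from_logits(logits, labels)

Different cGAN variants can then be read as using different subsets or approximations of these terms; see the paper for the precise correspondence.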

Requirements

  • Anaconda
  • Python >= 3.6
  • 6.0.0 <= Pillow <= 7.0.0
  • scipy == 1.1.0 (Recommended for fast loading of Inception Network)
  • sklearn
  • seaborn
  • h5py
  • tqdm
  • torch >= 1.6.0 (Recommended for mixed precision training and knn analysis)
  • torchvision >= 0.7.0
  • tensorboard
  • 5.4.0 <= gcc <= 7.4.0 (Recommended for proper use of adaptive discriminator augmentation module)

You can install the recommended environment as follows:

conda env create -f environment.yml -n studiogan

With docker, you can use:

docker pull mgkang/studiogan:0.1

Quick Start

  • Train (-t) and evaluate (-e) the model defined in CONFIG_PATH using GPU 0
CUDA_VISIBLE_DEVICES=0 python3 src/main.py -t -e -c CONFIG_PATH
  • Train (-t) and evaluate (-e) the model defined in CONFIG_PATH using GPUs (0, 1, 2, 3) and DataParallel
CUDA_VISIBLE_DEVICES=0,1,2,3 python3 src/main.py -t -e -c CONFIG_PATH

Try python3 src/main.py to see available options.

Dataset

  • CIFAR10: StudioGAN will automatically download the dataset once you execute main.py.

  • Tiny ImageNet, ImageNet, or a custom dataset:

    1. Download Tiny ImageNet or ImageNet, or prepare your own dataset.
    2. Arrange the dataset in the following folder structure (an optional sanity-check snippet follows the tree):
┌── docs
├── src
└── data
    └── ILSVRC2012 or TINY_ILSVRC2012 or CUSTOM
        ├── train
        │   ├── cls0
        │   │   ├── train0.png
        │   │   ├── train1.png
        │   │   └── ...
        │   ├── cls1
        │   └── ...
        └── valid
            ├── cls0
            │   ├── valid0.png
            │   ├── valid1.png
            │   └── ...
            ├── cls1
            └── ...

Examples and Results

The src/configs directory contains config files used in our experiments.

CIFAR10 (3x32x32)

To train and evaluate ECGAN-UC on CIFAR10:

python3 src/main.py -t -e -c src/configs/CIFAR10/ecgan_v2_none_0_0p01.json

Method       Reference   IS(⭡)    FID(⭣)   F_1/8(⭡)   F_8(⭡)   Cfg   Log   Weights
BigGAN-Mod   StudioGAN    9.746    8.034    0.995      0.994    -     -     -
ContraGAN    StudioGAN    9.729    8.065    0.993      0.992    -     -     -
Ours         -           10.078    7.936    0.990      0.988    Cfg   Log   Link

Tiny ImageNet (3x64x64)

To train and evaluate ECGAN-UC on Tiny ImageNet:

python3 src/main.py -t -e -c src/configs/TINY_ILSVRC2012/ecgan_v2_none_0_0p01.json --eval_type valid

Method       Reference   IS(⭡)    FID(⭣)   F_1/8(⭡)   F_8(⭡)   Cfg   Log   Weights
BigGAN-Mod   StudioGAN   11.998   31.92     0.956      0.879    -     -     -
ContraGAN    StudioGAN   13.494   27.027    0.975      0.902    -     -     -
Ours         -           18.445   18.319    0.977      0.973    Cfg   Log   Link

ImageNet (3x128x128)

To train and evaluate ECGAN-UCE on ImageNet (~12 days on 8 NVIDIA V100 GPUs):

python3 src/main.py -t -e -l -sync_bn -c src/configs/ILSVRC2012/imagenet_ecgan_v2_contra_1_0p05.json --eval_type valid

Method       Reference   IS(⭡)    FID(⭣)   F_1/8(⭡)   F_8(⭡)   Cfg   Log   Weights
BigGAN       StudioGAN   28.633   24.684    0.941      0.921    -     -     -
ContraGAN    StudioGAN   25.249   25.161    0.947      0.855    -     -     -
Ours         -           80.685    8.491    0.984      0.985    Cfg   Log   Link

Generated Images

Here are some selected images generated by ECGAN.

Owner

sianchen (Si-An Chen), Ph.D. student in Computer Science at National Taiwan University