Official code for "On the Frequency Bias of Generative Models", NeurIPS 2021

Overview

(Teaser figures: Frequency Bias of Generative Models. Panels: Generator Testbed, Discriminator Testbed)

This repository contains official code for the paper On the Frequency Bias of Generative Models.

You can find detailed usage instructions for analyzing standard GAN architectures and your own models below.

If you find our code or paper useful, please consider citing

@inproceedings{Schwarz2021NEURIPS,
  title = {On the Frequency Bias of Generative Models},
  author = {Schwarz, Katja and Liao, Yiyi and Geiger, Andreas},
  booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
  year = {2021}
}

Installation

Please note that this repo requires one GPU to run. First, make sure that you have all dependencies in place. The simplest way to do so is to use Anaconda.

You can create an Anaconda environment called fbias using

conda env create -f environment.yml
conda activate fbias

Generator Testbed

You can run a demo of our generator testbed via:

chmod +x ./scripts/demo_generator_testbed.sh
./scripts/demo_generator_testbed.sh

This will train the Generator of Progressive Growing GAN to regress a single image. The training progression of the image regression, the spectrum, and the spectrum error is summarized in output/generator_testbed/baboon64/pggan/eval.

In general, to analyze the spectral properties of a generator architecture you can train a model by running

python generator_testbed.py *EXPERIMENT_NAME* *PATH/TO/CONFIG*

This script should create a folder output/generator_testbed/*EXPERIMENT_NAME* where you can find the training progress. To evaluate the spectral properties of the trained model, run

python eval_generator.py *EXPERIMENT_NAME* --psnr --image-evolution --spectrum-evolution --spectrum-error-evolution

This will print the average PSNR of the regressed images and visualize image evolution, spectrum evolution, and spectrum error evolution in output/generator_testbed/*EXPERIMENT_NAME*/eval.
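
For example, to reproduce the demo manually, you could train and evaluate the Progressive Growing GAN generator on the demo image. The experiment name matches the demo output folder above, but the config path configs/generator_testbed/pggan.yaml below is an assumption and may differ in your checkout.

# train the generator on the demo image (config path assumed, check configs/generator_testbed/)
python generator_testbed.py baboon64/pggan configs/generator_testbed/pggan.yaml
# report PSNR and write the evolution visualizations to output/generator_testbed/baboon64/pggan/eval
python eval_generator.py baboon64/pggan --psnr --image-evolution --spectrum-evolution --spectrum-error-evolution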

Discriminator Testbed

You can run a demo of our discriminator testbed via:

chmod +x ./scripts/demo_discriminator_testbed.sh
./scripts/demo_discriminator_testbed.sh

This will train the Discriminator of Progressive Growing GAN to regress a single image. The training progression of the image regression, the spectrum, and the spectrum error is summarized in output/discriminator_testbed/baboon64/pggan/eval.

In general, to analyze the spectral properties of a discriminator architecture you can train a model by running

python discriminator_testbed.py *EXPERIMENT_NAME* *PATH/TO/CONFIG*

This script should create a folder output/discriminator_testbed/*EXPERIMENT_NAME* where you can find the training progress. To evaluate the spectral properties of the trained model, run

python eval_discriminator.py *EXPERIMENT_NAME* --psnr --image-evolution --spectrum-evolution --spectrum-error-evolution

This will print the average PSNR of the regressed images and visualize image evolution, spectrum evolution, and spectrum error evolution in output/discriminator_testbed/*EXPERIMENT_NAME*/eval.

Datasets

Toyset

You can generate a toy dataset whose spectra contain Gaussian peaks by running

cd data
python toyset.py 64 100
cd ..

This creates a folder data/toyset/ and generates 100 images of resolution 64x64 pixels.

CelebA-HQ

Download celebA_hq. Then, update data:root: *PATH/TO/CELEBA_HQ* in the config file.

Other datasets

The config setting data:root: *PATH/TO/DATA* needs to point to a folder with the training images. You can use any dataset which follows the folder structure

*PATH/TO/DATA*/xxx.png
*PATH/TO/DATA*/xxy.png
...

By default, the images are center-cropped and optionally resized to the resolution specified in the config file under data:resolution. Note that you can also use a subset of the images via data:subset.
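
For reference, the data section of a config file might look like the sketch below. The exact nesting and any additional keys depend on the config files shipped in configs/, so treat this only as an illustration of the three settings mentioned above.

data:
  root: data/toyset      # folder containing the training images
  resolution: 64         # center-crop and resize the images to this size
  subset: 100            # optionally restrict training to a subset of the images (value format assumed)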

Architectures

StyleGAN Support

In addition to Progressive Growing GAN, this repository supports analyzing the StyleGAN2 and StyleGAN3 architectures.

For this, you need to initialize the stylegan3 submodule by running

git pull --recurse-submodules
cd models/stylegan3/stylegan3
git submodule init
git submodule update
cd ../../../

Next, you need to install any additional requirements for this repo. You can do this by running

conda activate fbias
conda env update --file environment_sg3.yml --prune

You can now analyze the spectral properties of the StyleGAN architectures by running

# StyleGAN2
python generator_testbed.py baboon64/StyleGAN2 configs/generator_testbed/sg2.yaml
python discriminator_testbed.py baboon64/StyleGAN2 configs/discriminator_testbed/sg2.yaml
# StyleGAN3
python generator_testbed.py baboon64/StyleGAN3 configs/generator_testbed/sg3.yaml

Other architectures

To analyze any other network architecture, you can add the respective model file (or submodule) under models. You then need to write a wrapper class that integrates the architecture seamlessly into this code base; a minimal, hypothetical sketch follows the list below. Examples of wrapper classes are given in

  • models/stylegan2_generator.py for the Generator
  • models/stylegan2_discriminator.py for the Discriminator
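
As a rough illustration only, a generator wrapper might look like the following sketch. The actual interface expected by the testbed scripts is defined by the existing wrappers (e.g. models/stylegan2_generator.py), so the constructor arguments, the toy backbone, and the forward signature here are assumptions, not the repository's API.

import torch
import torch.nn as nn

class MyGenerator(nn.Module):
    """Hypothetical wrapper sketch; the real interface is defined by the
    existing wrappers in models/, e.g. models/stylegan2_generator.py."""

    def __init__(self, res=64, z_dim=256):
        super().__init__()
        self.z_dim = z_dim
        # Toy backbone: project the latent code to a 4x4 feature map and
        # upsample to the target resolution. Replace this with your architecture.
        layers = [nn.Linear(z_dim, 128 * 4 * 4), nn.Unflatten(1, (128, 4, 4))]
        size = 4
        while size < res:
            layers += [nn.Upsample(scale_factor=2),
                       nn.Conv2d(128, 128, 3, padding=1),
                       nn.LeakyReLU(0.2)]
            size *= 2
        layers += [nn.Conv2d(128, 3, 3, padding=1), nn.Tanh()]
        self.net = nn.Sequential(*layers)

    def forward(self, z):
        # Map a batch of latent codes (N, z_dim) to images (N, 3, res, res) in [-1, 1].
        return self.net(z)

# Quick shape check
if __name__ == '__main__':
    g = MyGenerator(res=64, z_dim=256)
    print(g(torch.randn(2, 256)).shape)  # torch.Size([2, 3, 64, 64])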

Further Information

This repository builds on Lars Mescheder's awesome framework for GAN training. Further, we utilize code from the StyleGAN3 repository and GenForce.
