Overview

Sylvester normalizing flows for variational inference

PyTorch implementation of Sylvester normalizing flows, based on our paper:

Sylvester normalizing flows for variational inference (UAI 2018)
Rianne van den Berg*, Leonard Hasenclever*, Jakub Tomczak, Max Welling

*Equal contribution

Requirements

The latest release of the code is compatible with:

  • pytorch 1.0.0

  • python 3.7

Thanks to Martin Engelcke for adapting the code to provide this compatibility.

Version v0.3.0_2.7 is compatible with:

  • pytorch 0.3.0. WARNING: more recent versions of PyTorch have different default flags for the binary cross-entropy loss module nn.BCELoss(); you have to adapt the appropriate flags if you want to port this code to a later version.

  • python 2.7

Data

The experiments can be run on the following datasets:

  • static MNIST: the dataset is in the data folder;
  • OMNIGLOT: the dataset can be downloaded from link;
  • Caltech 101 Silhouettes: the dataset can be downloaded from link;
  • Frey Faces: the dataset can be downloaded from link.

Usage

Below are example commands for running experiments on static MNIST with the different types of Sylvester normalizing flows, each using 4 flows:

Orthogonal Sylvester flows
This example uses a bottleneck of size 8 (Q has 8 columns containing orthonormal vectors).

python main_experiment.py -d mnist -nf 4 --flow orthogonal --num_ortho_vecs 8 
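
The columns of Q must stay orthonormal throughout training. As a minimal sketch, one common scheme is an iterative orthogonalization of the form q <- q(I + 0.5(I - q^T q)); the step count and rescaling below are illustrative, not necessarily this repo's exact procedure:

import torch

def orthogonalize(q, steps=30):
    # Iteratively pull the columns of q (z_size x num_ortho_vecs) towards
    # orthonormality; rescaling first keeps the iteration convergent.
    eye = torch.eye(q.shape[-1])
    q = q / q.norm()
    for _ in range(steps):
        q = q @ (eye + 0.5 * (eye - q.transpose(-2, -1) @ q))
    return q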

Householder Sylvester flows
This example uses 8 Householder reflections per orthogonal matrix Q.

python main_experiment.py -d mnist -nf 4 --flow householder --num_householder 8
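
A product of Householder reflections is always orthogonal, which is how this variant parameterizes Q. A minimal sketch (names and shapes are illustrative):

import torch

def householder_q(vs):
    # vs: (num_householder, z_size) unnormalized reflection vectors.
    # Each reflection I - 2 v v^T / ||v||^2 is orthogonal, and so is
    # their product.
    z_size = vs.shape[-1]
    q = torch.eye(z_size)
    for v in vs:
        v = v / v.norm()
        q = q @ (torch.eye(z_size) - 2.0 * v.unsqueeze(1) * v.unsqueeze(0))
    return q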

Triangular Sylvester flows

python main_experiment.py -d mnist -nf 4 --flow triangular 
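
All three variants share the transformation z' = z + QR h(R~ Q^T z + b); by Sylvester's determinant identity det(I + AB) = det(I + BA), the log-det-Jacobian reduces to a product over the triangular diagonals. A minimal sketch, assuming h = tanh (names are illustrative):

import torch

def sylvester_log_det(r_diag, r_tilde_diag, pre_h):
    # log|det J| = sum_i log|1 + diag(R)_i diag(R~)_i h'(pre_h)_i|,
    # where pre_h = R~ Q^T z + b and h'(x) = 1 - tanh(x)^2.
    h_prime = 1.0 - torch.tanh(pre_h) ** 2
    return torch.log(torch.abs(1.0 + r_diag * r_tilde_diag * h_prime) + 1e-8).sum(dim=-1)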

To run an experiment with other types of normalizing flows or just with a factorized Gaussian posterior, see below.


Factorized Gaussian posterior

python main_experiment.py -d mnist --flow no_flow
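
With no flow, the approximate posterior is a factorized Gaussian sampled via the reparameterization trick; a minimal sketch:

import torch

def reparameterize(mu, logvar):
    # z = mu + sigma * eps with eps ~ N(0, I): sampling stays
    # differentiable with respect to mu and logvar.
    std = torch.exp(0.5 * logvar)
    return mu + std * torch.randn_like(mu)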

Planar flows

python main_experiment.py -d mnist -nf 4 --flow planar

Inverse Autoregressive flows
This example uses MADEs with 320 hidden units.

python main_experiment.py -d mnist -nf 4 --flow iaf --made_h_size 320
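
A minimal sketch of a single gated IAF step in the style of Kingma et al. (2016); made here is a hypothetical autoregressive network returning a shift m and gate logits s for each dimension:

import torch

def iaf_step(z, made):
    # m and s are autoregressive in z: output i depends only on z[:, :i],
    # so the Jacobian of the update is triangular.
    m, s = made(z)
    gate = torch.sigmoid(s)
    z_new = gate * z + (1.0 - gate) * m
    # Triangular Jacobian: the log-determinant is the sum of log gates.
    log_det = torch.log(gate + 1e-8).sum(dim=-1)
    return z_new, log_det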

More information about additional argument options can be found by running ```python main_experiment.py -h```.

Cite

Please cite our paper if you use this code in your own work:

@inproceedings{vdberg2018sylvester,
  title={Sylvester normalizing flows for variational inference},
  author={van den Berg, Rianne and Hasenclever, Leonard and Tomczak, Jakub and Welling, Max},
  booktitle={Proceedings of the Conference on Uncertainty in Artificial Intelligence (UAI)},
  year={2018}
}
Comments
  • about log_p_zk

    Hi Rianne, this is great code, and I have a small question about log p(z_k): we hope that p(z_k) in a VAE can be a distribution whose form is not fixed, but the calculation of log p(z_k) in line 81 of loss.py seems to imply that p(z_k) is a standard Gaussian. Is there a mistake in my understanding?
    Thank you for this code

    opened by Archer666 10
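
    For reference, a minimal sketch of the standard-Gaussian log-density that such a log p(z_k) term computes (not necessarily the repo's exact code):

    import math
    import torch

    def log_standard_normal(z):
        # log N(z; 0, I), summed over the latent dimensions.
        return (-0.5 * (math.log(2.0 * math.pi) + z.pow(2))).sum(dim=-1)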
  • loss = bce + beta * kl

    Hello Rianne: thanks very much. I am a bit confused by line 44 in loss.py: loss = bce + beta * kl. Based on equation 3 in Tomczak's paper (Improving Variational Auto-Encoder Using Householder Flows), shouldn't it be "loss = bce - beta * kl"? Also, why use -ELBO instead of ELBO when reporting your metrics? Thanks

    opened by tumis1946 4
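
    For reference, a minimal sketch of the sign convention under discussion: minimizing bce + beta * kl is minimizing the negative (annealed) ELBO, which is also why metrics are reported as -ELBO:

    import torch.nn.functional as F

    def neg_elbo(x_recon, x, kl, beta=1.0):
        # Reconstruction term (negative log-likelihood for binary pixels)
        # plus the beta-weighted KL term; minimizing this maximizes the ELBO.
        bce = F.binary_cross_entropy(x_recon, x, reduction='sum')
        return bce + beta * kl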
  • PyTorch_v1 and Python3 compatibility

    Hi Rianne,

    This PR contains a 'minimal' set of changes to run the code with the latest PyTorch versions and Python 3 ( #1 #2 )

    It is 'minimal' in the sense that I only made changes that affect functionality. There are additional cosmetic changes that could be made; e.g. Variable(), the volatile flag, and F.sigmoid() have been deprecated but they should not affect functionality.

    I tested the changes with PyTorch 1.0.0 and Python 3.7 on MNIST and Freyfaces, giving me similar results for the baseline VAE without any flows.

    I am not sure if more rigorous tests should be done, and whether you want to merge this into master or keep a separate branch.

    Best, Martin

    opened by martinengelcke 1
  • PR for PyTorch 1.+ and Python 3 support

    Hi Rianne,

    Thank you for this really nice code release :)

    I cloned the repo and made some changes so that it runs with PyTorch 1.+ and Python 3. Also solved the issue mentioned in #1 . I tested the changes on MNIST (binary input) and Freyfaces (multinomial input), giving similar results to the original code.

    If you are interested in reviewing and potentially adding this to the repo, I would be happy to clean things up and make a PR.

    Best, Martin

    opened by martinengelcke 1
  • RuntimeError in default main experiment

    Hi Rianne,

    I'm trying to run the default experiment on CPU with a small latent space dimension (z=5):

    python main_experiment.py -d mnist --flow no_flow -nc --z_size 5

    Which unfortunately gives the following error:

    Traceback (most recent call last):
      File "main_experiment.py", line 278, in <module>
        run(args, kwargs)
      File "main_experiment.py", line 189, in run
        tr_loss = train(epoch, train_loader, model, optimizer, args)
      File ".../sylvester-flows/optimization/training.py", line 39, in train
        loss.backward()
      File "//anaconda/envs/dl/lib/python3.6/site-packages/torch/tensor.py", line 102, in backward
        torch.autograd.backward(self, gradient, retain_graph, create_graph)
      File "//anaconda/envs/dl/lib/python3.6/site-packages/torch/autograd/__init__.py", line 90, in backward
        allow_unreachable=True)  # allow_unreachable flag
    RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation
    

    I am using PyTorch version 1.0.0 and did not modify the code.

    opened by trdavidson 1
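
    For errors like this, PyTorch's autograd anomaly detection can localize the offending in-place operation; a minimal sketch (the loss computation is a placeholder):

    import torch

    with torch.autograd.detect_anomaly():
        loss = compute_loss()  # placeholder for the forward pass + loss
        loss.backward()        # the error now names the failing operation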
  • How to sample from latent distribution

    Hello,

    I was wondering how I can generate samples using the decoder network after training. In a VAE, I would just sample from the prior distribution z~N(0,1) and generate a data point using the decoder. In TriangularSylvesterVAE, however, I also have to provide hyperparameters lambda(x) that depend on the input. How can I sample from my latent distribution and generate samples from it?

    I am new to normalizing flows in general and would appreciate any help.

    opened by crlz182 2
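
    For reference: the flow only transforms the approximate posterior q(z|x), while the prior over z_k stays a standard normal, so generation needs no lambda(x). A minimal sketch (model and decode are assumptions: a trained VAE and its decoder method):

    import torch

    z_size = 64                  # latent dimensionality used during training
    z = torch.randn(16, z_size)  # z_k ~ N(0, I); no flow or encoder needed
    x_mean = model.decode(z)     # decode to pixel means, then sample/threshold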
Releases(v1.0.0_3.7)
  • v1.0.0_3.7(Jul 5, 2019)

    Sylvester Normalizing Flow repository compatible with PyTorch 1.0.0 and Python 3.7. Thanks to martinengelcke for taking care of this compatibility.

  • v0.3.0_2.7(Jul 5, 2019)

Owner
Rianne van den Berg
Senior researcher @Microsoft Research Amsterdam; formerly at Google Brain and the University of Amsterdam.