PyTorch implementation of normalizing flow models

Overview

Normalizing Flows

This is a PyTorch implementation of several normalizing flows, including a normalizing-flow-based variational autoencoder. It is used in the articles A Gradient Based Strategy for Hamiltonian Monte Carlo Hyperparameter Optimization and Resampling Base Distributions of Normalizing Flows.

Implemented Flows

Among others, the package implements Real NVP, Glow, Masked Autoregressive Flows, Neural Spline Flows (including circular variants), and Residual Flows; see the release notes below for details.

Installation

The latest version of the package can be installed via pip:

pip install --upgrade git+https://github.com/VincentStimper/normalizing-flows.git

If you want to use a GPU, make sure that PyTorch is set up correctly by following the instructions at the PyTorch website.

To run the example notebooks, clone the repository first:

git clone https://github.com/VincentStimper/normalizing-flows.git

and then install the dependencies:

pip install -r requirements_examples.txt
Comments
  • Replicating Glow results comparable to the papers

    Hi, thanks for developing this package. I find it very neat and flexible and would like to use it for my research. I noticed that in the paper "Resampling Base Distributions of Normalizing Flows", the bits per dimension (bpd) of your Glow can reach 3.2–3.3, which is comparable to the original paper.

    I was wondering if it is possible to share your training scripts and the details used to train Glow on CIFAR-10 to achieve the above bpd. The current example notebook is rather sketchy and only reaches a bpd of 3.8. Thanks very much!
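
    For reference, bits per dimension (bpd) is the negative log-likelihood converted to base 2 and averaged over the data dimensions; a minimal sketch of the conversion (the numbers below are illustrative only):

    import math
    
    def bits_per_dim(nll_nats, num_dims):
        # Convert a negative log-likelihood in nats to bits per dimension
        return nll_nats / (num_dims * math.log(2))
    
    # CIFAR-10 images have 3 * 32 * 32 = 3072 dimensions
    print(bits_per_dim(nll_nats=7000.0, num_dims=3 * 32 * 32))  # ~3.29 bpd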

    opened by prclibo 1
  • Changed ActNorm flag to buffer to allow saving

    Without it being a buffer, when you load a model and sample from it, the flow thinks it is the first pass through the data and overwrites all the trained ActNorm parameters.
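
    For illustration, a minimal ActNorm-style sketch (not the library's exact implementation) of why the flag needs to be a buffer:

    import torch
    from torch import nn
    
    class ActNormSketch(nn.Module):
        def __init__(self, num_channels):
            super().__init__()
            self.loc = nn.Parameter(torch.zeros(1, num_channels))
            self.log_scale = nn.Parameter(torch.zeros(1, num_channels))
            # As a buffer, the flag is saved in state_dict, so a reloaded
            # model knows the data-dependent initialization already happened
            self.register_buffer("init_done", torch.tensor(False))
    
        def forward(self, z):
            if not self.init_done:
                # Data-dependent initialization on the first batch only
                with torch.no_grad():
                    self.loc.data = -z.mean(dim=0, keepdim=True)
                    self.log_scale.data = -torch.log(z.std(dim=0, keepdim=True) + 1e-6)
                self.init_done.fill_(True)
            z = (z + self.loc) * torch.exp(self.log_scale)
            log_det = self.log_scale.sum().expand(z.shape[0])
            return z, log_det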

    opened by arc82 1
  • Fix deprecation warning

    Fixes the deprecation warning documented in https://github.com/VincentStimper/normalizing-flows/issues/12.


    Sanity check: Running this before the change:

    import normflows as nf
    import torch
    
    torch.manual_seed(42)
    
    flow = nf.NormalizingFlow(
        nf.distributions.DiagGaussian(1, trainable=False),
        [
            nf.flows.AutoregressiveRationalQuadraticSpline(1, 1, 1),
            nf.flows.LULinearPermute(1)
        ]
    )
    
    with torch.no_grad():
        samples_flow, _ = flow.sample(4)
    
    print(samples_flow)
    

    gives:

    tensor([[0.4528],
            [0.6410],
            [0.5200],
            [0.5567]])
    

    After the change, the output stays the same.

    opened by timothygebhard 0
  • Sampling from flow raises deprecation warning

    Running the following minimal example:

    import normflows as nf
    import torch
    
    torch.manual_seed(42)
    
    flow = nf.NormalizingFlow(
        nf.distributions.DiagGaussian(1, trainable=False),
        [
            nf.flows.AutoregressiveRationalQuadraticSpline(1, 1, 1),
            nf.flows.LULinearPermute(1)
        ]
    )
    
    with torch.no_grad():
        samples_flow, _ = flow.sample(4)
    
    print(samples_flow)
    

    raises a UserWarning about an upcoming deprecation:

    /Users/timothy/Desktop/normalizing-flows/normflows/flows/mixing.py:437: UserWarning: torch.triangular_solve is deprecated in favor of torch.linalg.solve_triangular and will be removed in a future PyTorch release.
    torch.linalg.solve_triangular has its arguments reversed and does not return a copy of one of the inputs.
    X = torch.triangular_solve(B, A).solution
    should be replaced with
    X = torch.linalg.solve_triangular(A, B). (Triggered internally at  /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/native/BatchLinearAlgebra.cpp:2189.)
      outputs, _ = torch.triangular_solve(
    

    I will submit a PR shortly that fixes the issue 🙂
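
    For reference, the replacement named in the warning looks like this in isolation (a generic sketch, not the exact call site in mixing.py):

    import torch
    
    A = torch.triu(torch.rand(3, 3)) + torch.eye(3)  # upper-triangular system
    B = torch.rand(3, 2)
    
    # Deprecated: X = torch.triangular_solve(B, A).solution
    # Replacement; note the reversed argument order:
    X = torch.linalg.solve_triangular(A, B, upper=True)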

    opened by timothygebhard 0
  • Vberenz/mkdocs

    Added the mkdocs structure and refactored the docstrings (and applied black).

    To install the dependencies for documentation building:

    pip install -e ".[docs]"
    

    To view the docs:

    mkdocs serve
    

    This starts a live server. Modifications to the documentation are rendered live (excluding modifications to docstrings).

    To build the docs:

    mkdocs build
    

    This will create the site folder (including index.html).

    To extend the docs:

    Markdown files can be added to the docs folder; the "nav" section of the mkdocs.yml file then has to be updated, e.g.:

    nav:
      - about: index.md
      - API: references.md
      - my other page: mymarkdown.md
    
    • Good to know: markdown can be used in the docstrings.
    • Apparently, deploying the documentation online to GitHub after building is as simple as calling mkdocs gh-deploy (I have not tried it yet).

    I still need to do:

    • continuous build on GitHub (documentation is rebuilt and deployed at each merge into master)
    • a correction pass on the docstrings (I updated them, but did not check them one by one yet)
    • improving the layout, which is not so nice yet (especially for the API)
    • apparently mkdocs can display Jupyter notebooks; I need to dig into that
    opened by vincentberenz 0
  • feat: add optional gradient clipping to HMC flow

    Adds the option to clip the gradient of the target log prob within HMC. For some target distributions, the log prob may have very large gradients, which can cause numerical instability; gradient clipping can help with this.
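
    A minimal sketch of the idea (the helper below is hypothetical, not the PR's actual code):

    import torch
    
    def clipped_grad_log_prob(target_log_prob, z, max_norm=1e3):
        # Gradient of the target log prob w.r.t. z, clipped per sample to
        # a maximum norm for numerical stability in HMC leapfrog steps
        z = z.detach().requires_grad_(True)
        log_p = target_log_prob(z).sum()
        (grad,) = torch.autograd.grad(log_p, z)
        norm = grad.norm(dim=-1, keepdim=True)
        scale = torch.clamp(max_norm / (norm + 1e-12), max=1.0)
        return grad * scale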

    opened by lollcat 0
  • Added minor fixes for bugs and warnings

    This commit makes three changes to the original repo:

    1. Fixes the warning regarding the 'is' keyword (see the sketch after this list):
    /home/donglin/Github/normalizing-flows/normflow/nets.py:45: SyntaxWarning: "is" with a literal. Did you mean "=="?
      if output_fn is "sigmoid":
    /home/donglin/Github/normalizing-flows/normflow/nets.py:47: SyntaxWarning: "is" with a literal. Did you mean "=="?
      elif output_fn is "relu":
    /home/donglin/Github/normalizing-flows/normflow/nets.py:49: SyntaxWarning: "is" with a literal. Did you mean "=="?
      elif output_fn is "tanh":
    /home/donglin/Github/normalizing-flows/normflow/nets.py:51: SyntaxWarning: "is" with a literal. Did you mean "=="?
      elif output_fn is "clampexp":
    
    2. Fixes the warning regarding 'torch.qr':
    /home/donglin/Github/normalizing-flows/normflow/flows.py:616: UserWarning: torch.qr is deprecated in favor of torch.linalg.qr and will be removed in a future PyTorch release.
    The boolean parameter 'some' has been replaced with a string parameter 'mode'.
    Q, R = torch.qr(A, some)
    should be replaced with
    Q, R = torch.linalg.qr(A, 'reduced' if some else 'complete') (Triggered internally at  /opt/conda/conda-bld/pytorch_1623448278899/work/aten/src/ATen/native/BatchLinearAlgebra.cpp:1940.)
      Q = torch.qr(torch.randn(self.num_channels, self.num_channels))[0]
    
    3. Eliminates the "nf.util.ToDevice" call during the data pre-processing in glow.ipynb.
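
    Both warning fixes amount to one-line changes; a sketch of each (illustrative, mirroring the replacements named in the warnings):

    import torch
    
    # Fix 1: compare strings by value with "==", not identity with "is"
    output_fn = "sigmoid"
    if output_fn == "sigmoid":  # previously: if output_fn is "sigmoid"
        pass
    
    # Fix 2: torch.qr is deprecated; use the replacement named in the warning
    A = torch.randn(4, 4)
    Q, R = torch.linalg.qr(A, mode="reduced")  # previously: torch.qr(A, some=True)
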
    opened by Donglin-Wang2 0
  • RuntimeError: output with shape [32] doesn't match the broadcast shape [1, 32]

    Hi, when doing experiments, I'd suggest adding some more tutorials, for example for ClassCondFlow, as while trying to build one on my own I keep encountering this error.

    opened by maulberto3 0
  • Normalizing Flow vs Normalizing Flow VAE behavior

    I can't help but wonder why the NormalizingFlow class uses the flows' inverse method when computing forward_kld while, on the contrary, NormalizingFlowVAE uses the flows' forward method.

    This way, when trying to fit MNIST with NormalizingFlow and passing a batch of, say, (64, 784) images during training, I get the following error:

         34 for i in range(len(self.flows) - 1, -1, -1):
         35     z, log_det = self.flows[i].inverse(z)
    ---> 36     log_q += log_det
         37 log_q += self.q0.log_prob(z)
         38 return -torch.mean(log_q)
    
    RuntimeError: output with shape [64] doesn't match the broadcast shape [1, 64]
    

    Any help/suggestion?

    opened by maulberto3 0
  • Inconsistency between log_q and log_p in Encoder and NormalizingFlowVAE

    In the NormalizingFlowVAE class in core.py, this line says that the encoder outputs log_q:

    z, log_q = self.q0(x, num_samples=num_samples)

    Suppose that, as in this example, the encoder Gaussian (q0) is parameterized by an MLP. Looking at the distributions.encoder.py source code, the forward method of the NNDiagGaussian class says that it outputs log_p:

    return z, log_p

    Inconsistency or not?

    opened by maulberto3 0
  • NormalizingFlow class in core.py does not provide context in forward_kld

    Thank you for a repo that's easy to handle with a normalizing flow of one's choice!

    I would like to implement a normalizing flow that optimizes multiple target distributions at once, depending on the context I provide to it. Yet, currently, as far as I can tell, no context can be provided to the .forward_kld method of the NormalizingFlow class.
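
    A hypothetical sketch of what a context-aware objective could look like; the context argument and the context-aware inverse calls below are the requested feature, not the library's current API:

    import torch
    
    def forward_kld_with_context(flows, q0, x, context=None):
        # Hypothetical: thread a context tensor through each layer's inverse
        log_q = torch.zeros(x.shape[0])
        z = x
        for flow in reversed(flows):
            z, log_det = flow.inverse(z, context)  # assumes context-aware layers
            log_q += log_det
        log_q += q0.log_prob(z)
        return -torch.mean(log_q)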

    Would be great if that's added!

    Cheers,

    Yves

    opened by ybernaerts 3
  • Decoupled sampling and generation interfaces

    Hi, this PR adds some new interfaces to allow better control during sampling, e.g. to repeatedly generate from the same latent code when training the model. Please check if it is useful as a merge :)
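
    For illustration, the kind of usage this enables might look like the following sketch (assuming the base distribution q0 and the flow's forward pass behave as in the examples above):

    import normflows as nf
    import torch
    
    flow = nf.NormalizingFlow(
        nf.distributions.DiagGaussian(1, trainable=False),
        [
            nf.flows.AutoregressiveRationalQuadraticSpline(1, 1, 1),
            nf.flows.LULinearPermute(1),
        ],
    )
    
    with torch.no_grad():
        # Draw latent codes from the base distribution once ...
        z, _ = flow.q0(4)
        # ... then push the same codes through the flow whenever needed,
        # e.g. to compare generations from a fixed latent during training
        x = flow.forward(z)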

    opened by prclibo 0
Releases(v1.5)
  • v1.5(Dec 21, 2022)

    Rendered documentation was added to the repository; it is available at https://vincentstimper.github.io/normalizing-flows/.

    Tests were added for several flow modules; they can be run via pytest. With these new tests, several bugs were detected and fixed. The current coverage is about 61%. More tests will be added in the future, as well as automated testing and coverage analysis on GitHub.

    Moreover, the code was adapted to the syntax of newer PyTorch versions.

    Source code(tar.gz)
    Source code(zip)
  • v1.4(Jul 26, 2022)

    The package is now available on PyPI, which means that from now on it can simply be installed with

    pip install normflows
    

    The code was reformatted to conform to the black coding style.

    Moreover, the following fixes and additions are included:

    • The computation of the alpha-divergence objective was corrected.
    • A bug regarding sampling from the mixture of Gaussian base distribution was fixed.
    • A flow layer to warp periodic variables was added.
    • The dependency on the Residual Flow repository was removed.
    Source code(tar.gz)
    Source code(zip)
  • v1.2(Apr 5, 2022)

    The code was reorganized to be more hierarchical and readable. Also, all functionality required for Neural Spline Flows was added to the repository to remove the dependency on the original Neural Spline Flow repository.

    Furthermore, the following features were introduced:

    • Class to reverse a flow layer (see the sketch after this list)
    • Class to build a chain of flow layers
    • Affine Masked Autoregressive Flows (MAF)
    • Circular Neural Spline Flows
    • Neural Spline Flows with circular and non-circular coordinates
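
    For illustration, a layer-reversing wrapper can be as simple as the following sketch (not the package's exact class):

    from torch import nn
    
    class Reverse(nn.Module):
        # Swap a flow layer's forward and inverse passes
        def __init__(self, flow):
            super().__init__()
            self.flow = flow
    
        def forward(self, z):
            return self.flow.inverse(z)
    
        def inverse(self, z):
            return self.flow.forward(z)
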
    Source code(tar.gz)
    Source code(zip)
  • v1.1(Feb 6, 2022)

  • v1.0(Nov 25, 2021)

    Normalizing flow library comprising the most popular flow architectures, among them Real NVP, Glow, Neural Spline Flow, and Residual Flow.

    Source code(tar.gz)
    Source code(zip)
Owner
Vincent Stimper
PhD student in Machine Learning at the University of Cambridge and the Max Planck Institute for Intelligent Systems