Image inpainting using Gaussian Mixture Models

Overview

dmfa_inpainting

Source code for:

  • MisConv: Convolutional Neural Networks for Missing Data
  • Estimating Conditional Density of Missing Values Using Deep Gaussian Mixture Model

Requirements

Python 3.8 or higher is required. The models are implemented in PyTorch.

To install the requirements, running:

pip install -r requirements.txt

should suffice.
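
As a quick sanity check of the environment (a minimal sketch, not part of the repository), you can verify the Python version and that PyTorch imports correctly:

import sys

# The repository requires Python 3.8 or higher.
assert sys.version_info >= (3, 8), "Python 3.8 or higher is required"

import torch
print(torch.__version__)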

Running

To train the DMFA model, see the script:

python scripts/train_inpainter.py --h

To run classifier / WAE experiments, see the scripts:

python scripts/train_classifier_v2.py --h
python scripts/train_wae_v2.py --h

respectively.

Moreover, the scripts/ directory contains *.sh scripts that run the model trainings with the same parameters as used in the paper.

All experiments can be run on a single NVIDIA GPU.
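
To confirm that a GPU is visible to PyTorch before launching an experiment (a minimal sketch, not part of the repository):

import torch

# Check for a CUDA device; the experiments were run on a single NVIDIA GPU.
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
else:
    print("No CUDA device found")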

Inpainters used with classifiers and WAE

To run a classifier / WAE with DMFA, first train the DMFA model with the above script.

For some of the inpainters we compare our approach to, additional repositories must be cloned or installed:

DMFA Weights

We provide DMFA training results (JSON files, model weights, and training arguments) here.

We provide results for the following models, trained on complete and incomplete data:

  • MNIST - linear heads
  • SVHN - fully convolutional
  • CIFAR-10 - fully convolutional
  • CelebA - fully convolutional, trained on 64x64 images
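
As an illustration, a released checkpoint could be loaded with PyTorch roughly as follows (a minimal sketch; the file name dmfa_mnist.pt and the checkpoint layout are assumptions, not guaranteed by the release):

import torch

# Hypothetical file name; the actual names in the release may differ.
checkpoint = torch.load("dmfa_mnist.pt", map_location="cpu")

# Depending on how it was saved, this may be a raw state_dict or a dict
# wrapping one; inspect the keys before loading it into a model.
if isinstance(checkpoint, dict):
    print(list(checkpoint.keys())[:10])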

Notebooks

There are several Jupyter notebooks in the notebooks directory. They were used for initial experiments with the DMFA models, as well as for analyzing the results and calculating the metrics reported in the paper.

The notebooks are not guaranteed to run correctly due to subsequent code refactoring.

Citation

If you find our work useful, please consider citing us!

@misc{przewięźlikowski2021misconv,
      title={MisConv: Convolutional Neural Networks for Missing Data},
      author={Marcin Przewięźlikowski and Marek Śmieja and Łukasz Struski and Jacek Tabor},
      year={2021},
      eprint={2110.14010},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}

@article{Przewiezlikowski_2020,
      title={Estimating Conditional Density of Missing Values Using Deep Gaussian Mixture Model},
      ISBN={9783030638368},
      ISSN={1611-3349},
      url={http://dx.doi.org/10.1007/978-3-030-63836-8_19},
      DOI={10.1007/978-3-030-63836-8_19},
      journal={Lecture Notes in Computer Science},
      publisher={Springer International Publishing},
      author={Przewięźlikowski, Marcin and Śmieja, Marek and Struski, Łukasz},
      year={2020},
      pages={220--231}
}

Owner

Marcin Przewięźlikowski
https://mprzewie.github.io/