OptNet: Differentiable Optimization as a Layer in Neural Networks

Overview

This repository is by Brandon Amos and J. Zico Kolter and contains the PyTorch source code to reproduce the experiments in our ICML 2017 paper OptNet: Differentiable Optimization as a Layer in Neural Networks.

If you find this repository helpful in your publications, please consider citing our paper.

@InProceedings{amos2017optnet,
  title = {{O}pt{N}et: Differentiable Optimization as a Layer in Neural Networks},
  author = {Brandon Amos and J. Zico Kolter},
  booktitle = {Proceedings of the 34th International Conference on Machine Learning},
  pages = {136--145},
  year = {2017},
  volume = {70},
  series = {Proceedings of Machine Learning Research},
  publisher = {PMLR},
}

Informal Introduction

Mathematical optimization is a well-studied language for expressing solutions to many real-life problems that come up in machine learning and in many other fields such as mechanics, economics, electrical engineering, operations research, control engineering, geophysics, and molecular modeling. As we build machine learning systems that interact with real data from these fields, we often cannot (though sometimes can) simply "learn away" the optimization sub-problems by adding more layers to our network. Well-defined optimization problems can be added by hand if you have a thorough understanding of your feature space, but oftentimes we don't have this understanding and instead resort to automatic feature learning for our tasks.

Until this repository, no modern deep learning library provided a way to add a learnable optimization layer to a model formulation (other than simply unrolling an optimization procedure, which is inefficient and inexact) so that we can quickly try it and see whether it is a good way of expressing our data.

See our paper OptNet: Differentiable Optimization as a Layer in Neural Networks and the code at locuslab/optnet if you are interested in learning more about our initial exploration of automatically learning quadratic program layers for signal denoising and Sudoku.
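
Concretely, in the notation of the paper, an OptNet layer takes the previous layer's output z_i and produces the next layer's activations by solving a small quadratic program:

    z_{i+1} = argmin_z  0.5 z' Q(z_i) z + q(z_i)' z
              subject to  A(z_i) z = b(z_i),   G(z_i) z <= h(z_i)

where Q, q, A, b, G, and h are learnable parameters (or differentiable functions of the previous layer), and gradients flow through the argmin by implicitly differentiating the QP's KKT conditions.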

Setup and Dependencies

  • Python/numpy/PyTorch
  • qpth: Our fast QP solver for PyTorch, released in conjunction with this paper. (A minimal usage sketch follows this list.)
  • bamos/block: Our intelligent block matrix library for numpy, PyTorch, and beyond.
  • Optional: bamos/setGPU: A small library to set CUDA_VISIBLE_DEVICES on multi-GPU systems.
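
For reference, here is a minimal sketch of an OptNet-style QP layer written with qpth. It is adapted from the qpth documentation rather than taken from this repository, and the layer sizes and parameterization are only illustrative.

import torch
from torch import nn
from qpth.qp import QPFunction

class OptNetLayer(nn.Module):
    # Sketch of an OptNet-style layer: the input is used as the linear term
    # of a small QP and the layer outputs the QP's solution.
    def __init__(self, nz, nineq=50, eps=1e-4):
        super().__init__()
        self.eps = eps
        # Parameterize Q = L L^T + eps*I so it stays positive definite.
        self.L = nn.Parameter(torch.tril(torch.rand(nz, nz)))
        self.G = nn.Parameter(torch.empty(nineq, nz).uniform_(-1, 1))
        self.z0 = nn.Parameter(torch.zeros(nz))
        self.s0 = nn.Parameter(torch.ones(nineq))

    def forward(self, p):
        # p: (nBatch, nz) linear QP term coming from the previous layer.
        nz = p.size(1)
        L = torch.tril(self.L)
        Q = L @ L.t() + self.eps * torch.eye(nz, device=p.device)
        h = self.G @ self.z0 + self.s0  # h = G z0 + s0, so z0 is feasible while s0 >= 0
        e = torch.Tensor()              # no equality constraints
        # Batched, differentiable solve of argmin_z 0.5 z'Qz + p'z s.t. Gz <= h.
        return QPFunction(verbose=False)(Q, p, self.G, h, e, e)

Parameterizing h as G z0 + s0 (with s0 nonnegative) keeps z0 feasible for the learned constraints throughout training, which is the trick the paper uses to keep the learned problem well posed.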

Denoising Experiments

denoising
├── create.py - Script to create the denoising dataset.
├── plot.py - Plot the results from any experiment.
├── main.py - Run the FC baseline and OptNet denoising experiments. (See arguments.)
├── main.tv.py - Run the TV baseline denoising experiment.
└── run-exps.sh - Run all experiments. (May need to uncomment some lines.)

Sudoku Experiments

  • The dataset we used in our experiments is available in sudoku/data.

sudoku
├── create.py - Script to create the dataset.
├── plot.py - Plot the results from any experiment.
├── main.py - Run the FC baseline and OptNet Sudoku experiments. (See arguments.)
└── models.py - Models used for Sudoku.

Classification Experiments

cls
├── train.py - Run the FC baseline and OptNet classification experiments. (See arguments.)
├── plot.py - Plot the results from any experiment.
└── models.py - Models used for classification.

Acknowledgments

The rapid development of this work would not have been possible without the immense amount of help from the PyTorch team, particularly Soumith Chintala and Adam Paszke.

Licensing

Unless otherwise stated, the source code is copyright Carnegie Mellon University and licensed under the Apache 2.0 License.
