OptiCL

An end-to-end framework for mixed-integer optimization with data-driven learned constraints.

Overview

OptiCL is an end-to-end framework for mixed-integer optimization (MIO) with data-driven learned constraints. We address a problem setting in which a practitioner wishes to optimize decisions according to some objective and constraints, but has no known functions relating the decisions to the outcomes of interest. We propose to learn predictive models for these outcomes using machine learning, and to subsequently optimize decisions by embedding the learned models in a larger MIO formulation.

The framework and full methodology are detailed in our manuscript, Mixed-Integer Optimization with Constraint Learning.

How to use OptiCL

You can install the OptiCL package locally by cloning the repository and running pip install . in the root directory of the repository. This will allow you to load opticl in Python; see the example notebooks for specific usage of the functions.
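
A minimal sanity check, assuming the clone and install steps above completed without errors:

  # Sketch only: verifies that the locally installed package imports.
  import opticl

  print(opticl.__name__)  # expected output: "opticl"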

The OptiCL pipeline

Our pipeline requires two inputs from a user:

  • Training data, with features classified as contextual variables, decisions, and outcomes (an illustrative snippet follows this list).
  • An initial conceptual model, which is defined by specifying the decision variables and any domain-driven fixed constraints or deterministic objective terms.
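
To make the first input concrete, the snippet below shows a purely hypothetical training set; the column names and their roles (loosely inspired by the food basket case study) are assumptions for illustration, not a schema required by the package.

  # Purely illustrative training data: contextual variables, decisions, and an
  # outcome to be learned. Column names are assumptions made for this sketch.
  import pandas as pd

  data = pd.DataFrame({
      "region":       ["A", "B", "A"],        # contextual variable
      "budget":       [100.0, 80.0, 120.0],   # contextual variable
      "qty_item_1":   [10.0, 5.0, 12.0],      # decision
      "qty_item_2":   [3.0, 7.0, 4.0],        # decision
      "palatability": [0.62, 0.48, 0.71],     # outcome to be learned
  })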

Given these inputs, we implement a pipeline that:

  1. Learns predictive models for the outcomes of interest using a model training and selection pipeline with cross-validation.
  2. Efficiently characterizes the feasible decision space, or "trust region," using the convex hull of the observed data.
  3. Embeds the learned models and trust region into an MIO formulation, which can then be solved using a Pyomo-supported MIO solver (e.g., Gurobi).

OptiCL requires no manual specification of a trained ML model, although the end-user can optionally restrict the selection pipeline to a subset of model types. Furthermore, we expose the underlying trained models within the pipeline, providing transparency and allowing the predictive models to be evaluated externally.
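
For intuition, the sketch below illustrates steps 1 and 3 using scikit-learn and Pyomo directly rather than the OptiCL interface; the data, variable bounds, and outcome threshold are illustrative assumptions, and the trust-region constraints of step 2 are omitted. See the example notebooks for the package's actual usage.

  # Conceptual sketch of steps 1 and 3, not the OptiCL API.
  import numpy as np
  import pyomo.environ as pyo
  from sklearn.linear_model import LinearRegression

  # Step 1: learn a predictive model for an outcome y as a function of two decisions.
  rng = np.random.default_rng(0)
  X = rng.uniform(0, 1, size=(100, 2))
  y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.05, size=100)
  reg = LinearRegression().fit(X, y)

  # Step 3: embed the fitted model in an MIO and constrain the predicted outcome.
  b0 = float(reg.intercept_)
  w = [float(c) for c in reg.coef_]

  m = pyo.ConcreteModel()
  m.x = pyo.Var(range(2), bounds=(0, 1))
  m.learned_outcome = pyo.Constraint(
      expr=b0 + sum(w[j] * m.x[j] for j in range(2)) <= 0.5
  )
  m.obj = pyo.Objective(expr=m.x[0] + m.x[1], sense=pyo.maximize)
  # pyo.SolverFactory("gurobi").solve(m)  # any Pyomo-supported MIO solver works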

Examples

We illustrate the full OptiCL pipeline in three notebooks:

  • A case study on food basket optimization for the World Food Programme (notebooks/WFP/The Palatable Diet Problem.ipynb): This notebook presents a simplified version of the case study in the manuscript. It shows how to train and select models for a single learned outcome, define a conceptual model with a known objective and constraints, and solve the MIO with an additional learned constraint.
  • A general pipeline overview (notebooks/Pipeline/Model_embedding.ipynb): This notebook demonstrates the general features of the pipeline, including the procedure for training and embedding models for multiple outcomes, the specification of each outcome as either a constraint or objective term, and the incorporation of contextual features and domain-driven constraints.
  • Model verification (notebooks/Pipeline/Model_Verification_Regression.ipynb, notebooks/Pipeline/Model_Verification_Classification.ipynb): These notebooks show the training and embedding of a single model and compare the sklearn predictions to the MIO predictions to verify the MIO embeddings (the core check is sketched after this list). The classification notebook also provides details on how we linearize constraints for the binary classification setting.
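
The core regression check can be summarized by the following hedged sketch, with illustrative data and names rather than the notebooks' actual code:

  # For a fixed decision vector, the value the embedded linear expression takes
  # inside the MIO should match sklearn's own prediction up to numerical tolerance.
  import numpy as np
  from sklearn.linear_model import LinearRegression

  rng = np.random.default_rng(0)
  X = rng.uniform(0, 1, size=(50, 2))
  y = X @ np.array([2.0, -1.0]) + 0.5
  reg = LinearRegression().fit(X, y)

  x0 = np.array([0.3, 0.7])                                  # fixed decision vector
  embedded = float(reg.intercept_) + float(reg.coef_ @ x0)   # value of the MIO expression
  assert abs(embedded - reg.predict(x0.reshape(1, -1))[0]) < 1e-8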

The package currently fully supports model training and embedding for continuous outcomes across all ML methods, as demonstrated in the example notebooks. Binary classification is fully supported for learned constraints. Multi-class classification support is in development.
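
As intuition for the classification case, one standard way such a constraint can be linearized, shown here as a hedged illustration rather than the package's exact formulation, is to threshold the logistic regression score instead of the predicted probability, since the sigmoid is monotone:

  # Requiring P(y = 1 | x) >= tau under logistic regression is equivalent to the
  # linear constraint w @ x + b >= log(tau / (1 - tau)), which an MIO solver can
  # handle directly. Data, tau, and bounds below are illustrative assumptions.
  import numpy as np
  import pyomo.environ as pyo
  from sklearn.linear_model import LogisticRegression

  rng = np.random.default_rng(1)
  X = rng.uniform(0, 1, size=(200, 2))
  labels = (X[:, 0] + X[:, 1] > 1.0).astype(int)
  clf = LogisticRegression().fit(X, labels)

  tau = 0.8
  m = pyo.ConcreteModel()
  m.x = pyo.Var(range(2), bounds=(0, 1))
  m.class_constraint = pyo.Constraint(
      expr=float(clf.intercept_[0])
      + sum(float(clf.coef_[0, j]) * m.x[j] for j in range(2))
      >= float(np.log(tau / (1 - tau)))
  )
  m.obj = pyo.Objective(expr=m.x[0], sense=pyo.maximize)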

Citation

Our software can be cited as:

  @misc{OptiCL,
    author = "Donato Maragno and Holly Wiberg",
    title = "OptiCL: Mixed-integer optimization with constraint learning",
    year = 2021,
    url = "https://github.com/hwiberg/OptiCL/"
  }

Get in touch!

Our package is under active development. We welcome any questions or suggestions. Please submit an issue on GitHub, or reach us at [email protected] and [email protected].
