A Lightweight Hyperparameter Optimization Tool 🚀

Overview

The mle-hyperopt package provides a simple and intuitive API for hyperparameter optimization of your Machine Learning Experiment (MLE) pipeline. It supports real, integer & categorical search variables and single- or multi-objective optimization.

Core features include the following:

  • API Simplicity: strategy.ask(), strategy.tell() interface & space definition.
  • Strategy Diversity: Grid, random & coordinate search, SMBO & a wrapper around FAIR's nevergrad.
  • Search Space Refinement based on the top performing configs via strategy.refine(top_k=10).
  • Export of configurations for execution via e.g. python train.py --config_fname config.yaml.
  • Storage & reload search logs via strategy.save(<log_fname>), strategy.load(<log_fname>).

For a quickstart, check out the notebook blog 📖.

The API 🎮

from mle_hyperopt import RandomSearch

# Instantiate random search class
strategy = RandomSearch(real={"lrate": {"begin": 0.1,
                                        "end": 0.5,
                                        "prior": "log-uniform"}},
                        integer={"batch_size": {"begin": 32,
                                                "end": 128,
                                                "prior": "uniform"}},
                        categorical={"arch": ["mlp", "cnn"]})

# Simple ask - eval - tell API
configs = strategy.ask(5)
values = [train_network(**c) for c in configs]
strategy.tell(configs, values)
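
Here train_network is a stand-in for your own evaluation routine: any function mapping a proposed configuration to a scalar score works. A minimal sketch (the objective itself is purely illustrative):

# Hypothetical objective consuming the sampled hyperparameters
def train_network(lrate, batch_size, arch):
    # ... train a model & return a validation score (minimized by default)
    return (lrate - 0.3) ** 2 + 0.001 * batch_size + (0.1 if arch == "cnn" else 0.0)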

Implemented Search Types 🔭

| Search Type | Description | search_config |
| --- | --- | --- |
| GridSearch | Search over list of discrete values | - |
| RandomSearch | Random search over variable ranges | refine_after, refine_top_k |
| CoordinateSearch | Coordinate-wise optimization with fixed defaults | order, defaults |
| SMBOSearch | Sequential model-based optimization | base_estimator, acq_function, n_initial_points |
| NevergradSearch | Multi-objective nevergrad wrapper | optimizer, budget_size, num_workers |
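
Each strategy is parametrized via its search_config keys. As an illustration, CoordinateSearch optimizes one variable at a time while pinning the remaining ones to fixed defaults; a sketch (the concrete variables are assumptions):

from mle_hyperopt import CoordinateSearch

strategy = CoordinateSearch(real={"lrate": {"begin": 0.1, "end": 0.5, "bins": 5}},
                            categorical={"arch": ["mlp", "cnn"]},
                            search_config={"order": ["lrate", "arch"],
                                           "defaults": {"lrate": 0.1, "arch": "mlp"}})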

Variable Types & Hyperparameter Spaces 🌍

| Variable | Type | Space Specification |
| --- | --- | --- |
| real | Real-valued | Dict: begin, end, prior/bins (grid) |
| integer | Integer-valued | Dict: begin, end, prior/bins (grid) |
| categorical | Categorical | List: values to search over |
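
For grid-based search the prior entry is replaced by a bins count that discretizes the range into equally spaced values. A sketch:

from mle_hyperopt import GridSearch

# 5 lrate grid points x 2 architectures = 10 total configurations
strategy = GridSearch(real={"lrate": {"begin": 0.1, "end": 0.5, "bins": 5}},
                      categorical={"arch": ["mlp", "cnn"]})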

Installation ⏳

A PyPI installation is available via:

pip install mle-hyperopt

Alternatively, you can clone this repository and install it manually:

git clone https://github.com/mle-infrastructure/mle-hyperopt.git
cd mle-hyperopt
pip install -e .
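
You can verify the installation with a quick import check:

python -c "import mle_hyperopt"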

Further Options 🚴

Saving & Reloading Logs 🏪

# Storing & reloading of results from .json
strategy.save("search_log.json")
strategy = RandomSearch(..., reload_path="search_log.json")

# Or manually add info after class instantiation
strategy = RandomSearch(...)
strategy.load("search_log.json")
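
Per the v0.0.5 release notes below, providing a filename ending in .pkl to save stores the entire strategy (not just the log), so it can later be reloaded without re-running tell:

# Pickle the whole strategy object incl. its internal state
strategy.save("search_strategy.pkl")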

Search Decorator 🧶

from mle_hyperopt import hyperopt

@hyperopt(strategy_type="grid",
          num_search_iters=25,
          real={"x": {"begin": 0., "end": 0.5, "bins": 5},
                "y": {"begin": 0, "end": 0.5, "bins": 5}})
def circle(config):
    distance = abs(config["x"] ** 2 + config["y"] ** 2)
    return distance

strategy = circle()
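
The decorated function returns the strategy instance once all num_search_iters configurations have been evaluated, so the usual post-processing applies:

# Inspect the completed grid search
ids, configs, values = strategy.get_best(top_k=3)
strategy.print_ranking(top_k=3)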

Storing Configuration Files 📑

# Store 2 proposed configurations - eval_0.yaml, eval_1.yaml
strategy.ask(2, store=True)
# Store with explicit configuration filenames - conf_0.yaml, conf_1.yaml
strategy.ask(2, store=True, config_fnames=["conf_0.yaml", "conf_1.yaml"])
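
On the training side, a stored configuration can be consumed following the python train.py --config_fname config.yaml pattern from above. A generic sketch, assuming a flat YAML file of hyperparameters:

import argparse
import yaml

parser = argparse.ArgumentParser()
parser.add_argument("--config_fname", type=str, default="eval_0.yaml")
args = parser.parse_args()

# Load the proposed hyperparameters, e.g. {"lrate": 0.2, "arch": "mlp"}
with open(args.config_fname) as f:
    config = yaml.safe_load(f)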

Retrieving Top Performers & Visualizing Results 📉

# Get the top k best performing configurations
ids, configs, values = strategy.get_best(top_k=4)

# Plot timeseries of best performing score over search iterations
strategy.plot_best()

# Print out ranking of best performers
strategy.print_ranking(top_k=3)

Refining the Search Space of Your Strategy 🪓

# Refine the search space after 5 & 10 iterations based on top 2 configurations
strategy = RandomSearch(real={"lrate": {"begin": 0.1,
                                        "end": 0.5,
                                        "prior": "log-uniform"}},
                        integer={"batch_size": {"begin": 1,
                                                "end": 5,
                                                "prior": "uniform"}},
                        categorical={"arch": ["mlp", "cnn"]},
                        search_config={"refine_after": [5, 10],
                                       "refine_top_k": 2})

# Or do so manually using `refine` method
strategy.tell(...)
strategy.refine(top_k=2)

Note that the search space refinement is only implemented for random, SMBO and nevergrad-based search strategies.
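
Conceptually, strategy.refine shrinks each variable's range to the span covered by the current top-k configurations. An illustrative helper (not the library's internal implementation) for a real-valued variable:

def refined_range(top_configs, var_name):
    # New search bounds span the values observed among the top performers
    vals = [c[var_name] for c in top_configs]
    return {"begin": min(vals), "end": max(vals)}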

Development & Milestones for Next Release

You can run the test suite via python -m pytest -vv tests/. If you find a bug or are missing your favourite feature, feel free to contact me @RobertTLange or create an issue 🤗.

  • Robust type checking with isinstance(self.log[0]["objective"], (float, int, np.integer, np.float))
  • Add improvement method indicating if score is better than best stored one
  • Fix logging message when log is stored
  • Add save option for best plot
  • Make json serializer more robust for numpy data types
  • Make sure search space refinement works for different batch sizes
  • Add args, kwargs into decorator
  • Check why SMBO can propose same config multiple times. Add Hutter reference.
Comments
  • [FEATURE] Hyperband


    Hi! I was wondering if the Hyperband hyperparameter algorithm is something you want implemented.

    I'm willing to spend some time working on it if there's interest.

    opened by colligant 5
  • [FEATURE] Option to pickle the whole strategy


    Right now strategy.save produces a JSON with the log. Any reason you didn't opt for (or have an option of) pickling the whole strategy? Two motivations for this:

    1. Not having to re-init the strategy with all the args/kwargs
    2. Not having to loop through tell! SMBO can take quite some time to do this.
    opened by alexander-soare 4
  • Type checking strategy.log could be made more flexible?


    Yay first issue! Congrats Robert, this is a great interface. Haven't used a hyperopt library in a while and this felt so easy to pick up.


    For example https://github.com/RobertTLange/mle-hyperopt/blob/57eb806e95c854f48f8faac2b2dc182d2180d393/mle_hyperopt/search.py#L251 raises an error if my objective is numpy.float64. Also noticed https://github.com/RobertTLange/mle-hyperopt/blob/57eb806e95c854f48f8faac2b2dc182d2180d393/mle_hyperopt/search.py#L206

    Could we just have

    isinstance(strategy.log[0]['objective'], (float, int))
    

    which would cover the numpy types?

    opened by alexander-soare 4
  • Successive Halving, Hyperband, PBT


    • [x] Robust type checking with isinstance(self.log[0]["objective"], (float, int, np.integer, np.float))
    • [x] Add improvement method indicating if score is better than best stored one
    • [x] Fix logging message when log is stored
    • [x] Add save option for best plot
    • [x] Make json serializer more robust for numpy data types
    • [x] Add possibility to save as .pkl file by providing filename in .save method ending with .pkl (issue #2)
    • [x] Add args, kwargs into decorator
    • [x] Adds synchronous Successive Halving (SuccessiveHalvingSearch - issue #3)
    • [x] Adds synchronous HyperBand (HyperbandSearch - issue #3)
    • [x] Adds synchronous PBT (PBTSearch - issue #4)
    opened by RobertTLange 1
  • [Feature] Synchronous PBT


    Move PBT ask/tell functionality from mle-toolbox experimental to mle-hyperopt. Is there any literature/empirical evidence for the importance of being asynchronous?

    enhancement 
    opened by RobertTLange 1
Releases (v0.0.7)
  • v0.0.7 (Feb 20, 2022)

    Added

    • Log reloading helper for post-processing.

    Fixed

    • Bug fix in mle-search with imports of dependencies. Needed to append path.
    • Bug fix with cleaning nested dictionaries. Have to make sure not to delete entire sub-dictionary.
  • v0.0.6 (Feb 20, 2022)

    Added

    • Adds a command line interface for running a sequential search given a python script <script>.py containing a function main(config), a default configuration file <base>.yaml & a search configuration <search>.yaml. The main function should return a single scalar performance score (a minimal example script is sketched after this list). You can then start the search via:

      mle-search <script>.py --base_config <base>.yaml --search_config <search>.yaml --num_iters <search_iters>
      

      Or short via:

      mle-search <script>.py -base <base>.yaml -search <search>.yaml -iters <search_iters>
      
    • Adds doc-strings to all functionalities.
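
    A minimal <script>.py compatible with this interface might look as follows (a sketch; the quadratic objective & the lrate search variable are illustrative assumptions):

      # <script>.py: mle-search evaluates main() once per sampled configuration
      def main(config):
          """Return a single scalar performance score (minimized by default)."""
          # "lrate" is an assumed variable defined in <search>.yaml
          return (config["lrate"] - 0.3) ** 2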

    Changed

    • Make it possible to optimize parameters in nested dictionaries. Added helpers flatten_config and unflatten_config for shaping 'sub1/sub2/vname' <-> {sub1: {sub2: {vname: v}}} (see the sketch after this list).
    • Make start-up message also print fixed parameter settings.
    • Cleaned up decorator with the help of Strategies wrapper.
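
    A hedged re-implementation of the two flattening helpers, purely to illustrate the mapping (not the library's exact code):

      def flatten_config(d, sep="/", prefix=""):
          # {"sub1": {"sub2": {"vname": v}}} -> {"sub1/sub2/vname": v}
          out = {}
          for k, v in d.items():
              key = prefix + sep + k if prefix else k
              if isinstance(v, dict):
                  out.update(flatten_config(v, sep, key))
              else:
                  out[key] = v
          return out

      def unflatten_config(d, sep="/"):
          # {"sub1/sub2/vname": v} -> {"sub1": {"sub2": {"vname": v}}}
          out = {}
          for k, v in d.items():
              *parents, leaf = k.split(sep)
              node = out
              for p in parents:
                  node = node.setdefault(p, {})
              node[leaf] = v
          return out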
  • v0.0.5 (Jan 5, 2022)

    Added

    • Adds possibility to store and reload entire strategies as pkl file (as asked for in issue #2).
    • Adds improvement method indicating if score is better than best stored one
    • Adds save option for best plot
    • Adds args, kwargs into decorator
    • Adds synchronous Successive Halving (SuccessiveHalvingSearch - issue #3)
    • Adds synchronous HyperBand (HyperbandSearch - issue #3)
    • Adds synchronous PBT (PBTSearch - issue #4)
    • Adds option to save log in tell method
    • Adds small torch mlp example for SH/Hyperband/PBT w. logging/scheduler
    • Adds print welcome/update message for strategy specific info

    Changed

    • Major internal restructuring:
      • clean_data: Get rid of extra data provided in configuration file
      • tell_search: Update model of search strategy (e.g. SMBO/Nevergrad)
      • log_search: Add search specific log data to evaluation log
      • update_search: Refine search space/change active strategy etc.
    • Also allow to store checkpoint of trained models in tell method.
    • Fix logging message when log is stored
    • Make json serializer more robust for numpy data types
    • Robust type checking with isinstance(self.log[0]["objective"], (float, int, np.integer, np.float))
    • Update NB to include mle-scheduler example
    • Make PBT explore robust for integer/categorical valued hyperparams
    • Calculate total batches & their sizes for hyperband
  • v0.0.3 (Oct 24, 2021)

    • Fixes CoordinateSearch active grid search dimension updating. We have to account for the fact that previous coordinates are not evaluated again after switching the active variable.
    • Generalizes NevergradSearch to wrap around all search strategies.
    • Adds rich logging to all console print statements.
    • Updates documentation and adds text to getting_started.ipynb.
  • v0.0.2 (Oct 20, 2021)

    • Fixes import bug when using PyPi installation.
    • Enhances documentation and test coverage.
    • Adds search space refinement for nevergrad and smbo search strategies via refine_after and refine_top_k:
    strategy = SMBOSearch(
        real={"lrate": {"begin": 0.1, "end": 0.5, "prior": "uniform"}},
        integer={"batch_size": {"begin": 1, "end": 5, "prior": "uniform"}},
        categorical={"arch": ["mlp", "cnn"]},
        search_config={
            "base_estimator": "GP",
            "acq_function": "gp_hedge",
            "n_initial_points": 5,
            "refine_after": 5,
            "refine_top_k": 2,
        },
        seed_id=42,
        verbose=True,
    )
    
    • Adds additional boolean strategy option maximize_objective to maximize the objective instead of performing the default black-box minimization (sketched below).
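
    A sketch, assuming the flag is passed at construction like the other strategy options shown above:

    strategy = RandomSearch(real={"lrate": {"begin": 0.1, "end": 0.5,
                                            "prior": "uniform"}},
                            maximize_objective=True)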
  • v0.0.1 (Oct 16, 2021)

    Base API implementation:

    from mle_hyperopt import RandomSearch
    
    # Instantiate random search class
    strategy = RandomSearch(real={"lrate": {"begin": 0.1,
                                            "end": 0.5,
                                            "prior": "log-uniform"}},
                            integer={"batch_size": {"begin": 32,
                                                    "end": 128,
                                                    "prior": "uniform"}},
                            categorical={"arch": ["mlp", "cnn"]})
    
    # Simple ask - eval - tell API
    configs = strategy.ask(5)
    values = [train_network(**c) for c in configs]
    strategy.tell(configs, values)
    