AutoML Survey

An (in-progress) AutoML survey focusing on practical systems.


This project is a community effort to build and maintain an up-to-date, beginner-friendly introduction to AutoML, focused on practical systems. AutoML is a big field that continues to grow daily, so we cannot hope to comprehensively describe every interesting idea or approach available. We therefore focus on practical AutoML systems and spread outwards from there into the methodologies and theoretical concepts that power them. Our intuition is that, even though many interesting ideas are still at the research stage, the most mature and battle-tested concepts are those that have been successfully applied to build practical AutoML systems.

To this end, we are building a database of qualitative criteria for all AutoML systems we've heard of. We define an AutoML system as a software project that non-experts in machine learning can use to build effective ML pipelines on at least some common domains and tasks. It doesn't matter whether it's open-source or commercial, a library, an application with a GUI, or a cloud service. What matters is that it is intended to be used in practice, as opposed to, say, a reference implementation of a novel AutoML strategy in a Jupyter Notebook.

Features of an AutoML system

For each of these systems, we are creating a system card that describes what we consider its most relevant features, both from the scientific and the engineering points of view. System cards are written in a YAML-based format; most of the features are self-explanatory.

💡 Check data/systems/_template.yml for a starting template.

Basic information

Basic information about the system as a software product.

  • name (str): Name of the system.
  • description (str): A short (2-4 sentences) description of the system.
  • website (str): The URL of the main website or documentation.
  • open_source (bool): Whether the system is open-source.
  • institutions (list[str]): List of businesses or academic institutions that directly support the development of the system, and/or hold intellectual property over it.
  • repository (str): If it's open-source, link of a public source code repository, otherwise null.
  • license (str): If it's open-source, a license key, otherwise null.
  • references (list[str]): List of links to relevant papers, preferably DOIs or other universal handles, but links to arxiv.org or other repositories are also acceptable. Sort by relevance rather than by date.
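For illustration, the basic-information block of a card might look like the following sketch (the system name, URLs, and reference are entirely hypothetical, not taken from any real card):

```yaml
name: example-automl          # hypothetical system
description: >
  An illustrative AutoML library that searches over
  classical ML pipelines for tabular data.
website: https://example.org/example-automl
open_source: true
institutions:
  - Example University
repository: https://github.com/example/example-automl
license: MIT
references:
  - https://doi.org/10.0000/example   # placeholder handle
```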

User interfaces

Characteristics describing how the users interact with the system.

  • cli (bool): Whether the system has a command-line interface.
  • gui (bool): Whether the system has a graphical user interface.
  • http (bool): Whether the system can be used through an HTTP RESTful API.
  • library (bool): Whether the system can be linked as a code library.
  • programming_languages (list[str]): List of programming languages in which the system can be used, i.e., it is either natively coded in that language or there are maintained bindings (as opposed to using language X's standard way to call code from language Y).
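A hypothetical user-interfaces block, assuming a Python library that also ships a CLI but has no GUI or HTTP API:

```yaml
cli: true
gui: false
http: false
library: true
programming_languages:
  - python
```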

Domains

Characteristics describing the domains in which the system can be applied, which roughly correspond to the types of input data that the system can handle.

  • domains (list[str]): Domains in which the system can be deployed. Valid values are:
    • images
    • nlp
    • tabular
    • time_series
  • multi_domain (bool): Whether the system supports multiple domains in a single workflow, e.g., by allowing multiple inputs of different types simultaneously.
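For example, a system limited to tabular and time-series data, with no mixed-domain workflows, would declare (values illustrative):

```yaml
domains:
  - tabular
  - time_series
multi_domain: false
```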

Techniques

Characteristics describing the actual models and techniques used in the system, and the underlying ML libraries where those techniques are implemented.

  • techniques (list[str]): List of high-level techniques that are available in the system, broadly classified according to model families. Valid values are:
    • linear_models
    • trees
    • bayesian
    • kernel_machines
    • graphical_models
    • mlp
    • cnn
    • rnn
    • pretrained
    • ensembles
    • ad_hoc (📝 indicates non-ML algorithms, e.g., tokenizers)
  • distillation (bool): Whether the system supports model distillation
  • ml_libraries (list[str]): List of ML libraries the system is built on, i.e., where the techniques are actually implemented, if any. This field is not a fixed enumeration; some examples are:
    • scikit-learn
    • keras
    • pytorch
    • nltk
    • spacy
    • transformers
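A hypothetical techniques block for a scikit-learn-based system without model distillation could read:

```yaml
techniques:
  - linear_models
  - trees
  - ensembles
distillation: false
ml_libraries:
  - scikit-learn
```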

Tasks

Characteristics describing the types of tasks, or problems, in which the system can be applied, which roughly correspond to the types of outputs supported.

  • tasks (list[str]): List of high-level tasks the system can perform automatically. Valid values are:
    • classification
    • structured_prediction
    • structured_generation
    • unstructured_generation
    • regression
    • clustering
    • imputation
    • segmentation
    • feature_preprocessing
    • feature_selection
    • data_augmentation
    • dimensionality_reduction
    • data_preprocessing (📝 domain-agnostic data preprocessing such as normalization and scaling)
    • domain_preprocessing (📝 domain-specific preprocessing, e.g., stemming)
  • multi_task (bool): Whether the system supports multiple tasks in a single workflow, e.g., by allowing multiple output heads from the same neural network.
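Putting it together, a tasks block for a hypothetical single-task tabular system might be:

```yaml
tasks:
  - classification
  - regression
  - feature_preprocessing
multi_task: false
```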

Search strategies

Characteristics describing the optimization/search strategies used for model search and/or hyperparameter tuning.

  • search_strategies (list[str]): List of high-level search strategies that are available in the system. Valid values are:
    • random
    • evolutionary
    • gradient_descent
    • hill_climbing
    • bayesian
    • grid
    • hyperband
    • reinforcement_learning
    • constructive
    • monte_carlo
  • meta_learning (list[str]): If the system includes meta-learning, list of broadly classified techniques used. Valid values are:
    • portfolio
    • warm_start
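For instance, a system combining random and Bayesian search with warm-starting would declare (illustrative values):

```yaml
search_strategies:
  - random
  - bayesian
meta_learning:
  - warm_start
```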

Search space

Characteristics describing the search space, the types of hyperparameters that can be optimized, and the types of ML pipelines that can be represented in this space.

  • search_space: High-level characteristics of the hyperparameter search space.
    • hierarchical (bool): If there are hyperparameters that only make sense conditioned to others.
    • probabilistic (bool): If the hyperparameter space has an associated probabilistic model.
    • differentiable (bool): If the hyperparameter space can be used for gradient descent.
    • automatic (bool): If the global structure of the hyperparameter space is inferred automatically from, e.g., type annotations or the models' documentation, as opposed to explicitly defined by the developers or the user.
    • hyperparameters (list[str]): Types of hyperparameters that can be optimized. Valid values are:
      • continuous
      • discrete
      • categorical
      • conditional
    • pipelines: Types of pipelines that can be discovered by the AutoML process. Each of the following keys is boolean.
      • single (bool): A single estimator (or model in general)
      • fixed (bool): A fixed pipeline with several, but predefined, steps
      • linear (bool): A variable-length pipeline where each step feeds on the immediately previous output
      • graph (bool): An arbitrarily graph-shaped pipeline where each step can feed on any of the previous outputs
    • robust (bool): Whether the search space contains potentially invalid pipelines that are only discovered when evaluated, e.g., by allowing a dense-only estimator to precede a sparse transformer.
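A hypothetical search_space block for a system with a hierarchical, non-differentiable space that discovers up to linear pipelines:

```yaml
search_space:
  hierarchical: true
  probabilistic: false
  differentiable: false
  automatic: false
  hyperparameters:
    - continuous
    - discrete
    - categorical
    - conditional
  pipelines:
    single: true
    fixed: true
    linear: true
    graph: false
  robust: false
```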

Software architecture

Other characteristics describing general features of the system as a software product.

  • extensible (bool): Whether the system is designed to be extensible, in the sense that a user can easily add a new type of model, search algorithm, etc., without needing to modify the system's internals.
  • accessible (bool): Whether the models obtained from the AutoML process can be freely inspected by the user up to the level of individual parameters (e.g., neural network weights).
  • portable (bool): Whether the resulting models can be exported out of the AutoML system, either in a standard format or at least in a format native to the underlying ML library, such that they can be deployed on another platform without depending on the AutoML system itself.
  • computational_resources: Computational resources that, if available, can be leveraged by the system.
    • gpu (bool): Whether the system supports GPUs.
    • tpu (bool): Whether the system supports TPUs.
    • cluster (bool): Whether the system supports cluster-based parallelism.
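Finally, a hypothetical software-architecture block for an extensible, GPU-capable system might look like:

```yaml
extensible: true
accessible: true
portable: true
computational_resources:
  gpu: true
  tpu: false
  cluster: false
```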

How to contribute

If you are an author or a user of any practical AutoML system that roughly fits the previous criteria, we would love to have your contributions. You can add new systems, add information for existing ones, or fix anything that is incorrect.

To do this, either create a new file or modify an existing one in data/systems. Once done, run make check to ensure that your modifications are valid with respect to the schema defined in scripts/models.py. If you need to add new fields, or new values to any of the enumerations defined, feel free to modify the corresponding schema as well (and update both data/systems/_template.yml and this README).

Once validated, you can open a pull request.

License

Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

Owner
AutoGOAL
Democratizing Machine Learning