Overview

sklearn-porter

Transpile trained scikit-learn estimators to C, Java, JavaScript and others.
It's recommended for resource-limited embedded systems and critical applications where performance matters most.

Important

We're working hard on the first major release of sklearn-porter.
Until then, only bug fixes will be released for the stable version.

Estimators

| Classifier | Java * | JS | C | Go | PHP | Ruby |
|---|---|---|---|---|---|---|
| svm.SVC | ✓ ᴵ | | | | | |
| svm.NuSVC | ✓ ᴵ | | | | | |
| svm.LinearSVC | ✓ ᴵ | | | | | |
| tree.DecisionTreeClassifier | ✓ ᴱ, ✓ ᴵ | ✓ ᴱ | ✓ ᴱ | ✓ ᴱ | ✓ ᴱ | ✓ ᴱ |
| ensemble.RandomForestClassifier | ✓ ᴱ, ✓ ᴵ | ✓ ᴱ | ✓ ᴱ | ✓ ᴱ | ✓ ᴱ | ✓ ᴱ |
| ensemble.ExtraTreesClassifier | ✓ ᴱ, ✓ ᴵ | ✓ ᴱ | ✓ ᴱ | ✓ ᴱ | ✓ ᴱ | |
| ensemble.AdaBoostClassifier | ✓ ᴱ, ✓ ᴵ | ✓ ᴱ, ✓ ᴵ | ✓ ᴱ | | | |
| neighbors.KNeighborsClassifier | ✓ ᴵ | ✓ ᴵ | | | | |
| naive_bayes.GaussianNB | ✓ ᴵ | | | | | |
| naive_bayes.BernoulliNB | ✓ ᴵ | | | | | |
| neural_network.MLPClassifier | ✓ ᴵ | ✓ ᴵ | | | | |

| Regressor | Java * | JS | C | Go | PHP | Ruby |
|---|---|---|---|---|---|---|
| neural_network.MLPRegressor | | | | | | |

✓ = full-featured, ᴱ = with embedded model data, ᴵ = with imported model data, * = default language
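
Any estimator marked above can be transpiled by passing the corresponding language to Porter (described in detail under Usage). A minimal sketch, assuming a freshly trained ensemble.RandomForestClassifier on the iris data:

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn_porter import Porter

# Train a supported classifier:
X, y = load_iris(return_X_y=True)
clf = RandomForestClassifier(n_estimators=10, random_state=0).fit(X, y)

# Pick any language marked in the table above ('java' is the default):
porter = Porter(clf, language='js')
output = porter.export(embed_data=True)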

Installation

Stable

$ pip install sklearn-porter

Development

If you want the latest changes, you can install this package from the master branch:

$ pip uninstall -y sklearn-porter
$ pip install --no-cache-dir https://github.com/nok/sklearn-porter/zipball/master

Usage

Export

The following example demonstrates how you can transpile a decision tree estimator to Java:

from sklearn.datasets import load_iris
from sklearn import tree
from sklearn_porter import Porter

# Load data and train the classifier:
samples = load_iris()
X, y = samples.data, samples.target
clf = tree.DecisionTreeClassifier()
clf.fit(X, y)

# Export:
porter = Porter(clf, language='java')
output = porter.export(embed_data=True)
print(output)

The exported result matches the official human-readable version of the decision tree.
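
Since the output is a plain string of source code, it can be written straight to a file for compilation. A minimal sketch continuing the example above; the file name is an assumption and should match the name of the generated class:

# ...
# Write the transpiled estimator to a source file (the file name is assumed
# to match the generated class name):
with open('DecisionTreeClassifier.java', 'w') as f:
    f.write(output)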

Integrity

You should always compute and check the integrity between the original and the transpiled estimator:

# ...
porter = Porter(clf, language='java')

# Compute integrity score:
integrity = porter.integrity_score(X)
print(integrity)  # 1.0
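
The score can be read as the share of samples for which the transpiled estimator reproduces the original predictions (cf. the v0.4.0 release notes below). A rough manual equivalent, assuming porter.predict as shown in the next section:

# ...
from sklearn.metrics import accuracy_score

# Assumption: compare the original predictions with the transpiled ones.
y_original = clf.predict(X)
y_transpiled = porter.predict(X)  # see the "Prediction" section below
print(accuracy_score(y_original, y_transpiled))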

Prediction

You can compute the prediction(s) in the target programming language:

# ...
porter = Porter(clf, language='java')

# Prediction(s):
Y_java = porter.predict(X)
y_java = porter.predict(X[0])
y_java = porter.predict([1., 2., 3., 4.])
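
For a quick sanity check you can compare a single transpiled prediction with the original estimator. A minimal sketch, assuming the iris classifier from the export example (four features per sample):

# ...
# Hypothetical single-sample check:
sample = [5.1, 3.5, 1.4, 0.2]
print('transpiled:', porter.predict(sample))
print('original:  ', clf.predict([sample])[0])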

Notebooks

You can run and test all notebooks by starting a Jupyter notebook server locally:

$ make open.examples
$ make stop.examples

CLI

In general, you can use the porter command on the command line:

$ porter <pickle_file> [--to <directory>]
         [--class_name <class_name>] [--method_name <method_name>]
         [--export] [--checksum] [--data] [--pipe]
         [--c] [--java] [--js] [--go] [--php] [--ruby]
         [--version] [--help]
The following example shows how you can save a trained estimator to the pickle format:

# ...
from sklearn.externals import joblib  # with newer scikit-learn versions: import joblib

# Save the trained estimator:
joblib.dump(clf, 'estimator.pkl', compress=0)
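
Before transpiling on the command line, it can be worth verifying that the pickle loads back cleanly. A minimal sketch, assuming the same joblib import and training data as above:

# ...
# Assumption: load the estimator back and run a single prediction as a sanity check.
clf_restored = joblib.load('estimator.pkl')
print(clf_restored.predict(X[:1]))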

After that the estimator can be transpiled to JavaScript by using the following command:

$ porter estimator.pkl --js

The target programming language is changeable on the fly:

$ porter estimator.pkl --c
$ porter estimator.pkl --java
$ porter estimator.pkl --php
$ porter estimator.pkl --go
$ porter estimator.pkl --ruby

For further processing, the --pipe argument can be used to pass the result through stdout:

$ porter estimator.pkl --js --pipe > estimator.js

For instance the result can be minified by using UglifyJS:

$ porter estimator.pkl --js --pipe | uglifyjs --compress -o estimator.min.js

Development

Environment

For broader development you have to install additional required modules:

$ make install.environment  # conda environment (optional)
$ make install.requirements.development  # pip requirements

Independently of that, the following compilers and interpreters are required to cover all tests:

| Name | Version | Command |
|---|---|---|
| GCC | >=4.2 | gcc --version |
| Java | >=1.6 | java -version |
| PHP | >=5.6 | php --version |
| Ruby | >=2.4.1 | ruby --version |
| Go | >=1.7.4 | go version |
| Node.js | >=6 | node --version |

Testing

The tests cover module functions as well as matching predictions of transpiled estimators. Start all tests with:

$ make test

The test files have a specific pattern: '[Algorithm][Language]Test.py':

$ pytest tests -v -o python_files='RandomForest*Test.py'
$ pytest tests -v -o python_files='*JavaTest.py'

While you are developing new features or fixes, you can reduce the test duration by lowering the number of tested feature sets:

$ N_RANDOM_FEATURE_SETS=5 N_EXISTING_FEATURE_SETS=10 \
  pytest tests -v -o python_files='*JavaTest.py'

Quality

It's highly recommended to ensure code quality. For that, Pylint is used. Start the linter with:

$ make lint

Citation

If you use this implementation in your work, please add a reference/citation to the project. You can use the following BibTeX entry:

@unpublished{skpodamo,
  author = {Darius Morawiec},
  title = {sklearn-porter},
  note = {Transpile trained scikit-learn estimators to C, Java, JavaScript and others},
  url = {https://github.com/nok/sklearn-porter}
}

License

The module is Open Source Software released under the MIT license.

Questions?

Don't be shy and feel free to contact me on Twitter or Gitter.

Comments
  • Naive Bayes predicting the same label

    I am figuring out which machine learning approach is best to use for a pedometer project. So far I have gyro and accelerometer data of walking and not walking. When I train and test a Naive Bayes model on my machine I get nearly 70% accuracy. However, when I port it to Java, add it to my Android app and start using the implementation, it just predicts the same label. Several questions arise from this: why is this happening? Do I need to use an online learning algorithm for this scenario? Is the balance of my classes wrong?

    question 
    opened by alonsopg 9
  • Errors when porting LinearSVC model

    Sorry to bother you again, but when attempting to run python3 -m sklearn_porter -i model_notokenizer.pkl -l java I get:

    Traceback (most recent call last):
      File "/usr/lib/python3.5/runpy.py", line 184, in _run_module_as_main
        "__main__", mod_spec)
      File "/usr/lib/python3.5/runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "/usr/local/lib/python3.5/dist-packages/sklearn_porter/__main__.py", line 71, in <module>
        main()
      File "/usr/local/lib/python3.5/dist-packages/sklearn_porter/__main__.py", line 49, in main
        porter = Porter(model, language=language)
      File "/usr/local/lib/python3.5/dist-packages/sklearn_porter/Porter.py", line 65, in __init__
        raise ValueError(error)
    ValueError: The given model 'Pipeline(memory=None,
         steps=[('vect', TfidfVectorizer(analyzer='word', binary=False, decode_error='strict',
            dtype=<class 'numpy.int64'>, encoding='utf-8', input='content',
            lowercase=True, max_df=0.5, max_features=None, min_df=0.001,
            ngram_range=(1, 1), norm='l2', preprocessor=None, smooth_idf=True...ax_iter=1000,
         multi_class='ovr', penalty='l2', random_state=None, tol=0.0001,
         verbose=0))])' isn't supported.
    
    

    I'm running python 3.5.2, numpy 1.13.1, and sklearn 0.19.0.

    bug 
    opened by FakeNameSE 9
  • SVC predict_proba not supported

    When I try to export porter = Porter(model, language='java', method='predict_proba') from an SVC model, it returns "Currently the chosen model method 'predict_proba' isn't supported." Is there any solution to get the probability of the predicted class?

    enhancement question new feature 
    opened by IslamSabdelazez 8
  • Java Error: "Too many constants"

    Attempted to port a somewhat large random forest classifier (7.2 MB) for Java and compiling the Java class ended up giving a "too many constants" error, because of the number of hardcoded values to compose the tree. I circumvented this by using a simple script to separate out all (static) methods into individual classes and files. Is there a cleaner way internally to get around this problem or achieve this effect?

    bug enhancement high priority 
    opened by lichard49 8
  • Error when installing

    The command: pip install sklearn-porter produces the following:

    Collecting sklearn-porter
      Using cached sklearn-porter-0.3.2.tar.gz
        Complete output from command python setup.py egg_info:
        Traceback (most recent call last):
          File "<string>", line 1, in <module>
          File "/tmp/pip-build-2k6e9qlh/sklearn-porter/setup.py", line 6, in <module>
            from sklearn_porter import Porter
          File "/tmp/pip-build-2k6e9qlh/sklearn-porter/sklearn_porter/__init__.py", line 3, in <module>
            from Porter import Porter
        ImportError: No module named 'Porter'
        
        ----------------------------------------
    Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-2k6e9qlh/sklearn-porter/
    
    bug 
    opened by magicorlan 6
  • How to read the multi-layer perceptrons model in Java written using python

    I am using the scikit-learn multilayer perceptron wrapper in Python (https://github.com/aigamedev/scikit-neuralnetwork) to train a neural network and save it to a file. Now I want to expose it in production to predict in real time, so I was thinking of using Java for better concurrency than Python. Hence, my question is whether we can read a model trained with this wrapper in Java using this library. The code below is what I use for training the model; the last three lines are what I want to port to Java to expose it in production.

    import pickle
    import numpy as np
    import pandas as pd
    from sknn.mlp import Classifier, Layer
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score
    
    f = open("TrainLSDataset.csv")
    data = np.loadtxt(f,delimiter = ',')
    
    x = data[:, 1:]
    y = data[:, 0]
    X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.3)
    
    nn = Classifier(
        layers=[            	    
            Layer("Rectifier", units=5),
            Layer("Softmax")],
        learning_rate=0.001,
        n_iter=100)
    
    nn.fit(X_train, y_train)
    filename = 'finalized_model.txt'
    pickle.dump(nn, open(filename, 'wb')) 
    
    Below code I want to write in Java/GoLang for exposing it in production:
    
    loaded_model = pickle.load(open(filename, 'rb'))
    result = loaded_model.score(X_test, y_test)
    y_pred = loaded_model.predict(X_test)
    
    
    
    enhancement question 
    opened by palaiya 4
  • Using a nuSVC Model

    Hi, I know sklearn-porter doesn't support nu-SVCs, but those are mathematically equivalent to svm.SVC models (see http://scikit-learn.org/stable/modules/svm.html#nusvc). I was wondering if there is a workaround for this? Thank you.

    opened by knseir 4
  • Error when trying to convert RandomForestClassifier to javascript

    Hi,

    I am trying to run the command python -m sklearn_porter -i estimator.pkl --js as instructed in the GitHub readme, with a scikit-learn random forest classifier that I saved into estimator.pkl as instructed. I am using Python 3.6 from Anaconda on Ubuntu 16.04 LTS. But it fails with the following error:

    Traceback (most recent call last):
      File "/home/user/anaconda3/lib/python3.6/runpy.py", line 193, in _run_module_as_main
        "__main__", mod_spec)
      File "/home/user/anaconda3/lib/python3.6/runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "/home/user/anaconda3/lib/python3.6/site-packages/sklearn_porter/__main__.py", line 153, in <module>
        main()
      File "/home/user/anaconda3/lib/python3.6/site-packages/sklearn_porter/__main__.py", line 105, in main
        estimator = joblib.load(input_path)
      File "/home/user/anaconda3/lib/python3.6/site-packages/sklearn/externals/joblib/numpy_pickle.py", line 578, in load
        obj = _unpickle(fobj, filename, mmap_mode)
      File "/home/user/anaconda3/lib/python3.6/site-packages/sklearn/externals/joblib/numpy_pickle.py", line 508, in _unpickle
        obj = unpickler.load()
      File "/home/user/anaconda3/lib/python3.6/pickle.py", line 1050, in load
        dispatch[key[0]](self)
    KeyError: 239

    opened by aflugge 3
  • SVC (kernel=linear) JS prediction logic

    Hi @nok, the libsvm implementation seems to be using subtraction while the sklearn-porter's JavaScript predict method is using addition in the same place. I'm guessing both are the same if the intercepts are having opposite sign, but, I'm not sure. Could you please shed some light on this?

    question 
    opened by anmolshkl 3
  • Problems installing

    I am unable to install sklearn-porter for python3 through pip.

    Collecting sklearn-porter
      Using cached sklearn-porter-0.5.0.tar.gz
        Complete output from command python setup.py egg_info:
        Traceback (most recent call last):
          File "<string>", line 1, in <module>
          File "/tmp/pip-build-bdmsnkqx/sklearn-porter/setup.py", line 18, in <module>
            with open(requirements_path) as f:
        FileNotFoundError: [Errno 2] No such file or directory: '/tmp/pip-build-bdmsnkqx/sklearn-porter/requirements.txt'
        
        ----------------------------------------
    Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-bdmsnkqx/sklearn-porter/
    

    I am running python 3.6 on Arch Linux.

    bug 
    opened by FakeNameSE 3
  • FIXED bug in the implem. of AdaBoostClassifier

    Functions predict_* were named predict_1, predict_2, ..., predict_9, but when they were called the format was the following _01, _02, ... _09. This patch unifies the naming structure. I don't understand why the developer wished to name them _01 and not directly _1.

    enhancement 
    opened by mesarpe 3
  • Include jinja2 templates in package manifest

    I was having difficulties with some of the template combinations and realized that the .jinja2 files were not being included in the package. This change fixes the examples on my machine when running with a pip installed porter cli

    opened by vkhougaz-vifive 0
  • Can't use port or save functions

    Here are my versions: scikit-learn 0.21.3, Python 3.7.13.

    And here is my code. I used the same example as in the notebook.

    # 1. Load data and train a dummy classifier:
    X, y = load_iris(return_X_y=True)
    clf = tree.DecisionTreeClassifier()
    clf.fit(X, y)
    

    I tried this code:

    # 2. Port or transpile an estimator:
    est = Estimator(clf, language='java', template='combined')
    output = est.port()
    print(output)
    

    This is my error:

    ---------------------------------------------------------------------------
    TemplateNotFound                          Traceback (most recent call last)
    [<ipython-input-38-6d9e6b3013b2>](https://localhost:8080/#) in <module>()
          1 # 2. Port or transpile an estimator:
          2 est = Estimator(clf, language='java', template='combined')
    ----> 3 output = est.port()
          4 print(output)
    
    5 frames
    [/usr/local/lib/python3.7/dist-packages/jinja2/loaders.py](https://localhost:8080/#) in get_source(self, environment, template)
        305             source = self.mapping[template]
        306             return source, None, lambda: source == self.mapping.get(template)
    --> 307         raise TemplateNotFound(template)
        308 
        309     def list_templates(self):
    
    TemplateNotFound: combined.class
    

    Can someone help me ?

    opened by GMSL1706 2
  • Updated scikit learn version >= 1.0.1

    This pull request updates the version of scikit-learn used by sklearn-porter to the current stable release (>=1.0.1).

    Changes comprise:

    • updating the specified version of scikit-learn in requirements.txt
    • fixed broken imports due to changes to scikit-learn
    • commenting out (minimal) broken tests (which may be due to changes in joblib, but I'm not 100% sure)

    All tests (other than the few mentioned above) pass on my local system:

    • platform linux -- Python 3.8.10, pytest-6.2.5, py-1.11.0, pluggy-1.0.0 -- /usr/bin/python3
    • gcc (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
    • java openjdk version "17.0.1" 2021-10-19
    • javac 17.0.1
    • PHP 7.4.3 (cli) (built: Nov 25 2021 23:16:22) ( NTS )
    • ruby 2.7.0p0 (2019-12-25 revision 647ee6f091) [x86_64-linux-gnu]
    • go version go1.13.8 linux/amd64
    • node v16.13.2

    Let me know if any details are required.

    opened by AndrewJamesTurner 0
  • new branch for transfer multilabel of decision tree to cpp

    I have implemented the feature of transpiling a multilabel/multi-output decision tree classifier into C. Each output of the decision tree, i.e. each dimension of the predicted vector, is binary (0 or 1). The file "examples/estimator/classifier/DecisionTreeClassifier/c/basics.pct.multilabel.py" is the test.

    opened by HannanKan 0
  • ImportError: cannot import name 'Porter'

    I'm using porter 0.7.4 and scikit-learn 0.20.0 (via conda). When I use "from sklearn_porter import Porter" I get the error: ImportError: cannot import name 'Porter'. What could be the reason?

    question 
    opened by elimsjxr 1
Releases(v0.7.2)
  • v0.7.2(Jan 20, 2019)

    0.7.1 and 0.7.2

    These patches solve build problems in version 0.7.0.

    Fixed

    • Fix installation issues with the centralised meta information and the build process
    • Fix a missing package and add six to the requirements.txt file

    0.7.0

    This is a minor update before the next major release 1.0.0.

    Fixed

    • Fix indices format in RandomForestClassifier (#41, thanks @apasanen)

    Added

    • Add Python 3.7 with Xenial to CI for testing
    • Add PyTest for extended testing (see pytest.ini)
    • Add useful Makefile tasks with dependency handling (see Makefile):
      • install.environment to install a conda environment
      • install.requirements to install all pip requirements
      • make link to install porter (cli) to the command line
      • make open.examples to start a local jupyter server
      • make stop.examples to stop the started jupyter server
      • make test to run all unittests in tests
      • make lint to run pylint over sklearn_porter
      • make jupytext to generate the notebooks from Python sources

  • v0.6.2(Feb 3, 2018)

  • v0.6.1(Jan 3, 2018)

  • v0.6.0(Dec 4, 2017)

    Removed

    • Hide the command-line arguments --language and -l for choosing the target programming language (#fc14a3b).

    Fixed

    • Fix inaccuracies in neural_network.MLPRegressor and neural_network.MLPClassifier caused by the transpiled tanh and identity activation functions (#6696410).
    • Fix installation problems with pip and Python 3 (#2935828, issue: #17)
    • Fix dynamic class name in the MLPClassifier template (#b988f57)
  • v0.5.2(Aug 26, 2017)

  • v0.5.1(Aug 26, 2017)

  • v0.5.0(May 26, 2017)

    Changes:

    • Refactor tests and add new generic tests
    • Add breast_cancer, digits and iris dataset to all classifier tests
    • Add new environment variables N_RANDOM_FEATURE_SETS and N_EXISTING_FEATURE_SETS
  • v0.4.1(Apr 16, 2017)

  • v0.4.0(Mar 24, 2017)

    New features:

    • Prediction in the target programming language from Python
    • Computation of the accuracy between the ported and original estimator
  • v0.3.2(Jan 29, 2017)

    Changes:

    Package

    • Extend backwards compatibility (scikit-learn>=0.14.1) by refactoring the dependency handling, tests and examples

    Fixed bugs:

    AdaBoostClassifier, DecisionTreeClassifier & RandomForestClassifier

    • Fix error with negative indices (https://github.com/nok/sklearn-porter/issues/5)
  • v0.3.1(Jan 28, 2017)

  • v0.3.0(Jan 8, 2017)

    New algorithm support:

    Java

    • sklearn.neighbors.KNeighborsClassifier
    • sklearn.naive_bayes.GaussianNB
    • sklearn.naive_bayes.BernoulliNB

    JavaScript

    • sklearn.svm.SVC
    • sklearn.neighbors.KNeighborsClassifier

    PHP

    • sklearn.svm.SVC
    • sklearn.svm.LinearSVC
    • sklearn.tree.DecisionTreeClassifier

    Ruby

    • sklearn.svm.LinearSVC
  • v0.2.1(Dec 10, 2016)

  • v0.2.0(Nov 18, 2016)

    New algorithm support:

    C

    • sklearn.svm.SVC
    • sklearn.tree.DecisionTreeClassifier
    • sklearn.ensemble.RandomForestClassifier
    • sklearn.ensemble.ExtraTreesClassifier
    • sklearn.ensemble.AdaBoostClassifier
  • v0.1.0(Nov 7, 2016)

    Description:

    Release first stable version.

    New algorithm support:

    C

    • sklearn.svm.LinearSVC

    Java

    • sklearn.svm.SVC
    • sklearn.svm.LinearSVC
    • sklearn.tree.DecisionTreeClassifier
    • sklearn.ensemble.RandomForestClassifier
    • sklearn.ensemble.ExtraTreesClassifier
    • sklearn.ensemble.AdaBoostClassifier
    • sklearn.neural_network.MLPClassifier

    JavaScript

    • sklearn.svm.LinearSVC
    • sklearn.tree.DecisionTreeClassifier
    • sklearn.ensemble.RandomForestClassifier
    • sklearn.ensemble.ExtraTreesClassifier
    • sklearn.ensemble.AdaBoostClassifier
    • sklearn.neural_network.MLPClassifier

    Go

    • sklearn.svm.LinearSVC
Owner
Darius Morawiec