Python implementation of Weng-Lin Bayesian ranking, a better, license-free alternative to TrueSkill

Overview

This is a port of the amazing openskill.js package.

Installation

pip install openskill

Usage

>>> from openskill import Rating, rate
>>> a1 = Rating()
>>> a1
Rating(mu=25, sigma=8.333333333333334)
>>> a2 = Rating(mu=32.444, sigma=5.123)
>>> a2
Rating(mu=32.444, sigma=5.123)
>>> b1 = Rating(43.381, 2.421)
>>> b1
Rating(mu=43.381, sigma=2.421)
>>> b2 = Rating(mu=25.188, sigma=6.211)
>>> b2
Rating(mu=25.188, sigma=6.211)

If a1 and a2 are on a team, and they win against a team of b1 and b2, pass the teams into rate:

>>> [[x1, x2], [y1, y2]] = rate([[a1, a2], [b1, b2]])
>>> x1, x2, y1, y2
([28.669648436582808, 8.071520788025197], [33.83086971107981, 5.062772998705765], [43.071274808241974, 2.4166900452721256], [23.149503312339064, 6.1378606973362135])

You can also create Rating objects by importing create_rating:

>>> from openskill import create_rating
>>> x1 = create_rating(x1)
>>> x1
Rating(mu=28.669648436582808, sigma=8.071520788025197)

Ranks

When displaying a rating, or sorting a list of ratings, you can use ordinal:

>>> from openskill import ordinal
>>> ordinal(mu=43.07, sigma=2.42)
35.81

By default, this returns mu - 3 * sigma: a conservative estimate such that there is a 99.7% likelihood the player's true rating is higher. Because of this, in early games a player's ordinal rating will usually go up, and it can go up even when the player loses.
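
For example, here is a minimal sketch of sorting a small leaderboard by ordinal rating (the player names are hypothetical, and it assumes Rating exposes mu and sigma attributes, as its repr suggests):

>>> from openskill import Rating, ordinal
>>> leaderboard = {"alice": Rating(mu=28.67, sigma=8.07), "bob": Rating(mu=43.07, sigma=2.42)}
>>> sorted(leaderboard, key=lambda name: ordinal(mu=leaderboard[name].mu, sigma=leaderboard[name].sigma), reverse=True)
['bob', 'alice']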

Artificial Ranks

If your teams are listed in one order but your ranking is in a different order, for convenience you can specify a rank option, such as:

>>> a1 = b1 = c1 = d1 = Rating()
>>> result = [[a2], [b2], [c2], [d2]] = rate([[a1], [b1], [c1], [d1]], rank=[4, 1, 3, 2])
>>> result
[[[20.96265504062538, 8.083731307186588]], [[27.795084971874736, 8.263160757613477]], [[24.68943500312503, 8.083731307186588]], [[26.552824984374855, 8.179213704945203]]]

It's assumed that the lower ranks are better (wins), while higher ranks are worse (losses). You can provide a score instead, where lower is worse and higher is better. These can just be raw scores from the game, if you want.

Ties should have either equivalent rank or score.

>>> a1 = b1 = c1 = d1 = Rating()
>>> result = [[a2], [b2], [c2], [d2]] = rate([[a1], [b1], [c1], [d1]], score=[37, 19, 37, 42])
>>> result
[[[24.68943500312503, 8.179213704945203]], [[22.826045021875203, 8.179213704945203]], [[24.68943500312503, 8.179213704945203]], [[27.795084971874736, 8.263160757613477]]]

Choosing Models

The default model is PlackettLuce. You can import alternate models from openskill.models like so:

>>> from openskill.models import BradleyTerryFull
>>> a1 = b1 = c1 = d1 = Rating()
>>> rate([[a1], [b1], [c1], [d1]], rank=[4, 1, 3, 2], model=BradleyTerryFull)
[[[17.09430584957905, 7.5012190693964005]], [[32.90569415042095, 7.5012190693964005]], [[22.36476861652635, 7.5012190693964005]], [[27.63523138347365, 7.5012190693964005]]]

Predicting Winners

You can compare two or more teams to get the probabilities of each team winning.

>>> from openskill import predict_win
>>> a1 = Rating()
>>> a2 = Rating(mu=33.564, sigma=1.123)
>>> predictions = predict_win(teams=[[a1], [a2]])
>>> predictions
[0.45110901512761536, 0.5488909848723846]
>>> sum(predictions)
1.0
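
The library also exposes a predict_draw function (discussed in the issues below). A minimal sketch, assuming it accepts the same teams keyword as predict_win and returns the probability that the match ends in a draw:

>>> from openskill import Rating, predict_draw
>>> a1 = Rating()
>>> a2 = Rating(mu=33.564, sigma=1.123)
>>> draw_probability = predict_draw(teams=[[a1], [a2]])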

Available Models

  • BradleyTerryFull: Full Pairing for Bradley-Terry
  • BradleyTerryPart: Partial Pairing for Bradley-Terry
  • PlackettLuce: Generalized Bradley-Terry
  • ThurstoneMostellerFull: Full Pairing for Thurstone-Mosteller
  • ThurstoneMostellerPart: Partial Pairing for Thurstone-Mosteller

Which Model Do I Want?

  • Bradley-Terry rating models follow a logistic distribution over a player's skill, similar to Glicko.
  • Thurstone-Mosteller rating models follow a Gaussian distribution, similar to TrueSkill. Gaussian CDF/PDF functions differ in implementation from system to system (they're all just Chebyshev approximations anyway). This model's accuracy usually isn't as good either, but tuning it with an alternative gamma function can improve the accuracy if you really want to get into it.
  • Full pairing should produce more accurate ratings than partial pairing. However, in high-k games (like a 100+ person marathon race), the Bradley-Terry and Thurstone-Mosteller models must compute a joint probability that involves a (k-1)-dimensional integration, which is computationally expensive. Use partial pairing in that case, where players are only updated relative to their neighbors (see the sketch after this list).
  • Plackett-Luce (default) is a generalized Bradley-Terry model for k ≥ 3 teams. It scales best.
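
For instance, here is a minimal sketch of rating a large free-for-all with a partial-pairing model (the field size is made up):

>>> from openskill import Rating, rate
>>> from openskill.models import BradleyTerryPart
>>> field = [[Rating()] for _ in range(100)]      # 100 solo players
>>> finish_order = list(range(1, 101))            # 1st through 100th place
>>> results = rate(field, rank=finish_order, model=BradleyTerryPart)
>>> len(results)
100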

Implementations in other Languages

  • openskill.js (https://github.com/philihp/openskill.js), the JavaScript package this library is ported from

Comments
  • Support for partial play/weighting/player performance

    Would it be possible to add some sort of system to weight player performance, like the official trueskill module does? I'm trying to create a system that weights a player's overall performance relative to their team to get a more accurate skill rating.
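
    For reference, a minimal sketch of how the official trueskill package exposes partial play through the weights argument of its rate function (the weight values here are hypothetical):

    import trueskill

    # Two 2-player teams; the second player on team one only played half the match.
    team_one = [trueskill.Rating(), trueskill.Rating()]
    team_two = [trueskill.Rating(), trueskill.Rating()]
    rated_one, rated_two = trueskill.rate(
        [team_one, team_two],
        ranks=[0, 1],                      # team one wins (lower rank is better)
        weights=[(1.0, 0.5), (1.0, 1.0)],  # per-player partial play weights
    )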

    enhancement help wanted 
    opened by spookybear0 7
  • Support for Python 3.7+

    I've been using openskill in my local Python 3.9 environment for a while now. I've been liking it a lot, and I wanted to add it to one of my projects for use in production. I was surprised to find that the latest version on PyPI said it only supported 3.10+, which is pretty rough from a compatibility perspective. I wanted to check how much it actually depended on features in 3.10, so I switched to a 3.6 virtual env and tried running the tests. They failed, and indicated that isinstance(var, Union[T1, T2]) calls were the issue. This is fairly easy to express in older Pythons as isinstance(var, (T1, T2)). I did a search and replace on these, and then the tests passed. That was the only incompatibility I could find.

    If the maintainers are open to supporting older Pythons for a new release, that would be very helpful to me. I would also be OK with the minimum version being something like 3.7 or 3.8.

    I haven't used towncrier before, and it wasn't immediately obvious how to update the changelog, so I figure I'll wait until a maintainer greenlights this before I continue any work on details like that.

    enhancement 
    opened by Erotemic 6
  • let score difference be reflected in rating

    When you enter scores into rate(), the difference between the scores has no effect on the rating. In other words, rate([team1, team2], score=[1, 0]) == rate([team1, team2], score=[100, 0]) is true: both have exactly the same rating effect on team1 and team2.

    I don't know if it is mathematically possible, or what it would look like, but it would be great if the score difference could somehow be factored into the calculation, as it is (if your game has a score) quite an important data point for skill evaluation.
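
    A minimal sketch of the behavior being described, using the score keyword shown in the README above (the final equality reflects this issue's claim, not something verified here):

    from openskill import Rating, rate

    team1, team2 = [Rating()], [Rating()]

    # Per this issue, only the ordering of the scores matters, not the margin:
    rate([team1, team2], score=[1, 0]) == rate([team1, team2], score=[100, 0])
    # -> True (as reported in this issue)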

    enhancement 
    opened by jonathan-scholz 4
  • Are `predict_win` and `predict_draw` functions accidentally using Thurstone-Mosteller specific calculations?

    If I understand it correctly, those two functions seem to perform calculations using the equations numbered (65) in the paper. However, those equations seem to be specific to the Thurstone-Mosteller model, and as far as I can tell, the proper way to calculate probabilities for the Bradley-Terry model would be to use equations (48) and (51) (also seen as p_iq in equation (49)). Is this intended? Or am I misunderstanding either the paper or the code of these functions?

    question 
    opened by asyncth 2
  • Update link

    Updates the link to the repository.

    opened by bstummer 2
  • Unable to install on Google Colab

    I can't install openskill on Google Colab via pip. What should I do?

    Platform Information

    • Google Colab
    • Python Version: 3.7.13
    wontfix 
    opened by toshi71 2
  • Implement additive dynamics factor 'tau'.

    Ports https://github.com/philihp/openskill.js/pull/233, with docstrings and tests covering the changes.

    opened by daegontaven 2
  • Faster runtime of predict_win and predict_draw

    Description of Changes

    • [x] Adds some unit tests that I have for the Javascript library, to prevent any regressions.
    • [x] Tells itertools.permutations to give permutation pairs, rather than all full permutations
    • [x] Additionally, the length of permutation_pairs is shorter, and that simplifies the denominator to just a triangular number.

    I noticed that for teams of ABCD, the code as written was finding all permutations (ABCD, ABDC, ACBD, ACDB, ...) and then only using the first two teams from each permutation; for n >= 4 teams, this causes the same pairings to be calculated multiple times.

    This should reduce runtime from O(n^n) to O(n^2).
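
    A minimal sketch of the idea using plain itertools (illustrative only, not the library's actual code):

    import itertools

    teams = ["A", "B", "C", "D"]

    # Before: every full ordering of the teams (4! = 24), mostly redundant
    # when only the leading pair of each ordering is used.
    full_permutations = list(itertools.permutations(teams))

    # After: ordered pairs only (4 * 3 = 12) ...
    permutation_pairs = list(itertools.permutations(teams, 2))

    # ... or unordered pairs (4 * 3 / 2 = 6, a triangular number).
    pairs = list(itertools.combinations(teams, 2))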

    opened by philihp 2
  • Bump prompt-toolkit from 3.0.26 to 3.0.27

    Bumps prompt-toolkit from 3.0.26 to 3.0.27 (automated Dependabot dependency update).

    dependencies 
    opened by dependabot[bot] 2
  • Bump scipy from 1.7.3 to 1.8.0

    Bumps scipy from 1.7.3 to 1.8.0 (automated Dependabot dependency update).

    dependencies 
    opened by dependabot[bot] 2
  • Bump twine from 3.7.1 to 3.8.0

    Bumps twine from 3.7.1 to 3.8.0 (automated Dependabot dependency update).

    dependencies 
    opened by dependabot[bot] 2
  • Tournament Interface

    Creating models of tournaments is hard, since you have to parse the data using another library (depending on the format) and then pass everything into rate and predict manually. It's a lot of effort to predict the entire outcome of, say, the 2022 FIFA World Cup.

    It would be nice if there were a tournament class of some kind that allowed us to pass in rounds, which themselves contain matches, and then use an exhaustive approach to predict winners and move them along each bracket/round. Especially now that #74 has landed, it would be easier to predict whole matches and in turn tournaments.

    The classes should be customizable to allow our own logic, for instance using the munkres algorithm and other such methods.

    I don't know of any other libraries that already do this.

    enhancement help wanted 
    opened by daegontaven 0
  • Improve win predictions for 1v1 teams

    First of all, congrats and thanks for the great repo!

    In a scenario where Player A has twice the rating of Player B, the predicted win probability is 60% vs. 40%. This seems strange.

    players = [[Rating(50)], [Rating(25)]]

    predict_win(teams=players)
    # [0.6002914159316424, 0.39970858406835763]

    If I use this function implementation instead, I get 97% vs. 3%, which sounds more reasonable to me.

    Maybe the predict_win function has some flaw?

    enhancement 
    opened by nthypes 1
Releases

v4.0.0

Owner

Open Debates Project ("Debate the way it's meant to be.")