zoofs (Zoo Feature Selection)

[zoofs logo]

zoofs is a Python library for performing feature selection using a variety of nature-inspired wrapper algorithms. The algorithms range from swarm intelligence to physics-based to evolutionary. It is an easy-to-use, flexible and powerful tool for reducing your feature set.

Installation

PyPI version

Using pip

Use the package manager to install zoofs.

pip install zoofs

Available Algorithms

| Algorithm Name | Class Name | Description | References (DOI) |
|---|---|---|---|
| Particle Swarm Algorithm | ParticleSwarmOptimization | Utilizes swarm behaviour | 10.1007/978-3-319-13563-2_51 |
| Grey Wolf Algorithm | GreyWolfOptimization | Utilizes wolf hunting behaviour | 10.1016/j.neucom.2015.06.083 |
| Dragon Fly Algorithm | DragonFlyOptimization | Utilizes dragonfly swarm behaviour | 10.1016/j.knosys.2020.106131 |
| Genetic Algorithm | GeneticOptimization | Utilizes genetic mutation behaviour | 10.1109/ICDAR.2001.953980 |
| Gravitational Algorithm | GravitationalOptimization | Utilizes Newton's gravitational behaviour | 10.1109/ICASSP.2011.5946916 |

More algorithms coming soon, stay tuned!

  • Try it now: Open In Colab

Usage

Define your own objective function for optimization!

from sklearn.metrics import log_loss

# define your own objective function; it must accept five parameters
# (model, X_train, y_train, X_valid, y_valid), fit the model and return the objective value
def objective_function_topass(model, X_train, y_train, X_valid, y_valid):
    model.fit(X_train, y_train)
    P = log_loss(y_valid, model.predict_proba(X_valid))
    return P

# import an algorithm
from zoofs import ParticleSwarmOptimization
# create an object of the algorithm
algo_object = ParticleSwarmOptimization(objective_function_topass, n_iteration=20,
                                        population_size=20, minimize=True)
import lightgbm as lgb
lgb_model = lgb.LGBMClassifier()
# fit the algorithm
algo_object.fit(lgb_model, X_train, y_train, X_valid, y_valid, verbose=True)
# plot your results
algo_object.plot_history()
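
Once fit finishes, the selected feature names are available through best_feature_list (fit also returns them). A minimal follow-up sketch, assuming X_train and X_valid are the same pandas DataFrames used above:

# the selected columns from the run above (also returned by fit)
best_features = algo_object.best_feature_list
# retrain a fresh model on the reduced feature set and check validation accuracy
final_model = lgb.LGBMClassifier()
final_model.fit(X_train[best_features], y_train)
print(final_model.score(X_valid[best_features], y_valid))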
   

Suggestions for Usage

  • As the available algorithms are wrapper methods, it is better to use ML models that train quickly, e.g. LightGBM or CatBoost.
  • Choose a sufficiently large 'population_size', as this determines the extent of exploration and exploitation of the algorithm.
  • Ensure that your ML model has its hyperparameters optimized before passing it to zoofs algorithms (see the sketch below).
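
For the last point, a hedged sketch of tuning the model once before handing it to a zoofs algorithm; the search space below is purely illustrative and not part of the zoofs API:

from scipy.stats import randint
from sklearn.model_selection import RandomizedSearchCV
import lightgbm as lgb

# illustrative search space only; adapt to your own data
param_dist = {"num_leaves": randint(15, 63), "n_estimators": randint(50, 300)}
search = RandomizedSearchCV(lgb.LGBMClassifier(), param_dist, n_iter=10, cv=3)
search.fit(X_train, y_train)
lgb_model = search.best_estimator_  # pass this tuned model to the zoofs fit() call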

Objective score plot

[objective score plot across iterations]



Algorithms

Particle Swarm Algorithm



class zoofs.ParticleSwarmOptimization(objective_function,n_iteration=50,population_size=50,minimize=True,c1=2,c2=2,w=0.9)


Parameters

objective_function : user-defined function with the signature 'func(model, X_train, y_train, X_valid, y_valid)'
    The function must return a value that is to be minimized/maximized.
n_iteration : int, default=50
    Number of iterations the algorithm will run.
population_size : int, default=50
    Total size of the population.
minimize : bool, default=True
    Whether the objective value is to be minimized (True) or maximized (False).
c1 : float, default=2.0
    First acceleration coefficient of particle swarm.
c2 : float, default=2.0
    Second acceleration coefficient of particle swarm.
w : float, default=0.9
    Inertia weight parameter.

Attributes

best_feature_list : array-like
    Final best set of features.

Methods

| Method | Description |
|---|---|
| fit | Run the algorithm |
| plot_history | Plot results achieved across iterations |

fit(model,X_train,y_train,X_valid,y_valid,verbose=True)

Parameters

model : object
    Machine learning model object.
X_train : pandas.core.frame.DataFrame of shape (n_samples, n_features)
    Training input samples to be used for the machine learning model.
y_train : pandas.core.frame.DataFrame or pandas.core.series.Series of shape (n_samples)
    The target values (class labels in classification, real numbers in regression).
X_valid : pandas.core.frame.DataFrame of shape (n_samples, n_features)
    Validation input samples.
y_valid : pandas.core.frame.DataFrame or pandas.core.series.Series of shape (n_samples)
    The validation target values.
verbose : bool, default=True
    Print results for each iteration.

Returns

best_feature_list : array-like
    Final best set of features.

plot_history()

Plot results across iterations

Example

from sklearn.metrics import log_loss

# define your own objective function; it must accept five parameters
# (model, X_train, y_train, X_valid, y_valid), fit the model and return the objective value
def objective_function_topass(model, X_train, y_train, X_valid, y_valid):
    model.fit(X_train, y_train)
    P = log_loss(y_valid, model.predict_proba(X_valid))
    return P

# import an algorithm
from zoofs import ParticleSwarmOptimization
# create an object of the algorithm
algo_object = ParticleSwarmOptimization(objective_function_topass, n_iteration=20,
                                        population_size=20, minimize=True, c1=2, c2=2, w=0.9)
import lightgbm as lgb
lgb_model = lgb.LGBMClassifier()
# fit the algorithm
algo_object.fit(lgb_model, X_train, y_train, X_valid, y_valid, verbose=True)
# plot your results
algo_object.plot_history()
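
Because the objective function is entirely user-defined, it can also score each candidate feature subset with cross-validation instead of a single hold-out split. A hedged sketch (cross-validation refits the model several times per evaluation, so expect longer runtimes):

from sklearn.model_selection import cross_val_score

# objective using 3-fold cross-validation on the training data; the validation split is unused here
def cv_objective(model, X_train, y_train, X_valid, y_valid):
    scores = cross_val_score(model, X_train, y_train, cv=3, scoring="neg_log_loss")
    return -scores.mean()  # flip the sign back to a loss so that minimize=True applies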


Grey Wolf Algorithm



class zoofs.GreyWolfOptimization(objective_function,n_iteration=50,population_size=50,minimize=True)


Parameters

objective_function : user-defined function with the signature 'func(model, X_train, y_train, X_valid, y_valid)'
    The function must return a value that is to be minimized/maximized.
n_iteration : int, default=50
    Number of iterations the algorithm will run.
population_size : int, default=50
    Total size of the population.
minimize : bool, default=True
    Whether the objective value is to be minimized (True) or maximized (False).

Attributes

best_feature_list : array-like
    Final best set of features.

Methods

| Method | Description |
|---|---|
| fit | Run the algorithm |
| plot_history | Plot results achieved across iterations |

fit(model,X_train,y_train,X_valid,y_valid,method=1,verbose=True)

Parameters

model : object
    Machine learning model object.
X_train : pandas.core.frame.DataFrame of shape (n_samples, n_features)
    Training input samples to be used for the machine learning model.
y_train : pandas.core.frame.DataFrame or pandas.core.series.Series of shape (n_samples)
    The target values (class labels in classification, real numbers in regression).
X_valid : pandas.core.frame.DataFrame of shape (n_samples, n_features)
    Validation input samples.
y_valid : pandas.core.frame.DataFrame or pandas.core.series.Series of shape (n_samples)
    The validation target values.
method : {1, 2}, default=1
    Choose between the two methods of grey wolf optimization.
verbose : bool, default=True
    Print results for each iteration.

Returns

best_feature_list : array-like
    Final best set of features.

plot_history()

Plot results across iterations

Example

from sklearn.metrics import log_loss

# define your own objective function; it must accept five parameters
# (model, X_train, y_train, X_valid, y_valid), fit the model and return the objective value
def objective_function_topass(model, X_train, y_train, X_valid, y_valid):
    model.fit(X_train, y_train)
    P = log_loss(y_valid, model.predict_proba(X_valid))
    return P

# import an algorithm
from zoofs import GreyWolfOptimization
# create an object of the algorithm
algo_object = GreyWolfOptimization(objective_function_topass, n_iteration=20,
                                   population_size=20, minimize=True)
import lightgbm as lgb
lgb_model = lgb.LGBMClassifier()
# fit the algorithm
algo_object.fit(lgb_model, X_train, y_train, X_valid, y_valid, method=1, verbose=True)
# plot your results
algo_object.plot_history()
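
Since minimize is configurable, the objective can also return a score to be maximized. A hedged variant of the example above, assuming a binary-classification target scored with ROC AUC:

from sklearn.metrics import roc_auc_score

# objective returning a score that should be maximized
def auc_objective(model, X_train, y_train, X_valid, y_valid):
    model.fit(X_train, y_train)
    return roc_auc_score(y_valid, model.predict_proba(X_valid)[:, 1])

algo_object = GreyWolfOptimization(auc_objective, n_iteration=20,
                                   population_size=20, minimize=False)
algo_object.fit(lgb_model, X_train, y_train, X_valid, y_valid, method=1, verbose=True)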


Dragon Fly Algorithm



class zoofs.DragonFlyOptimization(objective_function,n_iteration=50,population_size=50,minimize=True)


Parameters

objective_function : user-defined function with the signature 'func(model, X_train, y_train, X_valid, y_valid)'
    The function must return a value that is to be minimized/maximized.
n_iteration : int, default=50
    Number of iterations the algorithm will run.
population_size : int, default=50
    Total size of the population.
minimize : bool, default=True
    Whether the objective value is to be minimized (True) or maximized (False).

Attributes

best_feature_list : array-like
    Final best set of features.

Methods

| Method | Description |
|---|---|
| fit | Run the algorithm |
| plot_history | Plot results achieved across iterations |

fit(model,X_train,y_train,X_valid,y_valid,method='sinusoidal',verbose=True)

Parameters

model : object
    Machine learning model object.
X_train : pandas.core.frame.DataFrame of shape (n_samples, n_features)
    Training input samples to be used for the machine learning model.
y_train : pandas.core.frame.DataFrame or pandas.core.series.Series of shape (n_samples)
    The target values (class labels in classification, real numbers in regression).
X_valid : pandas.core.frame.DataFrame of shape (n_samples, n_features)
    Validation input samples.
y_valid : pandas.core.frame.DataFrame or pandas.core.series.Series of shape (n_samples)
    The validation target values.
method : {'linear', 'random', 'quadratic', 'sinusoidal'}, default='sinusoidal'
    Choose among the four methods of dragonfly optimization.
verbose : bool, default=True
    Print results for each iteration.

Returns

best_feature_list : array-like
    Final best set of features.

plot_history()

Plot results across iterations

Example

from sklearn.metrics import log_loss

# define your own objective function; it must accept five parameters
# (model, X_train, y_train, X_valid, y_valid), fit the model and return the objective value
def objective_function_topass(model, X_train, y_train, X_valid, y_valid):
    model.fit(X_train, y_train)
    P = log_loss(y_valid, model.predict_proba(X_valid))
    return P

# import an algorithm
from zoofs import DragonFlyOptimization
# create an object of the algorithm
algo_object = DragonFlyOptimization(objective_function_topass, n_iteration=20,
                                    population_size=20, minimize=True)
import lightgbm as lgb
lgb_model = lgb.LGBMClassifier()
# fit the algorithm
algo_object.fit(lgb_model, X_train, y_train, X_valid, y_valid, method='sinusoidal', verbose=True)
# plot your results
algo_object.plot_history()
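
The same pattern applies to regression targets (see y_train in the fit parameters): the objective simply returns a regression loss. A minimal sketch, assuming a LightGBM regressor:

from sklearn.metrics import mean_squared_error
import lightgbm as lgb

# regression objective: lower mean squared error is better, so keep minimize=True
def mse_objective(model, X_train, y_train, X_valid, y_valid):
    model.fit(X_train, y_train)
    return mean_squared_error(y_valid, model.predict(X_valid))

reg_algo = DragonFlyOptimization(mse_objective, n_iteration=20,
                                 population_size=20, minimize=True)
reg_algo.fit(lgb.LGBMRegressor(), X_train, y_train, X_valid, y_valid,
             method='sinusoidal', verbose=True)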


Genetic Algorithm



class zoofs.GeneticOptimization(objective_function,n_iteration=20,population_size=20,selective_pressure=2,elitism=2,mutation_rate=0.05,minimize=True)


Parameters

objective_function : user-defined function with the signature 'func(model, X_train, y_train, X_valid, y_valid)'
    The function must return a value that is to be minimized/maximized.
n_iteration : int, default=20
    Number of iterations the algorithm will run.
population_size : int, default=20
    Total size of the population.
selective_pressure : int, default=2
    Measure of reproductive opportunities for each organism in the population.
elitism : int, default=2
    Number of top individuals to be considered as elites.
mutation_rate : float, default=0.05
    Rate of mutation in the population's genes.
minimize : bool, default=True
    Whether the objective value is to be minimized (True) or maximized (False).

Attributes

best_feature_list : array-like
    Final best set of features.

Methods

| Method | Description |
|---|---|
| fit | Run the algorithm |
| plot_history | Plot results achieved across iterations |

fit(model,X_train,y_train,X_valid,y_valid,verbose=True)

Parameters

model : object
    Machine learning model object.
X_train : pandas.core.frame.DataFrame of shape (n_samples, n_features)
    Training input samples to be used for the machine learning model.
y_train : pandas.core.frame.DataFrame or pandas.core.series.Series of shape (n_samples)
    The target values (class labels in classification, real numbers in regression).
X_valid : pandas.core.frame.DataFrame of shape (n_samples, n_features)
    Validation input samples.
y_valid : pandas.core.frame.DataFrame or pandas.core.series.Series of shape (n_samples)
    The validation target values.
verbose : bool, default=True
    Print results for each iteration.

Returns

best_feature_list : array-like
    Final best set of features.

plot_history()

Plot results across iterations

Example

from sklearn.metrics import log_loss

# define your own objective function; it must accept five parameters
# (model, X_train, y_train, X_valid, y_valid), fit the model and return the objective value
def objective_function_topass(model, X_train, y_train, X_valid, y_valid):
    model.fit(X_train, y_train)
    P = log_loss(y_valid, model.predict_proba(X_valid))
    return P

# import an algorithm
from zoofs import GeneticOptimization
# create an object of the algorithm
algo_object = GeneticOptimization(objective_function_topass, n_iteration=20,
                                  population_size=20, selective_pressure=2, elitism=2,
                                  mutation_rate=0.05, minimize=True)
import lightgbm as lgb
lgb_model = lgb.LGBMClassifier()
# fit the algorithm
algo_object.fit(lgb_model, X_train, y_train, X_valid, y_valid, verbose=True)
# plot your results
algo_object.plot_history()

Gravitational Algorithm



class zoofs.GravitationalOptimization(objective_function,n_iteration=50,population_size=50,g0=100,eps=0.5,minimize=True)


Parameters

objective_function : user-defined function with the signature 'func(model, X_train, y_train, X_valid, y_valid)'
    The function must return a value that is to be minimized/maximized.
n_iteration : int, default=50
    Number of iterations the algorithm will run.
population_size : int, default=50
    Total size of the population.
g0 : float, default=100
    Gravitational strength constant.
eps : float, default=0.5
    Distance constant.
minimize : bool, default=True
    Whether the objective value is to be minimized (True) or maximized (False).

Attributes

best_feature_list : array-like
    Final best set of features.

Methods

| Method | Description |
|---|---|
| fit | Run the algorithm |
| plot_history | Plot results achieved across iterations |

fit(model,X_train,y_train,X_valid,y_valid,verbose=True)

Parameters

model : object
    Machine learning model object.
X_train : pandas.core.frame.DataFrame of shape (n_samples, n_features)
    Training input samples to be used for the machine learning model.
y_train : pandas.core.frame.DataFrame or pandas.core.series.Series of shape (n_samples)
    The target values (class labels in classification, real numbers in regression).
X_valid : pandas.core.frame.DataFrame of shape (n_samples, n_features)
    Validation input samples.
y_valid : pandas.core.frame.DataFrame or pandas.core.series.Series of shape (n_samples)
    The validation target values.
verbose : bool, default=True
    Print results for each iteration.

Returns

best_feature_list : array-like
    Final best set of features.

plot_history()

Plot results across iterations

Example

from sklearn.metrics import log_loss

# define your own objective function; it must accept five parameters
# (model, X_train, y_train, X_valid, y_valid), fit the model and return the objective value
def objective_function_topass(model, X_train, y_train, X_valid, y_valid):
    model.fit(X_train, y_train)
    P = log_loss(y_valid, model.predict_proba(X_valid))
    return P

# import an algorithm
from zoofs import GravitationalOptimization
# create an object of the algorithm
algo_object = GravitationalOptimization(objective_function_topass, n_iteration=50,
                                        population_size=50, g0=100, eps=0.5, minimize=True)
import lightgbm as lgb
lgb_model = lgb.LGBMClassifier()
# fit the algorithm
algo_object.fit(lgb_model, X_train, y_train, X_valid, y_valid, verbose=True)
# plot your results
algo_object.plot_history()

Support zoofs

The development of zoofs relies completely on contributions.

Contributing

Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.

Please make sure to update tests as appropriate.

First roll out

18.08.2021

License

Apache-2.0

Comments
  • Looking for integrated Harris Hawks Optimization in zoofs

    Additional context: Harris Hawks Optimization (HHO) is a meta-heuristic optimization algorithm released in 2019 with an increasing number of applied research papers. It would be great if the team could add HHO to zoofs, which would open it up to further testing and make zoofs more popular.

    enhancement 
    opened by hanamthang 9
  • [Snyk] Security upgrade mistune from 0.8.4 to 2.0.1


    Snyk has created this PR to fix one or more vulnerable packages in the `pip` dependencies of this project.

    Changes included in this PR

    • Changes to the following files to upgrade the vulnerable dependencies to a fixed version:
      • docs/requirement.txt
    ⚠️ Warning
    notebook 5.7.13 requires terminado, which is not installed.
    nbformat 4.4.0 requires jsonschema, which is not installed.
    nbconvert 5.6.1 has requirement mistune<2,>=0.8.1, but you have mistune 2.0.2.
    mkdocs-material 8.0.1 requires mkdocs, which is not installed.
    mkdocs-material 8.0.1 requires pymdown-extensions, which is not installed.
    mkdocs-material 8.0.1 requires mkdocs-material-extensions, which is not installed.
    mkdocs-material 8.0.1 requires markdown, which is not installed.
    
    

    Vulnerabilities that will be fixed

    By pinning:

    | Severity | Issue | Upgrade | Breaking Change | Exploit Maturity |
    |:---:|:---|:---|:---|:---|
    | medium severity | Cross-site Scripting (XSS) SNYK-PYTHON-MISTUNE-2328096 | mistune: 0.8.4 -> 2.0.1 | No | No Known Exploit |

    Some vulnerabilities couldn't be fully fixed and so Snyk will still find them when the project is tested again. This may be because the vulnerability existed within more than one direct dependency, but not all of the affected dependencies could be upgraded.

    Check the changes in this PR to ensure they won't cause issues with your project.


    Note: You are seeing this because you or someone else with access to this repository has authorized Snyk to open fix PRs.

    For more information: 🧐 View latest project report

    🛠 Adjust project settings

    📚 Read more about Snyk's upgrade and patch logic

    opened by snyk-bot 2
  • [Snyk] Fix for 3 vulnerabilities


    Snyk has created this PR to fix one or more vulnerable packages in the `pip` dependencies of this project.

    Changes included in this PR

    • Changes to the following files to upgrade the vulnerable dependencies to a fixed version:
      • docs/requirement.txt
    ⚠️ Warning
    notebook 5.7.13 requires terminado, which is not installed.
    nbformat 4.4.0 requires jsonschema, which is not installed.
    mkdocs-material 8.0.1 requires mkdocs, which is not installed.
    mkdocs-material 8.0.1 requires pymdown-extensions, which is not installed.
    mkdocs-material 8.0.1 requires mkdocs-material-extensions, which is not installed.
    mkdocs-material 8.0.1 requires markdown, which is not installed.
    
    

    Vulnerabilities that will be fixed

    By pinning:

    | Severity | Priority Score (*) | Issue | Upgrade | Breaking Change | Exploit Maturity |
    |:---:|:---|:---|:---|:---|:---|
    | high severity | 624/1000 (Why? Has a fix available, CVSS 8.2) | Arbitrary Code Execution SNYK-PYTHON-IPYTHON-2348630 | ipython: 5.10.0 -> 7.16.3 | No | No Known Exploit |
    | high severity | 696/1000 (Why? Proof of Concept exploit, Has a fix available, CVSS 7.5) | Regular Expression Denial of Service (ReDoS) SNYK-PYTHON-PYGMENTS-1086606 | pygments: 2.5.2 -> 2.7.4 | No | Proof of Concept |
    | high severity | 589/1000 (Why? Has a fix available, CVSS 7.5) | Denial of Service (DoS) SNYK-PYTHON-PYGMENTS-1088505 | pygments: 2.5.2 -> 2.7.4 | No | No Known Exploit |

    (*) Note that the real score may have changed since the PR was raised.

    Some vulnerabilities couldn't be fully fixed and so Snyk will still find them when the project is tested again. This may be because the vulnerability existed within more than one direct dependency, but not all of the affected dependencies could be upgraded.

    Check the changes in this PR to ensure they won't cause issues with your project.


    Note: You are seeing this because you or someone else with access to this repository has authorized Snyk to open fix PRs.

    For more information: 🧐 View latest project report

    🛠 Adjust project settings

    📚 Read more about Snyk's upgrade and patch logic

    opened by snyk-bot 2
  • Feature importance


    Hi, thanks for the great repo. I would like to know whether we can get the ranking of the selected features after using one of your algorithms (e.g. particle swarm optimization).

    opened by veeresh-dammur 2
  • Add a Gitter chat badge to README.md


    jaswinder9051998/zoofs now has a Chat Room on Gitter

    @jaswinder9051998 has just created a chat room. You can visit it here: https://gitter.im/zooFeatureSelection/general.

    This pull-request adds this badge to your README.md:

    Gitter

    If my aim is a little off, please let me know.

    Happy chatting.

    PS: Click here if you would prefer not to receive automatic pull-requests from Gitter in future.

    opened by gitter-badger 1
  • Hyperparameter optimization for algorithms in zoofs


    Hi Jaswinder,

    Would you consider adding a function like GridSearch for hyperparameter optimization of the algorithms, such as GWO, to zoofs? For instance, the PySwarm library (https://github.com/tisimst/pyswarm) provides a grid search to find the best combination of the parameters c, w1, w2.

    For now, I have to use trial and error to test which ranges of parameters in GWO (population, iteration, method) deliver the best result for my dataset.

    Many thanks, Thang

    opened by hanamthang 1
  • Disabling verbose still prints logs


    Setting verbose=False still produces output at every iteration. This is problematic since the JSON file can get very large when the fit function runs for a prolonged period of time.

    opened by aigarspetresevics 0
  • [Snyk] Fix for 2 vulnerabilities


    This PR was automatically created by Snyk using the credentials of a real user.


    Snyk has created this PR to fix one or more vulnerable packages in the `pip` dependencies of this project.

    Changes included in this PR

    • Changes to the following files to upgrade the vulnerable dependencies to a fixed version:
      • docs/requirement.txt
    ⚠️ Warning
    mkdocs-material 8.0.1 requires pygments, which is not installed.
    mkdocs-material 8.0.1 requires mkdocs-material-extensions, which is not installed.
    mkdocs-material 8.0.1 requires markdown, which is not installed.
    mkdocs-material 8.0.1 requires pymdown-extensions, which is not installed.
    mkdocs-material 8.0.1 requires mkdocs, which is not installed.
    jupyter-nbextensions-configurator 0.6.1 requires notebook, which is not installed.
    jupyter-contrib-nbextensions 0.7.0 requires nbconvert, which is not installed.
    jupyter-contrib-nbextensions 0.7.0 requires notebook, which is not installed.
    jupyter-contrib-core 0.4.2 requires notebook, which is not installed.
    
    

    Vulnerabilities that will be fixed

    By pinning:

    | Severity | Priority Score (*) | Issue | Upgrade | Breaking Change | Exploit Maturity |
    |:---:|:---|:---|:---|:---|:---|
    | medium severity | 551/1000 (Why? Recently disclosed, Has a fix available, CVSS 5.3) | Regular Expression Denial of Service (ReDoS) SNYK-PYTHON-SETUPTOOLS-3180412 | setuptools: 39.0.1 -> 65.5.1 | No | No Known Exploit |
    | medium severity | 551/1000 (Why? Recently disclosed, Has a fix available, CVSS 5.3) | Regular Expression Denial of Service (ReDoS) SNYK-PYTHON-WHEEL-3180413 | wheel: 0.30.0 -> 0.38.0 | No | No Known Exploit |

    (*) Note that the real score may have changed since the PR was raised.

    Some vulnerabilities couldn't be fully fixed and so Snyk will still find them when the project is tested again. This may be because the vulnerability existed within more than one direct dependency, but not all of the affected dependencies could be upgraded.

    Check the changes in this PR to ensure they won't cause issues with your project.


    Note: You are seeing this because you or someone else with access to this repository has authorized Snyk to open fix PRs.

    For more information: 🧐 View latest project report

    🛠 Adjust project settings

    📚 Read more about Snyk's upgrade and patch logic


    Learn how to fix vulnerabilities with free interactive lessons:

    🦉 Regular Expression Denial of Service (ReDoS) 🦉 Regular Expression Denial of Service (ReDoS)

    opened by jaswinder9051998 2
  • [Snyk] Security upgrade setuptools from 39.0.1 to 65.5.1


    This PR was automatically created by Snyk using the credentials of a real user.


    Snyk has created this PR to fix one or more vulnerable packages in the `pip` dependencies of this project.

    Changes included in this PR

    • Changes to the following files to upgrade the vulnerable dependencies to a fixed version:
      • docs/requirement.txt
    ⚠️ Warning
    notebook 5.7.16 requires terminado, which is not installed.
    nbformat 4.4.0 requires jsonschema, which is not installed.
    nbconvert 5.6.1 has requirement mistune<2,>=0.8.1, but you have mistune 2.0.4.
    mkdocs-material 8.0.1 requires mkdocs, which is not installed.
    mkdocs-material 8.0.1 requires pymdown-extensions, which is not installed.
    mkdocs-material 8.0.1 requires mkdocs-material-extensions, which is not installed.
    mkdocs-material 8.0.1 requires markdown, which is not installed.
    jupyter-nbextensions-configurator 0.5.0 has requirement notebook>=6.0, but you have notebook 5.7.16.
    ipython 5.10.0 requires simplegeneric, which is not installed.
    
    

    Vulnerabilities that will be fixed

    By pinning:

    | Severity | Priority Score (*) | Issue | Upgrade | Breaking Change | Exploit Maturity |
    |:---:|:---|:---|:---|:---|:---|
    | low severity | 441/1000 (Why? Recently disclosed, Has a fix available, CVSS 3.1) | Regular Expression Denial of Service (ReDoS) SNYK-PYTHON-SETUPTOOLS-3113904 | setuptools: 39.0.1 -> 65.5.1 | No | No Known Exploit |

    (*) Note that the real score may have changed since the PR was raised.

    Some vulnerabilities couldn't be fully fixed and so Snyk will still find them when the project is tested again. This may be because the vulnerability existed within more than one direct dependency, but not all of the affected dependencies could be upgraded.

    Check the changes in this PR to ensure they won't cause issues with your project.


    Note: You are seeing this because you or someone else with access to this repository has authorized Snyk to open fix PRs.

    For more information: 🧐 View latest project report

    🛠 Adjust project settings

    📚 Read more about Snyk's upgrade and patch logic


    Learn how to fix vulnerabilities with free interactive lessons:

    🦉 Regular Expression Denial of Service (ReDoS)

    opened by jaswinder9051998 1
  • [Snyk] Security upgrade wheel from 0.30.0 to 0.38.0


    Snyk has created this PR to fix one or more vulnerable packages in the `pip` dependencies of this project.

    Changes included in this PR

    • Changes to the following files to upgrade the vulnerable dependencies to a fixed version:
      • docs/requirement.txt
    ⚠️ Warning
    notebook 5.7.16 requires pyzmq, which is not installed.
    notebook 5.7.16 requires terminado, which is not installed.
    nbformat 4.4.0 requires jsonschema, which is not installed.
    nbconvert 5.6.1 has requirement mistune<2,>=0.8.1, but you have mistune 2.0.4.
    mkdocs-material 8.0.1 requires mkdocs, which is not installed.
    mkdocs-material 8.0.1 requires markdown, which is not installed.
    mkdocs-material 8.0.1 requires mkdocs-material-extensions, which is not installed.
    mkdocs-material 8.0.1 requires pymdown-extensions, which is not installed.
    jupyter-nbextensions-configurator 0.5.0 has requirement notebook>=6.0, but you have notebook 5.7.16.
    jupyter-client 5.3.5 requires pyzmq, which is not installed.
    ipython 5.10.0 requires simplegeneric, which is not installed.
    
    

    Vulnerabilities that will be fixed

    By pinning:

    | Severity | Priority Score (*) | Issue | Upgrade | Breaking Change | Exploit Maturity |
    |:---:|:---|:---|:---|:---|:---|
    | medium severity | 551/1000 (Why? Recently disclosed, Has a fix available, CVSS 5.3) | Regular Expression Denial of Service (ReDoS) SNYK-PYTHON-WHEEL-3092128 | wheel: 0.30.0 -> 0.38.0 | No | No Known Exploit |

    (*) Note that the real score may have changed since the PR was raised.

    Some vulnerabilities couldn't be fully fixed and so Snyk will still find them when the project is tested again. This may be because the vulnerability existed within more than one direct dependency, but not all of the affected dependencies could be upgraded.

    Check the changes in this PR to ensure they won't cause issues with your project.


    Note: You are seeing this because you or someone else with access to this repository has authorized Snyk to open fix PRs.

    For more information: 🧐 View latest project report

    🛠 Adjust project settings

    📚 Read more about Snyk's upgrade and patch logic


    Learn how to fix vulnerabilities with free interactive lessons:

    🦉 Regular Expression Denial of Service (ReDoS)

    opened by snyk-bot 1
  • Speed-up suggestions


    It doesn't accept numpy arrays, so numba is out of the question. Any suggestions to improve speed? When you have 100+ feature columns it takes at least 2 weeks running 24/7.

    opened by aigarspetresevics 0
  • Number of features


    First of all, I want to thank you for this amazing library. I just want to ask: can the size of best_feature_list be declared before starting the algorithm?

    opened by klil21 2
Releases: v0.1.24

Owner: Jaswinder Singh (Associate Software Engineer - Data Science)