Overview


PHOTONAI is a high-level Python API for designing and optimizing machine learning pipelines.

We've created a system in which you can easily select and combine both pre-processing and learning algorithms from state-of-the-art machine learning toolboxes, and arrange them in simple or parallel pipeline data streams.

In addition, you can parametrize your training and testing workflow by choosing cross-validation schemes, performance metrics, and hyperparameter optimization strategies from a list of pre-registered options.

Importantly, you can integrate custom solutions not only into your data processing pipeline, but also into any part of the model training and evaluation process, including custom hyperparameter optimization strategies.

For a detailed description, visit our website and read the documentation, or read our paper in PLOS ONE.


Getting Started

In order to use PHOTONAI, you only need to have your favourite Python IDE ready. Then install the latest stable version via pip:

pip install photonai
# Or try out the latest features if you don't rely on a stable version, using:
pip install --upgrade git+https://github.com/wwu-mmll/[email protected]

You can set up a full-stack machine learning pipeline in a few lines of code:

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import KFold

from photonai.base import Hyperpipe, PipelineElement
from photonai.optimization import FloatRange, Categorical, IntegerRange

# DESIGN YOUR PIPELINE
my_pipe = Hyperpipe('basic_svm_pipe',  # the name of your pipeline
                    # which optimizer PHOTONAI shall use
                    optimizer='sk_opt',
                    optimizer_params={'n_configurations': 25},
                    # the performance metrics of your interest
                    metrics=['accuracy', 'precision', 'recall', 'balanced_accuracy'],
                    # after hyperparameter optimization, this metric declares the winner config
                    best_config_metric='accuracy',
                    # evaluate the best configuration on three outer cross-validation folds
                    outer_cv=KFold(n_splits=3),
                    # test each configuration on five inner folds
                    inner_cv=KFold(n_splits=5),
                    verbosity=1,
                    project_folder='./tmp/')


# first normalize all features
my_pipe.add(PipelineElement('StandardScaler'))

# then reduce the feature dimensionality using a PCA
my_pipe += PipelineElement('PCA', 
                           hyperparameters={'n_components': IntegerRange(5, 20)}, 
                           test_disabled=True)

# engage and optimize the good old SVM for Classification
my_pipe += PipelineElement('SVC', 
                           hyperparameters={'kernel': Categorical(['rbf', 'linear']),
                                            'C': FloatRange(0.5, 2)}, gamma='scale')

# train pipeline
X, y = load_breast_cancer(return_X_y=True)
my_pipe.fit(X, y)
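
Once fitted, the Hyperpipe behaves like a scikit-learn estimator. A minimal follow-up sketch (predicting on the training data here is purely illustrative):

# predict with the best configuration found during optimization
y_pred = my_pipe.predict(X)

# fold-wise performances and the winning configuration are also
# written to the project folder ('./tmp/') for later inspection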

Features

Easy access to established ML implementations

We pre-registered diverse preprocessing and learning algorithms from state-of-the-art toolboxes, e.g. scikit-learn, Keras and imbalanced-learn, so you can rapidly build custom pipelines.

Hyperparameter Optimization

With PHOTONAI you can seamlessly switch between diverse hyperparameter optimization strategies, such as (random) grid search or Bayesian optimization (scikit-optimize, SMAC3).
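
Switching strategies only means changing the optimizer arguments of the Hyperpipe. A minimal sketch, assuming the identifier 'random_grid_search' is registered under that name in your PHOTONAI version; all other arguments follow the Getting Started example:

from sklearn.model_selection import KFold
from photonai.base import Hyperpipe

# same setup as above, but optimized via random grid search instead of 'sk_opt'
pipe = Hyperpipe('optimizer_demo',
                 optimizer='random_grid_search',
                 optimizer_params={'n_configurations': 30},
                 metrics=['accuracy'],
                 best_config_metric='accuracy',
                 outer_cv=KFold(n_splits=3),
                 inner_cv=KFold(n_splits=5),
                 project_folder='./tmp/')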

Extended ML Pipeline

You can build custom sequences of processing and learning algorithms with a simple syntax. PHOTONAI offers extended pipeline functionality such as parallel sequences, custom callbacks between pipeline elements, AND- and OR-operations, as well as the possibility to flexibly position data augmentation, class balancing or learning algorithms anywhere in the pipeline, as sketched below.
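
A minimal sketch of the OR- and AND-operations, assuming the Switch and Stack elements from photonai.base and element names that are pre-registered in PHOTONAI:

from photonai.base import PipelineElement, Switch, Stack

# OR-operation: the optimizer chooses the better of two learning algorithms
estimator_switch = Switch('estimator_switch')
estimator_switch += PipelineElement('SVC')
estimator_switch += PipelineElement('RandomForestClassifier')

# AND-operation: both transformers run in parallel, their outputs are stacked
feature_stack = Stack('feature_stack')
feature_stack += PipelineElement('PCA')
feature_stack += PipelineElement('StandardScaler')

# both can be added to a Hyperpipe like any other PipelineElement:
# my_pipe += feature_stack
# my_pipe += estimator_switch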

Model Sharing

PHOTONAI provides a standardized format for sharing and loading optimized pipelines across platforms with only one line of code.
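
A minimal sketch of that workflow. Hyperpipe.load_optimum_pipe is the loading method referenced in the issues below; the save counterpart and the file name are assumptions:

from photonai.base import Hyperpipe

# after my_pipe.fit(X, y), persist the best pipeline to a single file
# (save_optimum_pipe is an assumed method name, mirroring load_optimum_pipe)
my_pipe.save_optimum_pipe('my_model.photon')

# restore it anywhere with one line and use it for prediction
loaded_model = Hyperpipe.load_optimum_pipe('my_model.photon')
y_pred = loaded_model.predict(X)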

Automation

While you concentrate on selecting appropriate processing steps, learning algorithms, hyperparameters and training parameters, PHOTONAI automates the nested cross-validated optimization and evaluation loop for any custom pipeline.

Results Visualization

PHOTONAI comes with extensive logging of all information in the training, testing and hyperparameter optimization process. In addition, optimum performances and the hyperparameter optimization progress are visualized in the PHOTONAI Explorer.

For more use cases, examples, contribution guidelines and API details, visit our website:

www.photon-ai.com

Comments
  • Question on the Over/Under sampling on validation and test splits

Hi, I defined a classification hyperpipe that includes a PipelineElement which over- or undersamples the input dataset. I would like to know whether this step is applied only to the training split of the nested cross-validation, or also to the validation and test splits. In other words, are the metrics used to select the best models and to evaluate them computed only on the "real" samples rather than the "real + fake" ones (in case of oversampling), and on all samples rather than only the selected ones (in case of undersampling)? Do you know the answer, or a document where I can search for it? I have not found it in the documentation, but maybe I searched badly. Thanks a lot! Clément

    opened by brosscle 2
  • Bump dask from 2.30.0 to 2021.10.0

    Bumps dask from 2.30.0 to 2021.10.0.

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.



    dependencies 
    opened by dependabot[bot] 2
  • FIX #44 ImbalancedDataTransformer

    Associated to #44.

ImbalancedDataTransformer did not update the method name after construction; this PR should fix that. Furthermore, the kwargs input was replaced by a config parameter, which can now set the settings specifically for each strategy. It is important that this config is not used within the hyperparameters, as it is not certain which parameter will be set first. I have added notes on this in the comments at the relevant places. Is there a better way to handle this in this context?
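
A hypothetical sketch of the config-based usage this PR describes; the exact parameter name and dictionary structure are assumptions based on the description above:

from photonai.base import PipelineElement

# the strategy is chosen via method_name; per-strategy settings go into a
# config dict instead of plain kwargs (structure assumed, not verified)
transformer = PipelineElement('ImbalancedDataTransformer',
                              method_name='SMOTE',
                              config={'SMOTE': {'k_neighbors': 5}})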

    opened by lucasplagwitz 1
  • Imbalanced Data Transform always set to 'RandomUnderSampler' method

Hi, and thanks for your work on PhotonAI! I created a hyperpipe and added a PipelineElement for imbalanced data transformations, as explained at https://wwu-mmll.github.io/photonai/examples/imbalanced_data/ . Unfortunately, when I look at my hyperpipe elements after creation, I get the element PipelineElement(method_name='RandomUnderSampler', name='ImbalancedDataTransformer') for every selected method, even when selecting an oversampling method, which is quite embarrassing... Do you have any idea how to solve this issue and effectively add the selected method as an element to my hyperpipe? I am using version 2.1.0; maybe this issue has been addressed in version 2.2.0? Thanks in advance for your help! Clément

    opened by brosscle 1
  • Test-Adaptations for Dask 2020.12.0-2021.2.0

    Move function self.create_hyperpipe() to create_hyperpipe() to enable a pickle-based serialization for newer versions of dask/distributed.

This works only for versions != 2021.3.x; the failure is probably related to dask/distributed#4645 and solved with 2021.04.0.

    Small skopt adaptations: use defaults (base_estimator: ET->GP)

    opened by lucasplagwitz 1
  • Permutation test: (1) document too large to save to MongoDB (2) _calculate_results: mongodb path set to trap-umbriel

    While running a permutation test with Photon, the following issues occurred.

Issue 1: While running the test (implemented as suggested in the documentation, https://www.photon-ai.com/documentation/permutation_test), the script is unable to write the results of the very first permutation (y=ytrue) into the MongoDB because the document size is too large. Since the PermutationTest relies on the MongoDB entries, the process finishes with code 1.

As a workaround, reducing the number of CV folds or the number of hyperparameter configurations (hence reducing the data to be stored in the MongoDB) solves the problem, and the test will continue to run without any issues. The most obvious solution to me would be to forgo saving certain data into the MongoDB (e.g. feature importances, predictions). To my understanding, this had been implemented in the past but was discontinued (commit d5ecd1b, 24.09.2019).

Issue 2: While calculating the permutation results, the server cannot be found. It seems that in the _calculate_results function (line 181), the server is set to mongodb_path="mongodb://trap-umbriel:27017/photon_results" and not to the server which has been set by the user.

photonai version: 1.1.0 (develop tree)
OS: macOS 10.15.4
n samples: 1650
features (predictors): 55

Error log, Issue 1:

File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/photonai/processing/permutation_test.py", line 90, in fit
    self.pipe.results.save()
File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymodm/base/models.py", line 476, in save
    self.to_son(), upsert=True)
File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymongo/collection.py", line 930, in replace_one
    collation=collation, session=session),
File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymongo/collection.py", line 856, in _update_retryable
    _update, session)
File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymongo/mongo_client.py", line 1491, in _retryable_write
    return self._retry_with_session(retryable, func, s, None)
File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymongo/mongo_client.py", line 1384, in _retry_with_session
    return func(session, sock_info, retryable)
File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymongo/collection.py", line 852, in _update
    retryable_write=retryable_write)
File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymongo/collection.py", line 822, in _update
    retryable_write=retryable_write).copy()
File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymongo/pool.py", line 618, in command
    self._raise_connection_failure(error)
File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymongo/pool.py", line 613, in command
    user_fields=user_fields)
File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymongo/network.py", line 143, in command
    name, size, max_bson_size + message._COMMAND_OVERHEAD)
File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymongo/message.py", line 1077, in _raise_document_too_large
    raise DocumentTooLarge("%r command document too large" % (operation,))
pymongo.errors.DocumentTooLarge: 'update' command document too large

Issue 2:

Traceback (most recent call last):
    perm_tester.fit(X, y)
File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/photonai/processing/permutation_test.py", line 143, in fit
    perm_result = self._calculate_results(self.permutation_id)
File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/photonai/processing/permutation_test.py", line 185, in _calculate_results
    mother_permutation = PermutationTest.find_reference(mongodb_path, permutation_id)
File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/photonai/processing/permutation_test.py", line 291, in find_reference
    mother_permutation = _find_mummy(permutation_id)
File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/photonai/processing/permutation_test.py", line 284, in _find_mummy
    'computation_completed': True}).order_by([('computation_start_time', DESCENDING)]).first()
File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymodm/queryset.py", line 127, in first
    return next(iter(self.limit(-1)))
File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymodm/queryset.py", line 543
    return (to_instance(doc) for doc in self._get_raw_cursor())
File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymongo/cursor.py", line 1156, in next
    if len(self.__data) or self._refresh():
File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymongo/cursor.py", line 1050, in _refresh
    self.__session = self.__collection.database.client._ensure_session()
File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymongo/mongo_client.py", line 1810, in _ensure_session
    return self.__start_session(True, causal_consistency=False)
File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymongo/mongo_client.py", line 1763, in __start_session
    server_session = self._get_server_session()
File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymongo/mongo_client.py", line 1796, in _get_server_session
    return self._topology.get_server_session()
File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymongo/topology.py", line 485, in get_server_session
    None)
File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymongo/topology.py", line 209, in _select_servers_loop
    self._error_message(selector))
pymongo.errors.ServerSelectionTimeoutError: trap-umbriel:27017: [Errno 8] nodename nor servname provided, or not known

    opened by MSchmitt-git 1
  • Could not load meta information for optimum pipe

    I am trying to use a model I was given by a colleague but keep coming up against an error finding base.PhotonBase.

    Here is the code I ran:

from photonai.base import Hyperpipe

best_model_file = 'mymodel.photon'
my_model = Hyperpipe.load_optimum_pipe(best_model_file)

where mymodel.photon sits in a folder that also includes a "photon_best_model" folder with __optimum_pipe_0_SimpleImputer.pkl, _optimum_pipe_1_StandardScaler.pkl, _optimum_pipe_2_Ridge.pkl, and optimum_pipe_blueprint.pkl. I requested these files from my colleague because the errors I was getting indicated they needed to be there to load the model.

Running this I get the following error:

YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
    defaults = yaml.load(f)
/Users/lee_jollans/anaconda3/lib/python3.7/site-packages/sklearn/externals/joblib/__init__.py:15: FutureWarning: sklearn.externals.joblib is deprecated in 0.21 and will be removed in 0.23. Please import this functionality directly from joblib, which can be installed with: pip install joblib. If this warning is raised when loading pickled models, you may need to re-serialize those models with scikit-learn 0.21+.
    warnings.warn(msg, category=FutureWarning)
Could not load meta information for optimum pipe
Traceback (most recent call last):
  File "tryphoton.py", line 10, in <module>
    my_model = Hyperpipe.load_optimum_pipe(best_model_file)
  File "/Users/lee_jollans/anaconda3/lib/python3.7/site-packages/photonai/base/hyperpipe.py", line 1105, in load_optimum_pipe
    return PhotonModelPersistor.load_optimum_pipe(file, password)
  File "/Users/lee_jollans/anaconda3/lib/python3.7/site-packages/photonai/base/hyperpipe.py", line 1444, in load_optimum_pipe
    element_list = PhotonModelPersistor.load_elements(folder=load_folder)
  File "/Users/lee_jollans/anaconda3/lib/python3.7/site-packages/photonai/base/hyperpipe.py", line 1410, in load_elements
    loaded_pipeline_element = joblib.load(os.path.join(folder, element_info['filename'] + '.pkl'))
  File "/Users/lee_jollans/anaconda3/lib/python3.7/site-packages/joblib/numpy_pickle.py", line 605, in load
    obj = _unpickle(fobj, filename, mmap_mode)
  File "/Users/lee_jollans/anaconda3/lib/python3.7/site-packages/joblib/numpy_pickle.py", line 529, in _unpickle
    obj = unpickler.load()
  File "/Users/lee_jollans/anaconda3/lib/python3.7/pickle.py", line 1085, in load
    dispatch[key[0]](self)
  File "/Users/lee_jollans/anaconda3/lib/python3.7/pickle.py", line 1373, in load_global
    klass = self.find_class(module, name)
  File "/Users/lee_jollans/anaconda3/lib/python3.7/pickle.py", line 1423, in find_class
    __import__(module, level=0)
ModuleNotFoundError: No module named 'photonai.base.PhotonBase'

    My colleague had originally noted that Hyperpipe is imported using from photonai.base.PhotonBase import Hyperpipe, which also did not work because I got the PhotonBase error.

    Note, I am running macOS Mojave and python 3.7.3

    Any help is greatly appreciated!

    opened by ljollans 8
  • Suggestions in fabolas.py

Hi, I'm a student learning about Bayesian optimization recently. I'm trying to make fabolas compatible with George 0.3.1, and I think I did it. I hope I can give you some suggestions:

1. I suggest using a stationary kernel (e.g. an SE kernel) instead of a non-stationary kernel (the LinearKernel in fabolas.py), because when you run get_incumbent() (in fabolas.py), the environment variables are projected to 1 and then changed to 0 by _quadratic_bf(). Then predict() is run in get_incumbent(), and its parameters will be a matrix with env=0 (e.g. (a1,b1,0), (a2,b2,0), (a3,b3,0)...). If you use LinearKernel, the variance returned by predict() will be all zeros and the mean will be a vector of identical elements. As a result, epmgp.py cannot work.

2. The parameter of EnvPrior(), "n_lr=degree+1", can be changed to "n_lr=len(env_kernel)".

    opened by cjfcsjt 1
Releases (2.2.1)
  • 2.2.1(Aug 3, 2022)

  • 2.2.0(Nov 5, 2021)

  • 2.1.0(Mar 5, 2021)

    Documentation: https://wwu-mmll.github.io/photonai/

Changelog

Features:

• enable integration of custom metrics
• integrate automatic generation of learning curves
• integrate nevergrad hyperparameter optimization strategy
• add a new hyperparameter optimizer designed to compare different (learning) algorithms in a Switch (OR-element)
• add functionality to automatically find, analyze and compare the best config for each estimator (Switch) per outer fold
• add a scorer method to Hyperpipe that scores with best_config_metric, so the Hyperpipe object can be used with scikit-learn functions
• integrate sklearn permutation feature importances into the workflow
• disable usage of test samples with the parameter use_test_set in Hyperpipe
• remove the need to import the OutputSettings class to declare the project_folder; this moved to the Hyperpipe constructor
• add inverse_transform methods to several PHOTONAI algorithm implementations

    Development:

    • integrate documentation into github repo based on mkdocs and material theme: https://wwu-mmll.github.io/photonai/
• switch continuous integration protocol to GitHub Actions: https://github.com/wwu-mmll/photonai/actions
    • code clean ups
  • 2.0.0(Jul 14, 2020)

• removed the Investigator; instead we offer the Explorer, a JavaScript web application to visualize and analyze the results
• moved the photon.neuro module into its own package called photonai_neuro
• updated the repository structure, moved tests and examples to the root directory
• consistently named the repository photonai everywhere
• included a continuous integration pipeline with Travis CI
  • 0.4.0(Feb 21, 2019)

    Starting with this release, you should be able to install PHOTON via pip. Issues with installing the requirements have been resolved. This release includes the PHOTON Investigator to visualize the results.

Owner
Medical Machine Learning Lab - University of Münster