Overview


PHOTONAI is a high-level Python API for designing and optimizing machine learning pipelines.

We've created a system in which you can easily select and combine both pre-processing and learning algorithms from state-of-the-art machine learning toolboxes, and arrange them in simple or parallel pipeline data streams.

In addition, you can parametrize your training and testing workflow by choosing cross-validation schemes, performance metrics, and hyperparameter optimization strategies from a list of pre-registered options.

Importantly, you can integrate custom solutions not only into your data processing pipeline, but also into any part of the model training and evaluation process, including custom hyperparameter optimization strategies.
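For example, a custom transformer only needs the usual scikit-learn fit/transform interface; the sketch below (the class, file name, and folder are hypothetical, and PhotonRegistry's register signature is assumed as described in the PHOTONAI documentation) registers it once so it can be used like any pre-registered element:

# custom_transformer.py -- a hypothetical scikit-learn-style element
from sklearn.base import BaseEstimator, TransformerMixin

class MyCustomTransformer(BaseEstimator, TransformerMixin):
    # purely illustrative: center each feature
    def fit(self, X, y=None):
        self.means_ = X.mean(axis=0)
        return self

    def transform(self, X):
        return X - self.means_

# register it once; afterwards PipelineElement('MyCustomTransformer')
# works like any built-in element
from photonai.base import PhotonRegistry

registry = PhotonRegistry(custom_elements_folder='./custom_elements')
registry.register(photon_name='MyCustomTransformer',
                  class_str='custom_transformer.MyCustomTransformer',
                  element_type='Transformer')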

For a detailed description, visit our website and read the documentation, or read our paper in PLOS ONE.


Getting Started

To use PHOTONAI, you only need your favourite Python IDE ready. Then install the latest stable version simply via pip:

pip install photonai
# Or try out the latest features, if you don't rely on a stable version, via:
pip install --upgrade git+https://github.com/wwu-mmll/photonai.git@develop

You can set up a full-stack machine learning pipeline in a few lines of code:

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import KFold

from photonai.base import Hyperpipe, PipelineElement
from photonai.optimization import FloatRange, Categorical, IntegerRange

# DESIGN YOUR PIPELINE
my_pipe = Hyperpipe('basic_svm_pipe',  # the name of your pipeline
                    # which optimizer PHOTONAI shall use
                    optimizer='sk_opt',
                    optimizer_params={'n_configurations': 25},
                    # the performance metrics of your interest
                    metrics=['accuracy', 'precision', 'recall', 'balanced_accuracy'],
                    # after hyperparameter optimization, this metric declares the winner config
                    best_config_metric='accuracy',
                    # repeat hyperparameter optimization three times
                    outer_cv=KFold(n_splits=3),
                    # test each configuration five times
                    inner_cv=KFold(n_splits=5),
                    verbosity=1,
                    project_folder='./tmp/')


# first normalize all features
my_pipe.add(PipelineElement('StandardScaler'))

# then reduce dimensionality with a PCA
my_pipe += PipelineElement('PCA', 
                           hyperparameters={'n_components': IntegerRange(5, 20)}, 
                           test_disabled=True)

# engage and optimize the good old SVM for classification
my_pipe += PipelineElement('SVC', 
                           hyperparameters={'kernel': Categorical(['rbf', 'linear']),
                                            'C': FloatRange(0.5, 2)}, gamma='scale')

# train pipeline
X, y = load_breast_cancer(return_X_y=True)
my_pipe.fit(X, y)
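Once fitted, the Hyperpipe behaves like a scikit-learn estimator, so the winning configuration can be used directly; a minimal sketch (in practice you would of course predict on held-out data rather than the training matrix):

# the fitted Hyperpipe delegates to the optimum pipeline
predictions = my_pipe.predict(X)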

Features

Easy access to established ML implementations

We have pre-registered diverse preprocessing and learning algorithms from state-of-the-art toolboxes, e.g. scikit-learn, Keras, and imbalanced-learn, so you can rapidly build custom pipelines.

Hyperparameter Optimization

With PHOTONAI you can seamlessly switch between diverse hyperparameter optimization strategies, such as (random) grid search or Bayesian optimization (scikit-optimize, SMAC3).
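Switching strategies only means changing the optimizer arguments of the Hyperpipe constructor; a minimal sketch, reusing the setup from the example above (the number of configurations is illustrative):

from sklearn.model_selection import KFold
from photonai.base import Hyperpipe

# identical pipeline definition as before; only the optimizer changes
my_pipe = Hyperpipe('random_search_pipe',
                    optimizer='random_grid_search',  # instead of 'sk_opt'
                    optimizer_params={'n_configurations': 30},
                    metrics=['accuracy'],
                    best_config_metric='accuracy',
                    outer_cv=KFold(n_splits=3),
                    inner_cv=KFold(n_splits=5),
                    project_folder='./tmp/')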

Extended ML Pipeline

You can build custom sequences of processing and learning algorithms with a simple syntax. PHOTONAI offers extended pipeline functionality such as parallel sequences, custom callbacks between pipeline elements, AND- and OR-operations, as well as the possibility to flexibly position data augmentation, class balancing, or learning algorithms anywhere in the pipeline.
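As a hedged sketch of the AND/OR syntax (Switch and Stack are PHOTONAI's OR- and AND-elements; the element names below assume the corresponding scikit-learn algorithms are pre-registered):

from photonai.base import PipelineElement, Switch, Stack

# OR-operation: let the optimizer choose between two estimators
estimator_switch = Switch('estimator_switch')
estimator_switch += PipelineElement('SVC')
estimator_switch += PipelineElement('RandomForestClassifier')

# AND-operation: run two transformations in parallel and
# concatenate their outputs feature-wise
feature_stack = Stack('feature_stack')
feature_stack += PipelineElement('PCA')
feature_stack += PipelineElement('SelectKBest')

# both behave like ordinary pipeline elements:
# my_pipe += feature_stack
# my_pipe += estimator_switch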

Model Sharing

PHOTONAI provides a standardized format for sharing and loading optimized pipelines across platforms with only one line of code.
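A minimal sketch of that round trip (Hyperpipe.load_optimum_pipe also appears in an issue quoted below; save_optimum_pipe is assumed here as its counterpart, and X_new stands for your own data):

# persist the best configuration of a fitted pipeline to a single file
my_pipe.save_optimum_pipe('./tmp/best_model.photon')

# ... and restore it elsewhere with one line
from photonai.base import Hyperpipe
loaded_model = Hyperpipe.load_optimum_pipe('./tmp/best_model.photon')
predictions = loaded_model.predict(X_new)  # X_new: hypothetical new data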

Automation

While you concentrate on selecting appropriate processing steps, learning algorithms, hyperparameters and training parameters, PHOTONAI automates the nested cross-validated optimization and evaluation loop for any custom pipeline.

Results Visualization

PHOTONAI comes with extensive logging of all information in the training, testing and hyperparameter optimization process. In addition, optimum performances and the hyperparameter optimization progress are visualized in the PHOTONAI Explorer.

For more use cases, examples, contribution guidelines, and API details, visit our website:

www.photon-ai.com

Comments
  • Question on the Over/Under sampling on validation and test splits


    Hi, I defined a classification hyperpipe that involves a PipelineElement that oversamples or undersamples the input dataset. I would like to know whether this step is applied only to the training split of the nested cross-validation, or also to the validation and test splits. In other words, are the metrics used to select and evaluate the best models computed only on the "real" samples and not on the "real + fake" ones (in the case of oversampling), and on all samples rather than only the retained ones (in the case of undersampling)? Do you know the answer, or a document where I can look it up? I have not found it in the documentation, but maybe I searched badly. Thanks a lot! Clément

    opened by brosscle 2
  • Bump dask from 2.30.0 to 2021.10.0


    Bumps dask from 2.30.0 to 2021.10.0.


    dependencies 
    opened by dependabot[bot] 2
  • FIX #44 ImbalancedDataTransformer


    Associated to #44.

    ImbalancedDataTransformer did not subsequently adjust the method name; this should be fixed by this PR. Furthermore, the kwargs input was replaced by a config parameter, which can now set the settings for each strategy specifically. It is important that this is not used within the hyperparameters, since it is not certain which parameter will be set first. I have added this to the comments in the relevant places. Is there a better way to do this in this context?

    opened by lucasplagwitz 1
  • Imbalanced Data Transform always set to 'RandomUnderSampler' method


    Hi, and thanks for your work on PhotonAI! I created a hyperpipe and added a PipelineElement for imbalanced data transformations, as explained at https://wwu-mmll.github.io/photonai/examples/imbalanced_data/. Unfortunately, when I look at my hyperpipe elements after creation, I get the element PipelineElement(method_name='RandomUnderSampler', name='ImbalancedDataTransformer') for every selected method, even when selecting an oversampling method, which is quite embarrassing... Do you have any idea how to solve this issue and effectively add the selected method as an element to my hyperpipe? I am using version 2.1.0; maybe this issue has been addressed in version 2.2.0? Thanks in advance for your help! Clément

    opened by brosscle 1
  • Test-Adaptations for Dask 2020.12.0-2021.2.0


    Moves the method self.create_hyperpipe() to a module-level create_hyperpipe() to enable pickle-based serialization for newer versions of dask/distributed.

    Works only for versions != 2021.3.x; this is probably related to dask/distributed#4645 and should be solved with 2021.04.0.

    Small skopt adaptations: use defaults (base_estimator: ET -> GP).

    opened by lucasplagwitz 1
  • Permutation test: (1) document too large to save to MongoDB (2) _calculate_results: mongodb path set to trap-umbriel


    While running a permutation test with Photon, the following issues occurred.

    Issue 1: While running the test (implemented as suggested in the docs, https://www.photon-ai.com/documentation/permutation_test), the script is unable to write the results of the very first permutation (y=ytrue) into the MongoDB, since the document size is too large. Since the PermutationTest relies on the MongoDB entries, the process finishes with code 1.

    As a workaround, reducing the number of CV folds or the number of hyperparameter configurations (hence reducing the data to be stored in the MongoDB) solves the problem, and the test then runs without any issues. The most obvious solution to me would be to forgo saving certain data into the MongoDB (e.g. feature importances, predictions). To my understanding, this had been implemented in the past but was discontinued (commit d5ecd1b, 24.09.2019).

    Issue 2: While calculating the permutation results, the server cannot be found. It seems that in the _calculate_results function (line 181), the server is set to mongodb_path="mongodb://trap-umbriel:27017/photon_results" and not to the server that has been set by the user.

    photonai version: 1.1.0 (develop tree)
    OS: macOS 10.15.4
    n samples: 1650
    features (predictors): 55

    Error log (Issue 1):

    File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/photonai/processing/permutation_test.py", line 90, in fit
        self.pipe.results.save()
    File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymodm/base/models.py", line 476, in save
        self.to_son(), upsert=True)
    File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymongo/collection.py", line 930, in replace_one
        collation=collation, session=session),
    File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymongo/collection.py", line 856, in _update_retryable
        _update, session)
    File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymongo/mongo_client.py", line 1491, in _retryable_write
        return self._retry_with_session(retryable, func, s, None)
    File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymongo/mongo_client.py", line 1384, in _retry_with_session
        return func(session, sock_info, retryable)
    File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymongo/collection.py", line 852, in _update
        retryable_write=retryable_write)
    File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymongo/collection.py", line 822, in _update
        retryable_write=retryable_write).copy()
    File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymongo/pool.py", line 618, in command
        self._raise_connection_failure(error)
    File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymongo/pool.py", line 613, in command
        user_fields=user_fields)
    File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymongo/network.py", line 143, in command
        name, size, max_bson_size + message._COMMAND_OVERHEAD)
    File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymongo/message.py", line 1077, in _raise_document_too_large
        raise DocumentTooLarge("%r command document too large" % (operation,))
    pymongo.errors.DocumentTooLarge: 'update' command document too large

    Issue 2:

    Traceback (most recent call last):
        perm_tester.fit(X, y)
    File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/photonai/processing/permutation_test.py", line 143, in fit
        perm_result = self._calculate_results(self.permutation_id)
    File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/photonai/processing/permutation_test.py", line 185, in _calculate_results
        mother_permutation = PermutationTest.find_reference(mongodb_path, permutation_id)
    File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/photonai/processing/permutation_test.py", line 291, in find_reference
        mother_permutation = _find_mummy(permutation_id)
    File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/photonai/processing/permutation_test.py", line 284, in _find_mummy
        'computation_completed': True}).order_by([('computation_start_time', DESCENDING)]).first()
    File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymodm/queryset.py", line 127, in first
        return next(iter(self.limit(-1)))
    File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymodm/queryset.py", line 543, in
        return (to_instance(doc) for doc in self._get_raw_cursor())
    File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymongo/cursor.py", line 1156, in next
        if len(self.__data) or self._refresh():
    File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymongo/cursor.py", line 1050, in _refresh
        self.__session = self.__collection.database.client._ensure_session()
    File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymongo/mongo_client.py", line 1810, in _ensure_session
        return self.__start_session(True, causal_consistency=False)
    File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymongo/mongo_client.py", line 1763, in __start_session
        server_session = self._get_server_session()
    File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymongo/mongo_client.py", line 1796, in _get_server_session
        return self._topology.get_server_session()
    File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymongo/topology.py", line 485, in get_server_session
        None)
    File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymongo/topology.py", line 209, in _select_servers_loop
        self._error_message(selector))
    pymongo.errors.ServerSelectionTimeoutError: trap-umbriel:27017: [Errno 8] nodename nor servname provided, or not known

    opened by MSchmitt-git 1
  • Could not load meta information for optimum pipe


    I am trying to use a model I was given by a colleague but keep coming up against an error finding base.PhotonBase.

    Here is the code I ran:

    from photonai.base import Hyperpipe

    best_model_file = 'mymodel.photon'
    my_model = Hyperpipe.load_optimum_pipe(best_model_file)

    where mymodel.photon sits in a folder that also includes a "photon_best_model" folder with __optimum_pipe_0_SimpleImputer.pkl, _optimum_pipe_1_StandardScaler.pkl, _optimum_pipe_2_Ridge.pkl, and optimum_pipe_blueprint.pkl. I requested these files from my colleague because the errors I was getting indicated they needed to be there to load the model.

    Running this, I get the following error:

    YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
        defaults = yaml.load(f)
    /Users/lee_jollans/anaconda3/lib/python3.7/site-packages/sklearn/externals/joblib/__init__.py:15: FutureWarning: sklearn.externals.joblib is deprecated in 0.21 and will be removed in 0.23. Please import this functionality directly from joblib, which can be installed with: pip install joblib. If this warning is raised when loading pickled models, you may need to re-serialize those models with scikit-learn 0.21+.
        warnings.warn(msg, category=FutureWarning)
    Could not load meta information for optimum pipe
    Traceback (most recent call last):
        File "tryphoton.py", line 10, in <module>
            my_model = Hyperpipe.load_optimum_pipe(best_model_file)
        File "/Users/lee_jollans/anaconda3/lib/python3.7/site-packages/photonai/base/hyperpipe.py", line 1105, in load_optimum_pipe
            return PhotonModelPersistor.load_optimum_pipe(file, password)
        File "/Users/lee_jollans/anaconda3/lib/python3.7/site-packages/photonai/base/hyperpipe.py", line 1444, in load_optimum_pipe
            element_list = PhotonModelPersistor.load_elements(folder=load_folder)
        File "/Users/lee_jollans/anaconda3/lib/python3.7/site-packages/photonai/base/hyperpipe.py", line 1410, in load_elements
            loaded_pipeline_element = joblib.load(os.path.join(folder, element_info['filename'] + '.pkl'))
        File "/Users/lee_jollans/anaconda3/lib/python3.7/site-packages/joblib/numpy_pickle.py", line 605, in load
            obj = _unpickle(fobj, filename, mmap_mode)
        File "/Users/lee_jollans/anaconda3/lib/python3.7/site-packages/joblib/numpy_pickle.py", line 529, in _unpickle
            obj = unpickler.load()
        File "/Users/lee_jollans/anaconda3/lib/python3.7/pickle.py", line 1085, in load
            dispatch[key[0]](self)
        File "/Users/lee_jollans/anaconda3/lib/python3.7/pickle.py", line 1373, in load_global
            klass = self.find_class(module, name)
        File "/Users/lee_jollans/anaconda3/lib/python3.7/pickle.py", line 1423, in find_class
            __import__(module, level=0)
    ModuleNotFoundError: No module named 'photonai.base.PhotonBase'

    My colleague had originally noted that Hyperpipe is imported using from photonai.base.PhotonBase import Hyperpipe, which also did not work because I got the PhotonBase error.

    Note: I am running macOS Mojave and Python 3.7.3.

    Any help is greatly appreciated!

    opened by ljollans 8
  • Suggestions in fabolas.py


    Hi, I'm a student and have recently been learning about Bayesian optimization. I'm trying to make FABOLAS compatible with george 0.3.1, and I think I did it. I hope I can give you some suggestions:

    1. I suggest using a stationary kernel (e.g. an SE kernel) instead of a non-stationary one (LinearKernel in fabolas.py): when you run get_incumbent() (in fabolas.py), the environment variables are projected to 1 and then changed to 0 by _quadratic_bf(). predict() is then called inside get_incumbent() on a matrix with env=0 (e.g. (a1, b1, 0), (a2, b2, 0), (a3, b3, 0), ...). With LinearKernel, the variance returned by predict() is all zeros and the mean is a vector of identical elements; as a result, epmgp.py cannot work.

    2. The EnvPrior() parameter "n_lr=degree+1" could be changed to "n_lr=len(env_kernel)".

    opened by cjfcsjt 1
Releases (2.2.1)
  • 2.2.1 (Aug 3, 2022)

  • 2.2.0 (Nov 5, 2021)

  • 2.1.0 (Mar 5, 2021)

    Documentation: https://wwu-mmll.github.io/photonai/

    Changelog

    Features:

    • enable integration of custom metrics
    • integrate automatic generation of learning curves
    • integrate nevergrad hyperparameter optimization strategy
    • add new hyperparameter optimizer designed to compare different (learning) algorithms in a Switch (OR-element)
    • add functionality to automatically find, analyze and compare the best config for each estimator (Switch) per outer fold
    • added a scorer method to Hyperpipe that scores with best_config_metric, so that the Hyperpipe object can be used with scikit-learn functions
    • integrated sklearn permutation feature importances into the workflow
    • disable usage of test samples with the parameter use_test_set in hyperpipe
    • removed the need to import the OutputSettings class to declare the project_folder; it moved to the Hyperpipe constructor
    • added inverse_transform methods to several PHOTONAI algorithm implementations

    Development:

    • integrate documentation into github repo based on mkdocs and material theme: https://wwu-mmll.github.io/photonai/
    • switch continuous integration to GitHub Actions: https://github.com/wwu-mmll/photonai/actions
    • code clean ups
  • 2.0.0 (Jul 14, 2020)

    • removed investigator, instead we offer the Explorer, a Javascript web application to visualize and analyze the results
    • moved the photon.neuro module into its own package, photonai_neuro
    • updated repository structure, moved tests and examples to root directory
    • consistently named the repository photonai everywhere
    • included a continuous integration pipeline with Travis CI
  • 0.4.0 (Feb 21, 2019)

    Starting with this release, you should be able to install PHOTON via pip. Issues with installing the requirements have been resolved. This release includes the PHOTON Investigator to visualize the results.

Owner
Medical Machine Learning Lab - University of Münster