A notebook and code to synthesize complex, high-dimensional datasets using Gretel APIs.

Overview

Gretel Trainer

This code is designed to help users successfully train synthetic models on complex datasets with high row and column counts. It works by intelligently dividing a dataset into a set of smaller datasets of correlated columns that can be trained in parallel and then joined back together.

Get Started

Running the notebook

  1. Launch the Notebook in Google Colab or your preferred environment.
  2. Add your dataset and Gretel API key to the notebook.
  3. Generate synthetic data!
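
The same flow can also be driven directly from Python. A minimal sketch, assuming the gretel-trainer package is installed (the file name is illustrative):

    # Minimal sketch: train on a dataset, then sample synthetic records.
    # Assumes `pip install gretel-trainer` and a valid Gretel API key.
    from gretel_trainer import trainer

    model = trainer.Trainer()        # optionally pass model_type=...
    model.train("my_dataset.csv")    # partitions the data and trains in parallel
    synthetic_df = model.generate()  # synthetic records as a pandas DataFrame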

NOTE: If you are starting a dataset run from scratch, either delete the existing cache file or choose a new cache file name.

TODOs / Roadmap

  • Enable additional sampling from trained models.
  • Detect and label encode random UIDs (preprocessing).
Comments
  • Benchmark route Amplify models through Trainer

    Top level change

    Now that Trainer has a GretelAmplify model, Benchmark uses Trainer for Amplify runs instead of the SDK.

    Refactor

    I refactored Benchmark's Gretel models and executors with the goal of centralizing this logic and thus making it simpler to understand:

    • which model types use Trainer (opt-in) vs. use the SDK
    • the "compatibility requirements" for different models (currently: LSTM <= 150 columns, GPTX == 1 column)

    These had been spread across a few different places (compare.py determined Trainer/SDK, gretel/sdk.py had GPTX compatibility, gretel/trainer.py had LSTM compatibility), but now they can all be found in gretel/models.py.
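
    For illustration, a hypothetical sketch of what such centralized rules could look like (names and signature are assumed, not the actual gretel/models.py code):

    # Hypothetical sketch of centralized compatibility rules; not actual code.
    import pandas as pd

    LSTM_MAX_COLUMNS = 150  # LSTM supports at most 150 columns
    GPTX_COLUMN_COUNT = 1   # GPTX expects exactly one (text) column

    def is_compatible(model_slug: str, dataset: pd.DataFrame) -> bool:
        """Check a dataset against a model type's column-count requirements."""
        if model_slug == "lstm":
            return len(dataset.columns) <= LSTM_MAX_COLUMNS
        if model_slug == "gptx":
            return len(dataset.columns) == GPTX_COLUMN_COUNT
        return True  # other model types have no column-count restriction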

    At first glance it would seem compatibility requirements could be defined on specific model subclasses to make things more polymorphic. However, Benchmark's Gretel model classes are really just friendly wrappers around specific model configurations (from the blueprints repo) and do not represent all possible instances of that model type running through Benchmark. Instead, we instruct users to subclass the generic GretelModel base class when they want to provide their own specific Gretel configuration. There are two reasons for this:

    1. It's a simpler instruction (always subclass this one thing)
    2. It enables us to include model types that are not yet "first class supported," such as DGAN (which we can't support in the same way we do models like Amplify/LSTM/etc. because DGAN's config includes required fields that are specifically coupled to the data source—there is no "one size fits all" blueprint).

    Small fixes

    • fix the model_slug value for Trainer's GretelACTGAN model
      • :warning: should this be changed to a list ["actgan", "ctgan"] for a little while, for a smoother transition/deprecation experience?
    • zero-index custom model runs' run-identifier to match gretel model runs (which were themselves fixed to match project names here)
    opened by mikeknep 2
  • Lift gretel model compatibility to separate module

    What's here

    Make it easier to find the "compatibility rules" for models by lifting the logic to its own module.

    Why not add this logic to the specific model classes? Wouldn't that be more polymorphic?

    The model classes (GretelLSTM, GretelCTGAN, etc.) are wrappers around specific configurations from the blueprints repo. They do not represent every possible configuration of that model type. If a user wants to run a customized LSTM config, for example, they subclass GretelModel, not GretelLSTM:

    class MyLstm(GretelModel):
        config = "/path/to/my_lstm.yml"
    

    Note: they could subclass GretelLSTM, but 1) it's easier to tell people to always subclass GretelModel regardless of model type, and 2) doing so treats the model configuration as the source of truth.

    If someone mistakenly created a custom Gretel model like this...

    class MyGptX(GretelGPTX):
        config = "/path/to/my_amplify.yml"
    

    ...Benchmark will treat this as an Amplify model, because basically all it does with the class instance is grab the config attribute (and the name; the results output will show the name as MyGptX).

    opened by mikeknep 1
  • Lr/artifact manifest

    Added logic for config selection and updated dictionary key to access manifest per latest internal changes.

    Note that high-dimensionality-high-record is non-existent at the moment, as is the manifest endpoint :)

    Items yet to be addressed:

    • turn off partitions for non-LSTM models
    opened by lipikaramaswamy 1
  • Add param to pass custom base configuration

    • Prefer config if present, otherwise use the model_type's default config (see the sketch after this list).
    • This does open the door a little wider to setting an invalid config that won't be known to be bad until attempting to train. That door was already slightly ajar in that one could use model_params to set keys to invalid values.
    • Not included here, but a thought: we could validate model_type earlier (even as the very first step of __init__) to fail fast, specifically before even creating a project.
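
    A hypothetical sketch of the new parameter in use (the path is illustrative, and it assumes the param lands on the model class):

    # Hypothetical usage; the config path is illustrative.
    from gretel_trainer import trainer
    from gretel_trainer.models import GretelLSTM

    # The custom base config is preferred over GretelLSTM's default blueprint.
    model = trainer.Trainer(model_type=GretelLSTM(config="/path/to/my_lstm.yml"))
    model.train("my_dataset.csv")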
    opened by mikeknep 1
  • Remove no-op elif case from runner

    Particularly given that we now have a third model (Amplify) supported in Trainer, we can remove this no-op elif clause so that the runner only has special logic for / awareness of LSTM (expand up in the diff for context).

    opened by mikeknep 0
  • Switch CTGAN usages to ACTGAN.

    ACTGAN is the successor of CTGAN.

    Note (1): this change is backward compatible, as all of the parameters that CTGAN supported are supported by ACTGAN as well.

    Note (2): any previously trained CTGAN models will still be usable, i.e. it will be possible to generate new records using old CTGAN models.

    opened by pimlock 0
  • Fix off-by-one difference between project name and run ID

    Quick fix so that benchmark's internal run identifier lines up with the project name in Gretel Cloud. We'll eventually have a more user-friendly and stable interface for accessing detailed run information, but until we figure out exactly how we want that to look, this should make things a little more friendly for those willing to dive into the internals: the models from project benchmark-{timestamp}-3 will correspond to comparison.results_dict["gretel-3"] (instead of "gretel-4").

    Note: I considered just using the full project name as the identifier instead of gretel-{index}, but we don't have an equivalent to project names for user custom model runs, so I figure the current [gretel|custom]-{index} approach is still best for now.

    opened by mikeknep 0
  • Configure session before starting Benchmark comparison

    Current behavior

    When running in an environment where no Gretel credentials can be found (e.g. Colab), when Benchmark kicks off a comparison the background threads instantiating Trainer instances will prompt for an API key. This is problematic for multiple reasons, all (I believe) due to it running in multiple background threads: it prompts multiple times, doesn't accept input and/or cache properly, and ultimately crashes.

    This fix

    Benchmark itself now checks for a configured session before kicking off any real work. It prompts (api_key="prompt") if no credentials are found, validates (validate=True) the supplied API key, and caches (cache="yes") it for all the runs it manages. The configure_session calls that happen when instantiating Trainer effectively "pass through." I've tested this by installing trainer from this branch in Colab and it is now working as expected.
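
    For reference, this is roughly the up-front call Benchmark now makes (configure_session comes from gretel_client; the arguments are as described above):

    from gretel_client import configure_session

    # Prompt for an API key only if no credentials are found, validate the
    # supplied key, and cache it so background Trainer threads don't re-prompt.
    configure_session(api_key="prompt", validate=True, cache="yes")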

    opened by mikeknep 0
  • Include dataset name in trainer uploads.

    Add original file name to data sources uploaded as part of trainer projects. This helps disambiguate the data sources from multiple trainer runs where previously they were always named trainer_0.csv, trainer_1.csv, etc.

    Also fixes StrategyRunner to not silently swallow all ApiExceptions when submitting a job, so errors not associated with max job limit are still thrown and surfaced to the user.

    opened by kboyd 0
  • Auto-determine best model from training data

    Rather than create a GretelAuto model class that would need to override or work around several _BaseConfig details (validation, max/limit values, etc.), my goal here is to establish the convention that model type is optional and if you don't specify one when instantiating the Trainer, you're OK with us choosing for you. This is a change from the current behavior (optional but default to LSTM). In this case, we defer setting the trainer instance's self.model_type until such time as we can determine the best model to use: namely, at train time when a dataset has been provided.

    I'm a little unclear on the load (from cache) workflow, which in this branch's implementation would set the StrategyRunner's model_config to None. I think this is OK because the only methods referencing that value are part of training (train_all_partitions => train_next_partition => train_partition), and that workflow is only kicked off by the Trainer's train method, which will load in data and use it to determine and set a concrete model.

    I've also added an optional delimiter parameter to train to help support files with non-comma delimiters.
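
    A short sketch of the resulting behavior (file name and delimiter are illustrative):

    from gretel_trainer import trainer

    # No model_type given: Trainer defers the choice until train time,
    # when it can inspect the dataset and pick the best model.
    model = trainer.Trainer()
    model.train("my_dataset.psv", delimiter="|")  # new optional delimiter param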

    opened by mikeknep 0
  • Get average sqs score from across partitions

    A few ways we could slice and dice this; I figure there may be additional SQS info we want from the run in the future, so I decided to expose the entire List[dict] from the runner and let the trainer pluck out and calculate this first aggregate piece of user-friendly data. I'm open to pushing more of this down to the runner and/or transforming the SQS dictionaries into first-class types (likely dataclasses) if anyone has a strong opinion or thinks it'd be useful.
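
    An illustrative sketch of the aggregation (the dictionary key name is assumed, not taken from the runner's actual output):

    from statistics import mean
    from typing import List

    def average_sqs(partition_sqs: List[dict]) -> float:
        """Average the quality score across the per-partition SQS dicts."""
        return mean(d["synthetic_data_quality_score"] for d in partition_sqs)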

    opened by mikeknep 0
  • Use artifact manifest for determine_best_model.

    Not fully tested. Waiting for new backend API to be available.

    Should revisit retry logic if we can reliably distinguish between a pending manifest (still being generated) and some other error. Or if retrying is included in the gretel_client interface.

    opened by kboyd 1
Releases
  • v0.5.0 (Nov 18, 2022)

    What's Changed

    • GretelCTGAN has been completely removed, fully replaced by its successor, GretelACTGAN
    • GretelACTGAN uses the new tabular-actgan config by default
    • Benchmark now routes Amplify models through Trainer rather than the SDK
    • Bug fix: helper to properly configure Gretel session before starting Benchmark comparison when unset
    • Bug fix: zero-index Benchmark run ID (internal) to fix off-by-one difference with project name

    Full Changelog: https://github.com/gretelai/trainer/compare/v0.4.1...v0.5.0

  • v0.4.1 (Nov 2, 2022)

    What's Changed

    • Add pip install command and Colab disclaimer to Benchmark notebook by @mikeknep in https://github.com/gretelai/trainer/pull/22
    • Include dataset name in trainer uploads. by @kboyd in https://github.com/gretelai/trainer/pull/21
    • Docs improvements by @MasonEgger (https://github.com/gretelai/trainer/pull/23 https://github.com/gretelai/trainer/pull/24 https://github.com/gretelai/trainer/pull/28 https://github.com/gretelai/trainer/pull/26)
    • Add support for Gretel Amplify by @pimlock in https://github.com/gretelai/trainer/pull/29

    New Contributors

    • @kboyd made their first contribution in https://github.com/gretelai/trainer/pull/21
    • @MasonEgger made their first contribution in https://github.com/gretelai/trainer/pull/23
    • @pimlock made their first contribution in https://github.com/gretelai/trainer/pull/29

    Full Changelog: https://github.com/gretelai/trainer/compare/v0.4.0...v0.4.1

  • v0.4.0 (Oct 6, 2022)

    What's Changed

    • Initial release of new Benchmark module :rocket: by @mikeknep in https://github.com/gretelai/trainer/pull/19
    • Create simple-conditional-generation.ipynb :notebook: by @zredlined in https://github.com/gretelai/trainer/pull/18

    Full Changelog: https://github.com/gretelai/trainer/compare/v0.3.0...v0.4.0

  • v0.3.0 (Aug 30, 2022)

  • v0.2.3 (Aug 24, 2022)

    What's Changed

    • The trainer now chooses the best model configuration based on input training data when model_type is not specified in advance at Trainer instantiation (previously defaulted to GretelLSTM)
    • train accepts an optional delimiter argument (defaults to comma when unspecified)
    • Input training data is divided more equally across row partitions
    • LSTM models generate a consistent number of records (5000) during training (previously matched the size of the input training data)
    • Fixed trainer generate to synthesize the correct number of records when multiple row partitions are used
    • Fixed trainer get_sqs_score method

    Full Changelog: https://github.com/gretelai/trainer/compare/v0.2.2...v0.2.3

  • v0.2.2 (Aug 11, 2022)

    What's Changed

    • Update default model config by @zredlined in https://github.com/gretelai/trainer/pull/10
    • Remove project delete instruction by @drew in https://github.com/gretelai/trainer/pull/11
    • CTGAN and conditional data generation by @zredlined in https://github.com/gretelai/trainer/pull/12
    • Get average sqs score from across partitions by @mikeknep in https://github.com/gretelai/trainer/pull/14

    Full Changelog: https://github.com/gretelai/trainer/compare/v0.2.1...v0.2.2

  • v0.2.1 (Jun 16, 2022)

  • v0.2.0 (Jun 10, 2022)

  • v0.1.0 (Jun 10, 2022)

Owner

Gretel.ai: Gretel.ai Open Source Projects and Tools