Build low-code, automated Tensorflow models with What-IF explainability in just 3 lines of code.

Overview

Auto Tensorflow - Mission:

Build low-code, automated Tensorflow models with What-IF explainability in just 3 lines of code.

To make deep learning on Tensorflow absolutely easy for the masses with a low-code framework, and to increase trust in ML models through What-IF model explainability.

Under the hood:

Built on top of powerful Tensorflow ecosystem tools like TFX, the TF APIs and the What-IF Tool, the library automatically does all the heavy lifting internally: EDA, schema discovery, feature engineering, hyper-parameter tuning (HPT), model search and more. This empowers developers to focus only on building end-user applications quickly, without any knowledge of Tensorflow, ML or debugging. It is built to handle large volumes of data / big data using only scalable TF components, and models trained with auto-tensorflow can be deployed directly on any cloud such as GCP, AWS or Azure.

Official Launch: https://youtu.be/sil-RbuckG0

Features:

  1. Build Classification / Regression models on CSV data
  2. Automated Schema Inference
  3. Automated Feature Engineering
    • Discretization
    • Scaling
    • Normalization
    • Text Embedding
    • Category encoding
  4. Automated Model build for mixed data types (Continuous, Categorical and Free Text)
  5. Automated Hyper-parameter tuning
  6. Automated GPU Distributed training
  7. Automated UI-based What-IF analysis (Fairness, Feature Partial dependencies, What-IF)
  8. Control over complexity of model
  9. No dependency on Pandas / SKLearn
  10. Can handle datasets of any size, including data split across multiple CSV files (see the sketch after this list)
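
Because TFAuto takes directory paths rather than single files (see Usage below), a dataset split across several CSV shards can simply be placed inside the train and test directories. The sketch below assumes the directory names used in the Usage section and that all CSVs in a directory share one schema; the shard file names are purely illustrative.

import os
import shutil

# Illustrative shard names - replace with your own CSV files (same schema within each directory)
train_shards = ["sales_part_01.csv", "sales_part_02.csv", "sales_part_03.csv"]
test_shards = ["sales_holdout.csv"]

# Directory layout matching the Usage example below
os.makedirs("/content/train_data/", exist_ok=True)
os.makedirs("/content/test_data/", exist_ok=True)

for shard in train_shards:
    shutil.copy(shard, "/content/train_data/")
for shard in test_shards:
    shutil.copy(shard, "/content/test_data/")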

Tutorials:

  1. Open In Colab - Auto Classification on CSV data
  2. Open In Colab - Auto Regression on CSV data

Setup:

  1. Install library (a quick import check follows this list)
    • PIP (Recommended): pip install auto-tensorflow
    • Nightly: pip install git+https://github.com/rafiqhasan/auto-tensorflow.git
  2. Works best on UNIX/Linux/Debian/Google Colab/MacOS
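
A quick sanity check after installation (a sketch, not an official verification step); the import path below is the same one used in the Usage section.

# If the install succeeded, this import should work without errors
from auto_tensorflow.tfa import TFAuto

print("auto-tensorflow is ready:", TFAuto.__name__)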

Usage:

  1. Initialize the TFAuto engine
from auto_tensorflow.tfa import TFAuto
tfa = TFAuto(train_data_path='/content/train_data/', test_data_path='/content/test_data/', path_root='/content/tfauto')
  2. Step 1 - Automated EDA and schema discovery
tfa.step_data_explore(viz=True) ##viz=False for no visualization
  3. Step 2 - Automated ML model build and train
tfa.step_model_build(label_column='price', model_type='REGRESSION', model_complexity=1)
  4. Step 3 - Automated What-IF Tool launch
tfa.step_model_whatif()

API Arguments:

  • Method TFAuto

    • train_data_path: Path where training data is stored
    • test_data_path: Path where Test / Eval data is stored
    • path_root: Directory for running TFAuto (the directory should NOT already exist)
  • Method step_data_explore

    • viz: Is data visualization required? True or False (Default: False)
  • Method step_model_build

    • label_column: The feature to be used as Label
    • model_type: Either 'REGRESSION' (Default) or 'CLASSIFICATION'
    • model_complexity:
      • 0 : Model with default hyper-parameters
      • 1 (Default): Model with automated hyper-parameter tuning
      • 2 : Complexity 1 + Advanced fine-tuning of Text layers
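
To illustrate how these arguments fit together, here is a hedged sketch of a classification run with hyper-parameter tuning plus text-layer fine-tuning (model_complexity=2) and no EDA visualizations. The data paths and the 'category' label column are hypothetical placeholders; the method names and argument values are the ones documented above.

from auto_tensorflow.tfa import TFAuto

# Hypothetical directories containing the training and test CSV files
tfa = TFAuto(train_data_path='/content/train_data/',
             test_data_path='/content/test_data/',
             path_root='/content/tfauto_clf')  # must not already exist

# Step 1 - EDA and schema discovery without visualizations
tfa.step_data_explore(viz=False)

# Step 2 - classification model; 'category' is a hypothetical integer label column (0 to N)
tfa.step_model_build(label_column='category',
                     model_type='CLASSIFICATION',
                     model_complexity=2)

# Step 3 - launch the What-IF analysis UI
tfa.step_model_whatif()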

Current limitations:

There are a few limitations in the initial release, but we are working day and night to resolve them and add the missing capabilities in future releases.

  1. Doesn't support Image / Audio data

Future roadmap:

  1. Add support for Timeseries / Audio / Image data
  2. Add feature to download full pipeline model Python code for advanced tweaking

Release History:

1.3.2 - 27/11/2021 - Release Notes

1.3.1 - 18/11/2021 - Release Notes

1.2.0 - 24/07/2021 - Release Notes

1.1.1 - 14/07/2021 - Release Notes

1.0.1 - 07/07/2021 - Release Notes

Comments
  • Failed to install 1.2.0

    Describe the bug: Does not resolve dependency. Shows an error when I run pip install auto-tensorflow; I got this message: Could not find a version that matches keras-nightly~=2.5.0.dev

    To Reproduce - Steps to reproduce the behavior: pip install auto-tensorflow
    Expected behavior: Install auto-tensorflow

    Versions:

    • Auto-Tensorflow:1.2.0
    • Tensorflow:
    • Tensorflow-Extended:

    wontfix 
    opened by HenrryVargas 8
  • Colab Regression Example No Longer Working?

    Trying to run the Colab Regression notebook. All dependencies get installed, I Restart and Run All to start the code. It errors out here:

    ##Step 1
    ##Run Data setup -> Infer Schema, find anomalies, create profile and show viz
    tfa.step_data_explore(viz=False)
    
    Data: Pipeline execution started...
    WARNING:apache_beam.runners.interactive.interactive_environment:Dependencies required for Interactive Beam PCollection visualization are not available, please use: `pip install apache-beam[interactive]` to install necessary dependencies to enable all data visualization features.
    WARNING:apache_beam.io.tfrecordio:Couldn't find python-snappy so the implementation of _TFRecordUtil._masked_crc32c is not as fast as it could be.
    WARNING:apache_beam.io.tfrecordio:Couldn't find python-snappy so the implementation of _TFRecordUtil._masked_crc32c is not as fast as it could be.
    ERROR:absl:Execution 2 failed.
    ---------------------------------------------------------------------------
    TypeCheckError                            Traceback (most recent call last)
    [<ipython-input-6-7e17a616f197>](https://localhost:8080/#) in <module>
          1 ##Step 1
          2 ##Run Data setup -> Infer Schema, find anomalies, create profile and show viz
    ----> 3 tfa.step_data_explore(viz=False)
    
    14 frames
    [/usr/local/lib/python3.7/dist-packages/auto_tensorflow/tfa.py](https://localhost:8080/#) in step_data_explore(self, viz)
       1216     Viz: (False) Is data visualization required ?
       1217     '''
    -> 1218     self.pipeline = self.tfadata.run_initial(self._train_data_path, self._test_data_path, self._tfx_root, self._metadata_db_root, self.tfautils, viz)
       1219     self.generate_config_json()
       1220 
    
    [/usr/local/lib/python3.7/dist-packages/auto_tensorflow/tfa.py](https://localhost:8080/#) in run_initial(self, _train_data_path, _test_data_path, _tfx_root, _metadata_db_root, tfautils, viz)
        211     #Run data pipeline
        212     print("Data: Pipeline execution started...")
    --> 213     LocalDagRunner().run(self.pipeline)
        214     self._run = True
        215 
    
    [/usr/local/lib/python3.7/dist-packages/tfx/orchestration/portable/tfx_runner.py](https://localhost:8080/#) in run(self, pipeline)
         76     c = compiler.Compiler()
         77     pipeline_pb = c.compile(pipeline)
    ---> 78     return self.run_with_ir(pipeline_pb)
    
    [/usr/local/lib/python3.7/dist-packages/tfx/orchestration/local/local_dag_runner.py](https://localhost:8080/#) in run_with_ir(self, pipeline)
         85           with metadata.Metadata(connection_config) as mlmd_handle:
         86             partial_run_utils.snapshot(mlmd_handle, pipeline)
    ---> 87         component_launcher.launch()
         88         logging.info('Component %s is finished.', node_id)
    
    [/usr/local/lib/python3.7/dist-packages/tfx/orchestration/portable/launcher.py](https://localhost:8080/#) in launch(self)
        543               executor_watcher.address)
        544           executor_watcher.start()
    --> 545         executor_output = self._run_executor(execution_info)
        546       except Exception as e:  # pylint: disable=broad-except
        547         execution_output = (
    
    [/usr/local/lib/python3.7/dist-packages/tfx/orchestration/portable/launcher.py](https://localhost:8080/#) in _run_executor(self, execution_info)
        418     outputs_utils.make_output_dirs(execution_info.output_dict)
        419     try:
    --> 420       executor_output = self._executor_operator.run_executor(execution_info)
        421       code = executor_output.execution_result.code
        422       if code != 0:
    
    [/usr/local/lib/python3.7/dist-packages/tfx/orchestration/portable/beam_executor_operator.py](https://localhost:8080/#) in run_executor(self, execution_info, make_beam_pipeline_fn)
         96         make_beam_pipeline_fn=make_beam_pipeline_fn)
         97     executor = self._executor_cls(context=context)
    ---> 98     return python_executor_operator.run_with_executor(execution_info, executor)
    
    [/usr/local/lib/python3.7/dist-packages/tfx/orchestration/portable/python_executor_operator.py](https://localhost:8080/#) in run_with_executor(execution_info, executor)
         57   output_dict = copy.deepcopy(execution_info.output_dict)
         58   result = executor.Do(execution_info.input_dict, output_dict,
    ---> 59                        execution_info.exec_properties)
         60   if not result:
         61     # If result is not returned from the Do function, then try to
    
    [/usr/local/lib/python3.7/dist-packages/tfx/components/statistics_gen/executor.py](https://localhost:8080/#) in Do(self, input_dict, output_dict, exec_properties)
        138             stats_api.GenerateStatistics(stats_options)
        139             | 'WriteStatsOutput[%s]' % split >>
    --> 140             stats_api.WriteStatisticsToBinaryFile(output_path))
        141         logging.info('Statistics for split %s written to %s.', split,
        142                      output_uri)
    
    [/usr/local/lib/python3.7/dist-packages/apache_beam/pvalue.py](https://localhost:8080/#) in __or__(self, ptransform)
        135 
        136   def __or__(self, ptransform):
    --> 137     return self.pipeline.apply(ptransform, self)
        138 
        139 
    
    [/usr/local/lib/python3.7/dist-packages/apache_beam/pipeline.py](https://localhost:8080/#) in apply(self, transform, pvalueish, label)
        651     if isinstance(transform, ptransform._NamedPTransform):
        652       return self.apply(
    --> 653           transform.transform, pvalueish, label or transform.label)
        654 
        655     if not isinstance(transform, ptransform.PTransform):
    
    [/usr/local/lib/python3.7/dist-packages/apache_beam/pipeline.py](https://localhost:8080/#) in apply(self, transform, pvalueish, label)
        661       old_label, transform.label = transform.label, label
        662       try:
    --> 663         return self.apply(transform, pvalueish)
        664       finally:
        665         transform.label = old_label
    
    [/usr/local/lib/python3.7/dist-packages/apache_beam/pipeline.py](https://localhost:8080/#) in apply(self, transform, pvalueish, label)
        710 
        711       if type_options is not None and type_options.pipeline_type_check:
    --> 712         transform.type_check_outputs(pvalueish_result)
        713 
        714       for tag, result in ptransform.get_named_nested_pvalues(pvalueish_result):
    
    [/usr/local/lib/python3.7/dist-packages/apache_beam/transforms/ptransform.py](https://localhost:8080/#) in type_check_outputs(self, pvalueish)
        464 
        465   def type_check_outputs(self, pvalueish):
    --> 466     self.type_check_inputs_or_outputs(pvalueish, 'output')
        467 
        468   def type_check_inputs_or_outputs(self, pvalueish, input_or_output):
    
    [/usr/local/lib/python3.7/dist-packages/apache_beam/transforms/ptransform.py](https://localhost:8080/#) in type_check_inputs_or_outputs(self, pvalueish, input_or_output)
        495                 hint=hint,
        496                 actual_type=pvalue_.element_type,
    --> 497                 debug_str=type_hints.debug_str()))
        498 
        499   def _infer_output_coder(self, input_type=None, input_coder=None):
    
    TypeCheckError: Output type hint violation at WriteStatsOutput[train]: expected <class 'apache_beam.pvalue.PDone'>, got <class 'str'>
    Full type hint:
    IOTypeHints[inputs=((<class 'tensorflow_metadata.proto.v0.statistics_pb2.DatasetFeatureStatisticsList'>,), {}), outputs=((<class 'apache_beam.pvalue.PDone'>,), {})]
    File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
    File "<frozen importlib._bootstrap_external>", line 728, in exec_module
    File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
    File "/usr/local/lib/python3.7/dist-packages/tensorflow_data_validation/api/stats_api.py", line 113, in <module>
        class WriteStatisticsToBinaryFile(beam.PTransform):
    File "/usr/local/lib/python3.7/dist-packages/apache_beam/typehints/decorators.py", line 776, in annotate_input_types
        *converted_positional_hints, **converted_keyword_hints)
    
    based on:
      IOTypeHints[inputs=None, outputs=((<class 'apache_beam.pvalue.PDone'>,), {})]
      File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
      File "<frozen importlib._bootstrap_external>", line 728, in exec_module
      File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
      File "/usr/local/lib/python3.7/dist-packages/tensorflow_data_validation/api/stats_api.py", line 113, in <module>
          class WriteStatisticsToBinaryFile(beam.PTransform):
      File "/usr/local/lib/python3.7/dist-packages/apache_beam/typehints/decorators.py", line 863, in annotate_output_types
          f._type_hints = th.with_output_types(return_type_hint)  # pylint: disable=protected-access
    
    opened by windowshopr 2
  • Dump when training Text column model on GPUs

    Describe the bug: The model dumps with an error when training on a GPU runtime

    To Reproduce: Train a model with a free-text column on a GPU device

    Expected behavior: Should not give any error

    Versions:

    • Auto-Tensorflow: 1.0.1
    • Tensorflow: 2.5.0
    • Tensorflow-Extended: 0.29.0

    bug 
    opened by rafiqhasan 2
  • Add automated - advanced feature engineering

    Is your feature request related to a problem? Please describe. Yes

    Describe the solution you'd like: Add more feature engineering options for automated consideration:

    1. Squared
    2. Square root
    3. Min-Max scaling (Normalization is already there)
    4. etc

    enhancement 
    opened by rafiqhasan 1
  • Known limitations

    There are a few limitations in the initial release, but we are working day and night to resolve them and add the missing capabilities in future releases.

    1. Doesn't support Image / Audio data
    2. Doesn't support quote-delimited CSVs (TFX doesn't support qCSV yet)
    3. Classification only supports integer labels from 0 to N
    enhancement 
    opened by rafiqhasan 1
  • When AutoTF will be released for Time Series ?

    enhancement 
    opened by gulabpatel 1
Releases (1.3.4)
  • 1.3.4(Dec 9, 2022)

    • Fixed bugs
    • Cleaned up PIP dependencies for faster installation

    Full Changelog: https://github.com/rafiqhasan/auto-tensorflow/compare/1.3.3...1.3.4

  • 1.3.3(Dec 9, 2022)

  • 1.3.2(Nov 26, 2021)

    • Added bucketization feature engineering
    • Added more diverse HPT options
    • Replaced ReLU with SELU
    • Better accuracy on regression models
    • Changed HPT objective for classification models
    • Multiple improvements for higher-accuracy models

    Full Changelog: https://github.com/rafiqhasan/auto-tensorflow/compare/1.3.1...1.3.2

  • 1.3.1(Nov 18, 2021)

    Features:

    1. Upgraded to TF 2.6.0
    2. Upgraded to TFX 1.4.0
    3. Added new feature engineering functions
    4. Added capability to handle multiple line CSVs
    5. Keras Tuner functionality now more optimised and HPT runs faster
  • 1.2.0(Jul 24, 2021)

    1.2.0 - 07/24/2021

    • Upgraded to TFX 1.0.0
    • Major performance fixes
    • Fixed bugs
    • Added more features:
      • TFX CSVExampleGen speedup
      • Added more feature engineering options
  • 1.1.1(Jul 20, 2021)

    1.1.1 - 07/14/2021

    • Fixed bugs
    • Added more features:
      • Added complexity = 2 for automated tunable textual layers
      • Textual label for Classification
      • Imbalanced label handling
      • GPU fixes
  • 1.0.1(Jul 20, 2021)

Owner
Hasan Rafiq
Technology enthusiast working @ Google: Google Cloud, Machine Learning, Tensorflow, Python