Simplified interface for TensorFlow (mimicking Scikit Learn) for Deep Learning

Related tags

Deep Learning, skflow
Overview

SkFlow has been moved to TensorFlow.

SkFlow has been moved into the contrib folder of http://github.com/tensorflow/tensorflow, specifically located here. Development will continue there. Please submit any issues and pull requests to the TensorFlow repository instead.

This repository will ramp down: after the next TensorFlow release, the code here will be wound down. Please see the instructions for the most recent installation here.

Comments
  • How do I do multilabel image classification?

    Do I have to make changes in the multioutput file? Ideally, I want to train any model, like Inception, on my training data, which has multiple labels. How do I do that?
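
    For reference, the usual multi-label setup in plain TensorFlow (not an skflow API) gives every label its own sigmoid, so one image can carry several labels at once. A minimal sketch, with illustrative names and shapes:

    import tensorflow as tf

    def multilabel_loss(logits, labels):
        # logits, labels: float tensors of shape [batch_size, n_labels];
        # labels hold 0/1 indicators, one entry per label.
        per_label = tf.nn.sigmoid_cross_entropy_with_logits(labels=labels, logits=logits)
        return tf.reduce_mean(per_label)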

    help wanted examples 
    opened by unography 21
  • Add early stopping and reporting based on validation data

    This PR allows a user to specify a validation dataset that is used for early stopping (and reporting). The PR was created to address issue 85.

    I made changes in 3 places.

    1. The trainer now takes a dictionary containing the validation data (in the same format as the output of the data feeder's get_dict_fn).
    2. The fit method now takes arguments for val_X and val_y. It converts these into the correct format for the trainer.
    3. The example file digits.py now uses early stopping, by supplying val_X and val_y.

    I can add early stopping to other examples if this approach looks good, though their behavior should not otherwise be affected by the current PR.
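
    A rough usage sketch under the API described above (val_X/val_y come from this PR; the data and classifier construction are just placeholders):

    import numpy as np
    import skflow  # historical package, later merged into TensorFlow contrib

    X_train, y_train = np.random.rand(100, 10), np.random.randint(0, 2, 100)
    X_val, y_val = np.random.rand(20, 10), np.random.randint(0, 2, 20)

    classifier = skflow.TensorFlowDNNClassifier(hidden_units=[10, 10], n_classes=2)
    # Training reports validation loss and stops early once it stops improving.
    classifier.fit(X_train, y_train, val_X=X_val, val_y=y_val)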

    cla: yes 
    opened by dansbecker 14
  • Class weight support

    Hi,

    I am using skflow.ops.dnn to classify a two-class dataset (True and False). The percentage of True examples is very small, so I have an imbalanced dataset.

    It seems to me that one way to resolve the issue is to use weighted classes. However, when I look at the implementation of skflow.ops.dnn, I do not see how I could use weighted classes with the DNN.

    Is it possible to do that with skflow, or is there another technique for dealing with the imbalanced-dataset problem in skflow?
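
    For concreteness, a minimal plain-TensorFlow sketch of weighting the loss by class (not an skflow API; the weight values below are made up):

    import tensorflow as tf

    # Hypothetical weights: up-weight the rare True class.
    class_weights = tf.constant([1.0, 10.0])  # [weight_for_False, weight_for_True]

    def weighted_loss(logits, labels):
        # labels: int tensor of shape [batch_size] with values 0 (False) or 1 (True).
        per_example = tf.nn.sparse_softmax_cross_entropy_with_logits(
            labels=labels, logits=logits)
        return tf.reduce_mean(per_example * tf.gather(class_weights, labels))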

    Thanks

    enhancement 
    opened by vinhqdang 13
  • Added verbose option

    I added an option to control the verbosity. For this, I added a "verbose" parameter to the init method in the init.py file and to the train function in the trainers.py file. In addition, I passed this argument to the "self._trainer.train()" call in the init file and added a condition guarding the prints in the trainer.py file.
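
    Roughly the pattern, as a simplified sketch (names here are illustrative, not the exact skflow code):

    class Trainer(object):
        def __init__(self, verbose=True):
            self.verbose = verbose

        def train(self, steps, print_steps=100):
            for step in range(steps):
                # ... run one optimization step ...
                if self.verbose and step % print_steps == 0:
                    print('Step %d' % step)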

    cla: no 
    opened by ivallesp 12
  • Predict batch size default

    This changes the default batch size for prediction to be the same as for training, enabling efficient grid search. Previously GridSearchCV would try to make predictions in a single batch, which could take a lot of memory.

    This also adds a simple example of using skflow with GridSearchCV.
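
    A sketch of the kind of grid search this enables, assuming the historical skflow.TensorFlowDNNClassifier constructor and scikit-learn's GridSearchCV (the parameter grid is arbitrary):

    from sklearn.datasets import load_iris
    from sklearn.model_selection import GridSearchCV
    import skflow

    iris = load_iris()
    classifier = skflow.TensorFlowDNNClassifier(hidden_units=[10, 20, 10], n_classes=3)
    grid = GridSearchCV(classifier,
                        param_grid={'learning_rate': [0.01, 0.1],
                                    'batch_size': [32, 128]},
                        cv=3)
    grid.fit(iris.data, iris.target)
    print(grid.best_params_)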

    cla: no 
    opened by mheilman 11
  • Add example accessing of weights

    It wasn't clear how to access weights using the classifier.get_tensor_value('foo') syntax. This adds some examples for the CNN model. The names were figured out by logging the training run as if for TensorBoard and then running strings on the logfile to look for the right namespace.

    Is there a better way to access these weights? Or to learn their names? The logging must walk through the graph and record these names. Maybe if there were a way to quickly list all the names, that'd be enough for advanced users to figure it out.
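
    For reference, one way to list candidate names is to walk the underlying TensorFlow graph; the sketch below assumes you can reach the graph the estimator built (skflow keeps one internally), and the weight name in the comment is hypothetical:

    import tensorflow as tf

    graph = tf.get_default_graph()  # or the graph the estimator built its model in
    for node in graph.as_graph_def().node:
        print(node.name)

    # Once a name is known, fetch its value, e.g.:
    # weights = classifier.get_tensor_value('conv_layer1/convolution/filters:0')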

    cla: yes 
    opened by dvbuntu 10
  • Plotting neural network built by skflow

    Hi,

    Sorry to ask for so much.

    I think plotting is always a nice feature. Is it possible right now with skflow (or can we do that through TensorFlow directly)?
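
    For what it's worth, the usual route is TensorBoard: write a log directory during training and point TensorBoard at it to inspect the graph. A hedged sketch, assuming fit accepts the logdir argument discussed elsewhere in this tracker:

    import numpy as np
    import skflow

    X = np.random.rand(100, 4)
    y = np.random.randint(0, 3, 100)

    classifier = skflow.TensorFlowDNNClassifier(hidden_units=[10, 20, 10], n_classes=3)
    # Writing a log directory lets TensorBoard visualize the graph and training curves.
    classifier.fit(X, y, logdir='/tmp/skflow_logs')

    # Then, from the shell: tensorboard --logdir=/tmp/skflow_logs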

    opened by vinhqdang 10
  • move monitor and logdir arguments to init

    opened by mheilman 8
  • Exception when running language model example

    Hi,

    Thanks for making this tool. It will definitely make things easier for NN newcomers.

    I just tried running your language model example and got the following exception:

    Traceback (most recent call last):
      File "test.py", line 84, in <module>
        estimator.fit(X, y)
      File "/Users/aleksandar/tensorflow/lib/python3.5/site-packages/skflow/estimators/base.py", line 243, in fit
        feed_params_fn=self._data_feeder.get_feed_params)
      File "/Users/aleksandar/tensorflow/lib/python3.5/site-packages/skflow/trainer.py", line 114, in train
        feed_dict = feed_dict_fn()
      File "/Users/aleksandar/tensorflow/lib/python3.5/site-packages/skflow/io/data_feeder.py", line 307, in _feed_dict_fn
        inp[i, :] = six.next(self.X)
    StopIteration
    

    I made sure that my Python distribution has the correct version of six. I tried running it both in a virtual environment and in a normal Python 3 distribution. Any idea what might be causing this?

    opened by savkov 7
  • another ValidationMonitor with validation(+early stopping) per epoch

    From what I understand, the existing ValidationMonitor performs validation every [print_steps] steps and checks the stop condition every [early_stopping_rounds] steps. I'd like to add another ValidationMonitor that performs validation and checks the stopping condition once per epoch. Is this the recommended practice in machine learning regarding validation and early stopping? I mean, I'd like to add a fit process something like this:

    def fit(self, x_train, y_train, x_validate, y_validate):
        previous_validation_loss = float('inf')
        current_validation_loss = some_error(y_validate, estimator.predict(x_validate))
        while current_validation_loss < previous_validation_loss:
            estimator.train_one_more_epoch(x_train, y_train)
            previous_validation_loss = current_validation_loss
            current_validation_loss = some_error(y_validate, estimator.predict(x_validate))
    
    enhancement help wanted 
    opened by alanyuchenhou 7
  • Example of language model

    Add an example of a language model (RNN), for example a character-level model on a Shakespeare book (similar to https://github.com/sherjilozair/char-rnn-tensorflow).

    examples 
    opened by ilblackdragon 7
  • .travis.yml: The 'sudo' tag is now deprecated in Travis CI

    opened by cclauss 1
  • Why hasn't this repo been archived yet?

    New versions of TF have been released since the last commit to this repo. As far as I understand from reading this project's README file, you intended to close this repo. So, why hasn't that been done yet?

    opened by nbro 0
Releases (v0.1)
  • v0.1 (Feb 14, 2016)

Codebase for "Revisiting spatio-temporal layouts for compositional action recognition" (Oral at BMVC 2021).

Revisiting spatio-temporal layouts for compositional action recognition Codebase for "Revisiting spatio-temporal layouts for compositional action reco

Gorjan 20 Dec 15, 2022
PyTorch implementation for Partially View-aligned Representation Learning with Noise-robust Contrastive Loss (CVPR 2021)

2021-CVPR-MvCLN This repo contains the code and data of the following paper accepted by CVPR 2021 Partially View-aligned Representation Learning with

XLearning Group 33 Nov 01, 2022
CCCL: Contrastive Cascade Graph Learning.

CCGL: Contrastive Cascade Graph Learning This repo provides a reference implementation of Contrastive Cascade Graph Learning (CCGL) framework as descr

Xovee Xu 19 Dec 05, 2022
Implementation of CVPR'21: RfD-Net: Point Scene Understanding by Semantic Instance Reconstruction

RfD-Net [Project Page] [Paper] [Video] RfD-Net: Point Scene Understanding by Semantic Instance Reconstruction Yinyu Nie, Ji Hou, Xiaoguang Han, Matthi

Yinyu Nie 162 Jan 06, 2023
DetCo: Unsupervised Contrastive Learning for Object Detection

DetCo: Unsupervised Contrastive Learning for Object Detection arxiv link News Sparse RCNN+DetCo improves from 45.0 AP to 46.5 AP(+1.5) with 3x+ms trai

Enze Xie 234 Dec 18, 2022
Pytorch implementation of our method for high-resolution (e.g. 2048x1024) photorealistic video-to-video translation.

vid2vid Project | YouTube(short) | YouTube(full) | arXiv | Paper(full) Pytorch implementation for high-resolution (e.g., 2048x1024) photorealistic vid

NVIDIA Corporation 8.1k Jan 01, 2023
Selfplay In MultiPlayer Environments

This project allows you to train AI agents on custom-built multiplayer environments, through self-play reinforcement learning.

200 Jan 08, 2023
PyTorch implementation of "Representing Shape Collections with Alignment-Aware Linear Models" paper.

deep-linear-shapes PyTorch implementation of "Representing Shape Collections with Alignment-Aware Linear Models" paper. If you find this code useful i

Romain Loiseau 27 Sep 24, 2022
Implementation of "Large Steps in Inverse Rendering of Geometry"

Large Steps in Inverse Rendering of Geometry ACM Transactions on Graphics (Proceedings of SIGGRAPH Asia), December 2021. Baptiste Nicolet · Alec Jacob

RGL: Realistic Graphics Lab 274 Jan 06, 2023
WatermarkRemoval-WDNet-WACV2021

WatermarkRemoval-WDNet-WACV2021 Thank you for your attention. Citation Please cite the related works in your publications if it helps your research: @

LUYI 63 Dec 05, 2022
The repo of Feedback Networks, CVPR17

Feedback Networks http://feedbacknet.stanford.edu/ Paper: Feedback Networks, CVPR 2017. Amir R. Zamir*,Te-Lin Wu*, Lin Sun, William B. Shen, Bertram E

Stanford Vision and Learning Lab 87 Nov 19, 2022
Graph-Refined Convolutional Network for Multimedia Recommendation with Implicit Feedback

Graph-Refined Convolutional Network for Multimedia Recommendation with Implicit Feedback This is our Pytorch implementation for the paper: Yinwei Wei,

17 Jun 10, 2022
A curated list of awesome deep long-tailed learning resources.

A curated list of awesome deep long-tailed learning resources.

vanint 210 Dec 25, 2022
null

DeformingThings4D dataset Video | Paper DeformingThings4D is an synthetic dataset containing 1,972 animation sequences spanning 31 categories of human

208 Jan 03, 2023
Weakly-supervised object detection.

Wetectron Wetectron is a software system that implements state-of-the-art weakly-supervised object detection algorithms. Project CVPR'20, ECCV'20 | Pa

NVIDIA Research Projects 342 Jan 05, 2023
Fast SHAP value computation for interpreting tree-based models

FastTreeSHAP FastTreeSHAP package is built based on the paper Fast TreeSHAP: Accelerating SHAP Value Computation for Trees published in NeurIPS 2021 X

LinkedIn 369 Jan 04, 2023
Boostcamp AI Tech 3rd / Basic Paper reading w.r.t Embedding

Boostcamp AI Tech 3rd : Basic Paper Reading w.r.t Embedding TL;DR 1992년부터 2018년도까지 이루어진 word/sentence embedding의 중요한 줄기를 이루는 기초 논문 스터디를 진행하고자 합니다. 논

Soyeon Kim 14 Nov 14, 2022
Our CIKM21 Paper "Incorporating Query Reformulating Behavior into Web Search Evaluation"

Reformulation-Aware-Metrics Introduction This codebase contains source-code of the Python-based implementation of our CIKM 2021 paper. Chen, Jia, et a

xuanyuan14 5 Mar 05, 2022
Code for our paper "Sematic Representation for Dialogue Modeling" in ACL2021

AMR-Dialogue An implementation for paper "Semantic Representation for Dialogue Modeling". You may find our paper here. Requirements python 3.6 pytorch

xfbai 45 Dec 26, 2022
Campsite Reservation Finder

yellowstone-camping UPDATE: yellowstone-camping is being expanded and renamed to camply. The updated tool now interfaces with the Recreation.gov API a

Justin Flannery 233 Jan 08, 2023