Deploy tensorflow graphs for fast evaluation and export to tensorflow-less environments running numpy.

Overview

tfdeploy logo


Now with tensorflow 1.0 support.

Evaluation usage
import tfdeploy as td
import numpy as np

model = td.Model("/path/to/model.pkl")
inp, outp = model.get("input", "output")

batch = np.random.rand(10000, 784)
result = outp.eval({inp: batch})
Installation and dependencies

Via pip

pip install tfdeploy

or by simply copying the file into your project.

Numpy ≥ 1.10 should be installed on your system. Scipy is optional. See optimization for more info on optional packages.

By design, tensorflow is required when creating a model. All versions ≥ 1.0.1 are supported.


Why?

Working with tensorflow is awesome. Model definition and training is simple yet powerful, and the range of built-in features is just striking.

However, when it comes down to model deployment and evaluation, things get a bit more cumbersome than they should be. You either export your graph to a new file and save your trained variables in a separate file, or you make use of tensorflow's serving system. Wouldn't it be great if you could just export your model to a simple numpy-based callable? Of course it would. And this is exactly what tfdeploy does for you.

To boil it down, tfdeploy

  • is lightweight. A single file with < 150 lines of core code. Just copy it to your project.
  • is faster than using tensorflow's Tensor.eval.
  • does not need tensorflow during evaluation.
  • only depends on numpy.
  • can load one or more models from a single file.
  • does not support GPUs (maybe gnumpy is worth a try here).

How?

The central class is tfdeploy.Model. The following two examples demonstrate how a model can be created from a tensorflow graph, saved to and loaded from disk, and eventually evaluated.

Convert your graph
import tensorflow as tf
import tfdeploy as td

# setup tfdeploy (only when creating models)
td.setup(tf)

# build your graph
sess = tf.Session()

# use names for input and output layers
x = tf.placeholder("float", shape=[None, 784], name="input")
W = tf.Variable(tf.truncated_normal([784, 100], stddev=0.05))
b = tf.Variable(tf.zeros([100]))
y = tf.nn.softmax(tf.matmul(x, W) + b, name="output")

sess.run(tf.global_variables_initializer())

# ... training ...

# create a tfdeploy model and save it to disk
model = td.Model()
model.add(y, sess) # y and all its ops and related tensors are added recursively
model.save("model.pkl")
Load the model and evaluate
import numpy as np
import tfdeploy as td

model = td.Model("model.pkl")

# shorthand to x and y
x, y = model.get("input", "output")

# evaluate
batch = np.random.rand(10000, 784)
result = y.eval({x: batch})
Write your own Operation

tfdeploy supports most of the Operations implemented in tensorflow. However, if one is missing (in that case, please submit a PR or an issue ;) ) or if you're using custom ops, you might want to extend tfdeploy by defining a new op class that inherits from tfdeploy.Operation:

import tensorflow as tf
import tfdeploy as td
import numpy as np

# setup tfdeploy (only when creating models)
td.setup(tf)

# ... write your model here ...

# let's assume your final tensor "y" relies on an op of type "InvertedSoftmax"
# before creating the td.Model, you should add that op to tfdeploy

class InvertedSoftmax(td.Operation):
    @staticmethod
    def func(a):
        e = np.exp(-a)
        # ops should return a tuple
        return np.divide(e, np.sum(e, axis=-1, keepdims=True)),

# this is equivalent to
# @td.Operation.factory
# def InvertedSoftmax(a):
#     e = np.exp(-a)
#     return np.divide(e, np.sum(e, axis=-1, keepdims=True)),

# now we're good to go
model = td.Model()
model.add(y, sess)
model.save("model.pkl")

When writing new ops, three things are important:

  • Try to avoid loops, prefer numpy vectorization.
  • Return a tuple.
  • Don't change incoming tensors/arrays in-place, always work on and return copies.
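
As an illustration, here is a minimal sketch of an op that follows these three rules. The op name Swish is purely hypothetical (it is not a tfdeploy built-in); the point is the vectorized, copy-safe body that returns a tuple:

import numpy as np
import tfdeploy as td

@td.Operation.factory
def Swish(a):
    # vectorized with numpy, no Python loops
    # works on a newly created array instead of modifying the input in-place
    # the trailing comma makes the return value a tuple
    return np.multiply(a, 1.0 / (1.0 + np.exp(-a))),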

Ensembles

tfdeploy provides a helper class to evaluate an ensemble of models: Ensemble. It can load multiple models, evaluate them and combine their output values using different methods.

# create the ensemble
ensemble = td.Ensemble(["model1.pkl", "model2.pkl", ...], method=td.METHOD_MEAN)

# get input and output tensors (which actually are TensorEnsemble instances)
input, output = ensemble.get("input", "output")

# evaluate the ensemble just like a normal model
batch = ...
value = output.eval({input: batch})

The return value of get() is a TensorEnsemble instance. It is basically a wrapper around multiple tensors and should be used as a key in the feed_dict of the eval() call.

You can choose between METHOD_MEAN (the default), METHOD_MAX and METHOD_MIN. If you want to use a custom ensembling method, use METHOD_CUSTOM and overwrite the static func_custom() method of the TensorEnsemble instance.
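
For illustration, here is a hedged sketch of a custom ensembling method. The exact signature of func_custom is not spelled out here, so the assumption is that it receives one output array per model and returns the combined value:

import numpy as np
import tfdeploy as td

ensemble = td.Ensemble(["model1.pkl", "model2.pkl"], method=td.METHOD_CUSTOM)
inp, outp = ensemble.get("input", "output")

# assumption: func_custom receives the evaluated values of all wrapped models
def median_of_models(values):
    return np.median(np.stack(values), axis=0)

# overwrite the static func_custom() method on the TensorEnsemble instance
outp.func_custom = median_of_models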

Optimization

Most ops are written using pure numpy. However, an op may have multiple implementations, some of which use additional third-party Python packages to provide even faster functionality in certain situations.

For example, numpy does not provide a vectorized lgamma function. Thus, the standard tfdeploy.Lgamma op uses math.lgamma, vectorized beforehand with numpy.vectorize. For such situations, additional implementations of the same op are possible (the lgamma example is quite academic, but this definitely makes sense for more sophisticated ops like pooling). We can simply tell the op to use its scipy implementation instead:

td.Lgamma.use_impl(td.IMPL_SCIPY)

Currently, allowed implementation types are numpy (IMPL_NUMPY, the default) and scipy (IMPL_SCIPY).

Adding additional implementations

Additional implementations can be added by setting the impl attribute of the op factory or by using the add_impl decorator of existing operations. The first registered implementation will be the default one.

import math

import numpy as np
import scipy as sp
import scipy.special  # makes sp.special available
import tfdeploy as td

# create the default lgamma op with numpy implementation
lgamma_vec = np.vectorize(math.lgamma)

@td.Operation.factory
# equivalent to
# @td.Operation.factory(impl=td.IMPL_NUMPY)
def Lgamma(a):
    return lgamma_vec(a),

# add a scipy-based implementation
@Lgamma.add_impl(td.IMPL_SCIPY)
def Lgamma(a):
    return sp.special.gammaln(a),
Auto-optimization

If scipy is available on your system, it is reasonable to use all ops in their scipy implementation (if one exists, of course). This should be configured via the second argument of the setup function, before you create any model from tensorflow objects:

td.setup(tf, td.IMPL_SCIPY)

Ops that do not implement IMPL_SCIPY stick with the numpy version (IMPL_NUMPY).

Performance

tfdeploy is lightweight (1 file, < 150 lines of core code) and fast. Internal evaluation calls add very little overhead, and tensor operations use numpy vectorization. The actual performance depends on the ops in your graph. While most of the tensorflow ops have a numpy equivalent or can be constructed from numpy functions, a few ops require additional Python-based loops (e.g. BatchMatMul). But in many cases it's potentially faster than using tensorflow's Tensor.eval.

This is a comparison for a basic graph where all ops are vectorized (basically Add, MatMul and Softmax):

> ipython -i tests/perf/simple.py

In [1]: %timeit -n 100 test_tf()
100 loops, best of 3: 109 ms per loop

In [2]: %timeit -n 100 test_td()
100 loops, best of 3: 60.5 ms per loop

Contributing

If you want to contribute with new ops and features, I'm happy to receive pull requests. Just make sure to add a new test case to tests/core.py or tests/ops.py and run them via:

> python -m unittest tests
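
As a rough guide, a new test might look like the sketch below. The class and method names are illustrative rather than the actual structure of tests/ops.py; the idea is simply to compare a tfdeploy evaluation against tensorflow for the same graph:

import unittest

import numpy as np
import tensorflow as tf
import tfdeploy as td

td.setup(tf)

class SoftmaxTestCase(unittest.TestCase):

    def test_softmax(self):
        # build a tiny graph and convert it to a tfdeploy model
        sess = tf.Session()
        x = tf.placeholder("float", shape=[None, 4], name="test_input")
        y = tf.nn.softmax(x, name="test_output")
        sess.run(tf.global_variables_initializer())

        model = td.Model()
        model.add(y, sess)
        inp, outp = model.get("test_input", "test_output")

        # evaluate with both tfdeploy and tensorflow and compare
        batch = np.random.rand(8, 4).astype(np.float32)
        td_result = outp.eval({inp: batch})
        tf_result = sess.run(y, feed_dict={x: batch})

        self.assertTrue(np.allclose(td_result, tf_result, atol=1e-5))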
Test grid

In general, tests should be run for different environments:

Variation           Values
tensorflow version  1.0.1
python version      2, 3
TD_TEST_SCIPY       0, 1
TD_TEST_GPU         0, 1
Docker

For testing purposes, it is convenient to use docker. Fortunately, the official tensorflow images contain all we need:

git clone https://github.com/riga/tfdeploy.git
cd tfdeploy

docker run --rm -v `pwd`:/root/tfdeploy -w /root/tfdeploy -e "TD_TEST_SCIPY=1" tensorflow/tensorflow:1.0.1 python -m unittest tests

Development

Authors

Comments
  • does tfdeploy support multiple output in a graph?

    In the example, tfdeploy gets the result of y1=W1x1+b1 by

    result1 = y1.eval({x1: batch})
    

    If I have a graph with two outputs, y2=W2(W1x+b1)+b2 and y3=W3(W1x+b1)+b3, in tensorflow I can use

    sess.run([y2, y3])
    

    to get y2 and y3 simultaneously while avoiding redundant computation (of y1=W1x1+b1).

    Is it possible to do the same thing with tfdeploy, or do I have to use two commands like below?

    result2 = y2.eval({x1: batch})
    result3 = y3.eval({x1: batch})
    
    question 
    opened by ugtony 6
  • Numpy invalid division issue in restored model

    Hi, I'm using a multilayer perceptron with dropout and exponential decay, adapted from the MNIST tutorial in tensorflow. I'm interested in deploying a softmax layer to get class membership probabilities rather than hard predictions. I can train and test the model successfully in tensorflow like this, but when I deploy and restore the model, I run into an "invalid value encountered in true_divide" error from numpy. Here is the (abbreviated) training part:

    ...
    # tf Graph input
    x = tf.placeholder("float", shape=[None, n_input],name="input")
    y = tf.placeholder("float", [None, n_classes])
    
    # Dropout
    keep_prob = tf.placeholder(tf.float32, name="keep_prob")
    dropout = 0.75
    ...
    
    def multilayer_perceptron(x, weights, biases, dropout):
    	# Hidden layer with RELU activation
    	layer_1 = tf.add(tf.matmul(x, weights['h1']), biases['b1'])
    	layer_1 = tf.nn.relu(layer_1)
    	# Apply dropout
    	layer_1 = tf.nn.dropout(layer_1, dropout)
    
    	# Hidden layer with RELU activation
    	layer_2 = tf.add(tf.matmul(layer_1, weights['h2']), biases['b2'])
    	layer_2 = tf.nn.relu(layer_2)
    	# Output layer with linear activation
    	out_layer = tf.matmul(layer_2, weights['out']) + biases['out']
    	return out_layer
    ...
    pred = multilayer_perceptron(x, weights, biases, keep_prob)
    ...
    # Training cycle
    for epoch in range(training_epochs):
    	avg_cost = 0.
    	total_batch = int(num_examples/batch_size)
    	# Loop over all batches
    	for i in range(total_batch):
    		batch_x, batch_y = next_batch(data,batch_size)
    		# Run optimization op (backprop) and cost op (to get loss value)
    		_, c = sess.run([optimizer, cost], feed_dict={x: batch_x, y: batch_y, keep_prob: dropout})
    		# Compute average loss
    		avg_cost += c / total_batch
    	# Display logs per epoch step
    	if epoch % display_step == 0:
    		print("Epoch:", '%04d' % (epoch+1), "cost=", "{:.9f}".format(avg_cost))
    print("Optimization Finished!")
    
    y = tf.nn.softmax(pred, name="output")
    
    # Evaluate some row in test data, without dropout, this seems to work
    print(y.eval({x: test_x[5, :].reshape(1, 262), keep_prob: 1.}))
    
    # Now save tfdeploy model to disk
    model = tfdeploy.Model()
    model.add(y, sess) # y and all its ops and related tensors are added recursively
    model.save("tdeploy_model.pkl")
    

    This first part works fine. But now the problem comes up:

    nn = tfdeploy.Model("tdeploy_model.pkl")
    x, y, keep_prob = nn.get("input", "output", "keep_prob")
    data = np.loadtxt("some_data.tab", delimiter="\t")
    # Pick some row
    data = data[4:5,0:data.shape[1]-2]
    # Get a prediction
    y.eval({x: data, keep_prob: 1.})
    

    This last operation throws a warning as below:

    tfdeploy.py:2098: RuntimeWarning: invalid value encountered in true_divide
      return np.divide(e, np.sum(e, axis=-1, keepdims=True)),
    

    And the predicted probabilities are [nan], instead of the floats I see when running directly under tensorflow. Any ideas on what is happening here? Thanks for your help!

    opened by amir-zeldes 4
  • max pooling across multiple filters

    I defined a CNN in TensorFlow, here's a chunk of it:

    import numpy as np
    import tensorflow as tf
    
    HEIGHT, WIDTH, DEPTH = 144, 192, 3
    N_CLASSES = 2
    
    x = tf.placeholder(tf.float32, shape=[None, HEIGHT, WIDTH, DEPTH], name="input")
    y_ = tf.placeholder(tf.float32, shape=[None, N_CLASSES], name="y_")
    
    WIN_X, WIN_Y = 5, 5
    N_FILTERS = 4
    
    W1 = tf.Variable(tf.truncated_normal([WIN_X, WIN_Y, DEPTH, N_FILTERS],
    stddev=1/np.sqrt(WIN_X*WIN_Y)))
    b1 = tf.Variable(tf.constant(0.1, shape=[N_FILTERS]))
    xw = tf.nn.conv2d(x, W1, strides=[1,1,1,1], padding="SAME", name="xw")
    h1 = tf.nn.relu(xw + b1, name="h1")
    p1 = tf.nn.max_pool(h1, ksize=[1,2,2,1], strides=[1,2,2,1], padding="VALID", name="p1")
    #...
    

    I was able to train and test the model. I then saved the model using tfdeploy. I can then load the model like so:

    model = tfdeploy.Model("myModel.pkl")
    # THIS CODE WORKS:
    
    x, test_point = model.get("input", "h1")
    test_point.eval({x:samps}) # 38 samples
    # BUT THIS DOESN'T WORK:
    
    x, test_point = model.get("input", "p1")
    test_point.eval({x:samps})
    

    Any idea what is going on? Here's the error message I'm getting:

    Traceback (most recent call last):
    File "test.py", line 68, in
    test_point.eval({x:samps})
    File "/home/sbmorphe/Downloads/tfdeploy.py", line 291, in eval
    self.value = self.op.eval(feed_dict=feed_dict, _uuid=_uuid)[self.value_index]
    File "/home/sbmorphe/Downloads/tfdeploy.py", line 462, in eval
    self.value = self.func(*args)
    File "/home/sbmorphe/Downloads/tfdeploy.py", line 474, in func
    return cls.func_numpy(*args)
    File "/home/sbmorphe/Downloads/tfdeploy.py", line 2185, in MaxPool
    patches = _conv_patches(a, np.ones(k[1:] + [1]), strides, padding.decode("ascii"), "edge")
    File "/home/sbmorphe/Downloads/tfdeploy.py", line 2108, in _conv_patches
    src[s + tuple(slice(*tpl) for tpl in zip(pos, pos + f.shape[:-2]))][en] * f
    ValueError: could not broadcast input array from shape (38,2,2,4,1) into shape (38,2,2,1,1)
    
    opened by sbmorphe 4
  • Implement missing ops

    Tensorflow ops: https://www.tensorflow.org/versions/master/api_docs/python/math_ops.html

    Some of them are just composite ops (such as floordiv), which can be skipped =).

    enhancement 
    opened by riga 4
  • Maximum recursion depth exceeded while calling a Python object

    When I try to export my model with tfdeploy, I get a RuntimeError because of too many recursions.

    Here's part of the stacktrace (rest just keeps on repeating):

    ---------------------------------------------------------------------------
    RuntimeError                              Traceback (most recent call last)
    <ipython-input-11-2c22f3704a32> in <module>()
          1 model = td.Model()
          2 
    ----> 3 model.add(output, sess) # y and all its ops and related tensors are added recursively
          4 model.save("model.pkl")
    
    /usr/local/lib/python2.7/dist-packages/tfdeploy.pyc in add(self, tensor, tf_sess, key, **kwargs)
        151         """
        152         if not isinstance(tensor, Tensor):
    --> 153             tensor = Tensor(tensor, tf_sess, **kwargs)
        154 
        155         if key is None:
    
    /usr/local/lib/python2.7/dist-packages/tfdeploy.pyc in __call__(cls, tf_tensor, *args, **kwargs)
        192         # simple caching
        193         if tf_tensor not in cls.instances:
    --> 194             inst = super(TensorRegister, cls).__call__(tf_tensor, *args, **kwargs)
        195             cls.instances[tf_tensor] = inst
        196         return cls.instances[tf_tensor]
    
    /usr/local/lib/python2.7/dist-packages/tfdeploy.pyc in __init__(self, tf_tensor, tf_sess, tf_feed_dict)
        247         # no op for variables, placeholders and constants
        248         if tf_tensor.op.type not in ("Variable", "Const", "Placeholder"):
    --> 249             self.op = Operation.new(tf_tensor.op, tf_sess, tf_feed_dict=tf_feed_dict)
        250 
        251     def get(self, *names):
    
    /usr/local/lib/python2.7/dist-packages/tfdeploy.pyc in new(cls, tf_op, *args, **kwargs)
        422             raise UnknownOperationException("unknown operation: %s" % tf_op.type)
        423 
    --> 424         return cls.classes[tf_op.type](tf_op, *args, **kwargs)
        425 
        426     def set_attr(self, attr, value):
    
    /usr/local/lib/python2.7/dist-packages/tfdeploy.pyc in __call__(cls, tf_op, *args, **kwargs)
        320         # simple caching
        321         if tf_op not in cls.instances:
    --> 322             inst = super(OperationRegister, cls).__call__(tf_op, *args, **kwargs)
        323             cls.instances[tf_op] = inst
        324         return cls.instances[tf_op]
    
    /usr/local/lib/python2.7/dist-packages/tfdeploy.pyc in __init__(self, tf_op, *args, **kwargs)
        394 
        395         self.name = tf_op.name
    --> 396         self.inputs = tuple(Tensor(tf_tensor, *args, **kwargs) for tf_tensor in tf_op.inputs)
        397 
        398         self.value = None
    
    /usr/local/lib/python2.7/dist-packages/tfdeploy.pyc in <genexpr>((tf_tensor,))
        394 
        395         self.name = tf_op.name
    --> 396         self.inputs = tuple(Tensor(tf_tensor, *args, **kwargs) for tf_tensor in tf_op.inputs)
        397 
        398         self.value = None
    
    /usr/local/lib/python2.7/dist-packages/tfdeploy.pyc in __call__(cls, tf_tensor, *args, **kwargs)
        192         # simple caching
        193         if tf_tensor not in cls.instances:
    --> 194             inst = super(TensorRegister, cls).__call__(tf_tensor, *args, **kwargs)
        195             cls.instances[tf_tensor] = inst
        196         return cls.instances[tf_tensor]
    

    Here's the graph that I'm using: https://1drv.ms/u/s!AjBMlWMdSnfSg7xkXuRAhgMgRNzXRQ And the code for creating the model looks somewhat like this https://1drv.ms/u/s!AjBMlWMdSnfSg7xkXuRAhgMgRNzXRQ (small facenet inception model using tf.slim)

    Note: I've manually added the new ConcatV2 op using the following code:

    @td.Operation.factory
    def ConcatV2(inputs, dim):
        """
        Concat op.
        """
        return np.concatenate(inputs, axis=dim),
    

    Haven't really checked if that's correct, but I don't think that's connected to the error as I've encountered it before when I used the old Concat op.

    Really like this project and it would be awesome if this error got resolved. Thank you so much for your work!

    opened by dominikandreas 3
  • UnknownOperationException: Unknown operation: MaxPool

    The error exists but I do not know why it happened. I used tf.nn.max_pool(img, ksize=[1,2,2,1], strides=[1,2,2,1], padding='SAME') for the MaxPool operation. Could you give me some advice?

    opened by Jingkang50 2
  • Problem with saving model in python3

    I am getting following error TypeError: must be str, not bytes at line https://github.com/riga/tfdeploy/blob/master/tfdeploy.py#L176.

    It seems that pickle.dump and pickle.load require files opened as binary in python3.

    opened by thran 2
  • reduction functions failed when reduction_indices is numpy.int32

    When running the tfdeploy model, I got this error

    File "/home/sutony/git/facenet/src/align/tfdeploy.py", line 1838, in Max
      axis=tuple(reduction_indices)
    TypeError: 'numpy.int32' object is not iterable

    It happens when the reduction_indices is an integer

    # does not work when reduction_indices is a numpy integer (e.g., np.int32)
    tuple(reduction_indices)  
    

    I changed the code in Max() and Sum() to make the program runnable, but the code is not neat and I'm not sure if it works for other cases.

    if not isinstance(reduction_indices, np.ndarray):
        return np.amax(a, axis=reduction_indices, keepdims=keep_dims),
    else:
        return np.amax(a, axis=tuple(reduction_indices), keepdims=keep_dims),
    

    The other reduction functions might cause the same error.

    opened by ugtony 1
  • Allow evaluating models on Quantopian

    Hi, I came across this library when searching for ways to run neural networks on Quantopian. This is the closest one I could find that could run on their platform. However, since they only allow certain libraries, some stuff will have to change.

    This can hopefully be solved by guarding the imports, or making them conditional, for those modules that aren't allowed. Then there's the big one: encoding the model as CSV, because sadly that is the only format they support loading.

    Does this sound good/doable?

    opened by maxnordlund 1
  • Comparison to the C++ API

    Have you compared this to model evaluation via the C++ API?

    I'd think the most common optimization-of-evaluation use case would be generating highly portable C code for production systems where there's no desire to include the 100+ MB of tensorflow libs. (Assuming you could get superior performance with only Eigen maybe)

    question 
    opened by raddy 1
  • Performance issues in tests/perf/measure_runtimes.py(P2)

    Hello, I found a performance issue in the definition of create_models in tests/perf/measure_runtimes.py: tf_sess = tf.Session() is repeatedly called and never closed. I think it will increase efficiency and avoid running out of memory if you close this session after using it.

    Here are two files to support this issue: support1 and support2.

    Looking forward to your reply. Btw, I am very glad to create a PR to fix it if you are too busy.

    opened by DLPerf 0
  • Performance issue in the definition of create_tf_model, tests/perf/measure_runtimes.py(P1)

    Hello, I found a performance issue in the definition of create_tf_model in tests/perf/measure_runtimes.py: b = tf.Variable(tf.zeros([units])) is created repeatedly during program execution, resulting in reduced efficiency. I think it should be created before the loop.

    Looking forward to your reply. Btw, I am very glad to create a PR to fix it if you are too busy.

    opened by DLPerf 0
  • Only save once?

    It seems that after saving a model, all subsequent saves are identical.

    I made a minimal example that reproduces the problem:

    import tensorflow as tf
    import tfdeploy.tfdeploy as td
    
    def pickleModel(sess,out,file_name):
        model = td.Model()
        model.add(out,sess)
        model.save(file_name)
    
    def unpickleModel(file_name):
        model = td.Model(file_name)
        out = model.get('output')
        print("td evaluation = ",out.eval())
    
    if __name__ == '__main__':
    
        counter = tf.Variable(  0.0  , name='counter' )
        out = tf.multiply(counter,1,name ='output')
        increment = tf.assign(counter,counter+1)
    
        sess = tf.Session()
        sess.run(tf.global_variables_initializer())
    
        pickleModel(sess,out,'file1')
        print('tensorflow evaluation = ',sess.run(out))
        unpickleModel('file1')
    
        sess.run(increment)
    
        pickleModel(sess,out,'file2')
        print('tensorflow evaluation = ',sess.run(out))
        unpickleModel('file2')
    

    The output is:

        tensorflow evaluation = 0.0
        td evaluation = 0.0
        tensorflow evaluation = 1.0
        td evaluation = 0.0

    But the last td evaluation should be equal to 1.0. What is going on?

    opened by WardBeullens 2
  • Error while evaluating saved model

    Hi, I'm trying to restore a model saved with tfdeploy but during evaluation I have the following errors:

    <class 'tfdeploy.Tensor'>
    <class 'tfdeploy.Tensor'>
    Traceback (most recent call last):
      File "ensemble_learning.py", line 71, in <module>
        result = y_test.eval({x_test: test_waves})
      File "/home/stefano/Dropbox/DeepWave/tfdeploy.py", line 292, in eval
        self.value = self.op.eval(feed_dict=feed_dict, _uuid=_uuid)[self.value_index]
      File "/home/stefano/Dropbox/DeepWave/tfdeploy.py", line 461, in eval
        args = [t.eval(feed_dict=feed_dict, _uuid=_uuid) for t in self.inputs]
      File "/home/stefano/Dropbox/DeepWave/tfdeploy.py", line 461, in <listcomp>
        args = [t.eval(feed_dict=feed_dict, _uuid=_uuid) for t in self.inputs]
      File "/home/stefano/Dropbox/DeepWave/tfdeploy.py", line 292, in eval
        self.value = self.op.eval(feed_dict=feed_dict, _uuid=_uuid)[self.value_index]
      File "/home/stefano/Dropbox/DeepWave/tfdeploy.py", line 461, in eval
        args = [t.eval(feed_dict=feed_dict, _uuid=_uuid) for t in self.inputs]
      File "/home/stefano/Dropbox/DeepWave/tfdeploy.py", line 461, in <listcomp>
        args = [t.eval(feed_dict=feed_dict, _uuid=_uuid) for t in self.inputs]
      File "/home/stefano/Dropbox/DeepWave/tfdeploy.py", line 292, in eval
        self.value = self.op.eval(feed_dict=feed_dict, _uuid=_uuid)[self.value_index]
      File "/home/stefano/Dropbox/DeepWave/tfdeploy.py", line 461, in eval
        args = [t.eval(feed_dict=feed_dict, _uuid=_uuid) for t in self.inputs]
      File "/home/stefano/Dropbox/DeepWave/tfdeploy.py", line 461, in <listcomp>
        args = [t.eval(feed_dict=feed_dict, _uuid=_uuid) for t in self.inputs]
      File "/home/stefano/Dropbox/DeepWave/tfdeploy.py", line 292, in eval
        self.value = self.op.eval(feed_dict=feed_dict, _uuid=_uuid)[self.value_index]
      File "/home/stefano/Dropbox/DeepWave/tfdeploy.py", line 461, in eval
        args = [t.eval(feed_dict=feed_dict, _uuid=_uuid) for t in self.inputs]
      File "/home/stefano/Dropbox/DeepWave/tfdeploy.py", line 461, in <listcomp>
        args = [t.eval(feed_dict=feed_dict, _uuid=_uuid) for t in self.inputs]
      File "/home/stefano/Dropbox/DeepWave/tfdeploy.py", line 292, in eval
        self.value = self.op.eval(feed_dict=feed_dict, _uuid=_uuid)[self.value_index]
      File "/home/stefano/Dropbox/DeepWave/tfdeploy.py", line 461, in eval
        args = [t.eval(feed_dict=feed_dict, _uuid=_uuid) for t in self.inputs]
      File "/home/stefano/Dropbox/DeepWave/tfdeploy.py", line 461, in <listcomp>
        args = [t.eval(feed_dict=feed_dict, _uuid=_uuid) for t in self.inputs]
      File "/home/stefano/Dropbox/DeepWave/tfdeploy.py", line 292, in eval
        self.value = self.op.eval(feed_dict=feed_dict, _uuid=_uuid)[self.value_index]
      File "/home/stefano/Dropbox/DeepWave/tfdeploy.py", line 461, in eval
        args = [t.eval(feed_dict=feed_dict, _uuid=_uuid) for t in self.inputs]
      File "/home/stefano/Dropbox/DeepWave/tfdeploy.py", line 461, in <listcomp>
        args = [t.eval(feed_dict=feed_dict, _uuid=_uuid) for t in self.inputs]
      File "/home/stefano/Dropbox/DeepWave/tfdeploy.py", line 292, in eval
        self.value = self.op.eval(feed_dict=feed_dict, _uuid=_uuid)[self.value_index]
      File "/home/stefano/Dropbox/DeepWave/tfdeploy.py", line 461, in eval
        args = [t.eval(feed_dict=feed_dict, _uuid=_uuid) for t in self.inputs]
      File "/home/stefano/Dropbox/DeepWave/tfdeploy.py", line 461, in <listcomp>
        args = [t.eval(feed_dict=feed_dict, _uuid=_uuid) for t in self.inputs]
      File "/home/stefano/Dropbox/DeepWave/tfdeploy.py", line 292, in eval
        self.value = self.op.eval(feed_dict=feed_dict, _uuid=_uuid)[self.value_index]
      File "/home/stefano/Dropbox/DeepWave/tfdeploy.py", line 461, in eval
        args = [t.eval(feed_dict=feed_dict, _uuid=_uuid) for t in self.inputs]
      File "/home/stefano/Dropbox/DeepWave/tfdeploy.py", line 461, in <listcomp>
        args = [t.eval(feed_dict=feed_dict, _uuid=_uuid) for t in self.inputs]
      File "/home/stefano/Dropbox/DeepWave/tfdeploy.py", line 292, in eval
        self.value = self.op.eval(feed_dict=feed_dict, _uuid=_uuid)[self.value_index]
      File "/home/stefano/Dropbox/DeepWave/tfdeploy.py", line 461, in eval
        args = [t.eval(feed_dict=feed_dict, _uuid=_uuid) for t in self.inputs]
      File "/home/stefano/Dropbox/DeepWave/tfdeploy.py", line 461, in <listcomp>
        args = [t.eval(feed_dict=feed_dict, _uuid=_uuid) for t in self.inputs]
      File "/home/stefano/Dropbox/DeepWave/tfdeploy.py", line 292, in eval
        self.value = self.op.eval(feed_dict=feed_dict, _uuid=_uuid)[self.value_index]
      File "/home/stefano/Dropbox/DeepWave/tfdeploy.py", line 461, in eval
        args = [t.eval(feed_dict=feed_dict, _uuid=_uuid) for t in self.inputs]
      File "/home/stefano/Dropbox/DeepWave/tfdeploy.py", line 461, in <listcomp>
        args = [t.eval(feed_dict=feed_dict, _uuid=_uuid) for t in self.inputs]
      File "/home/stefano/Dropbox/DeepWave/tfdeploy.py", line 292, in eval
        self.value = self.op.eval(feed_dict=feed_dict, _uuid=_uuid)[self.value_index]
      File "/home/stefano/Dropbox/DeepWave/tfdeploy.py", line 461, in eval
        args = [t.eval(feed_dict=feed_dict, _uuid=_uuid) for t in self.inputs]
      File "/home/stefano/Dropbox/DeepWave/tfdeploy.py", line 461, in <listcomp>
        args = [t.eval(feed_dict=feed_dict, _uuid=_uuid) for t in self.inputs]
      File "/home/stefano/Dropbox/DeepWave/tfdeploy.py", line 292, in eval
        self.value = self.op.eval(feed_dict=feed_dict, _uuid=_uuid)[self.value_index]
      File "/home/stefano/Dropbox/DeepWave/tfdeploy.py", line 469, in eval
        self.value = self.func(*args)
      File "/home/stefano/Dropbox/DeepWave/tfdeploy.py", line 481, in func
        return cls.func_numpy(*args)
      File "/home/stefano/Dropbox/DeepWave/tfdeploy.py", line 1296, in Div
        return np.divide(a, b),
    TypeError: unsupported operand type(s) for /: 'float' and 'NoneType'
    
    

    The training code is:

    ...
    deepnn definition, dataset loading....
    ...
    
    td.setup(tf)
    start_time = time.time()
    with tf.Session() as sess:
        x = tf.placeholder(tf.float32, [None, 160], name="input")
        y = tf.placeholder(tf.float32, [None, 2], name="y-input")
        keep_prob = tf.placeholder(tf.float32)
        # Build the graph for the deep net
        y_conv = deepnn(x)
        y_soft = tf.nn.softmax(y_conv, name="output")
        prediction_y_conv = tf.argmax(y_conv, 1) # Predicted labels
        prediction_y = tf.argmax(y, 1) # Original labels
    
        with tf.name_scope('cost'):
            cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=y_conv)) #cost
    
        with tf.name_scope('train'):
        	train_step = tf.train.AdamOptimizer(LEARNING_RATE).minimize(cross_entropy) #optimizer
    
        with tf.name_scope('accuracy'):
            with tf.name_scope('correct_prediction'):
        	    correct_prediction = tf.equal(prediction_y_conv, prediction_y)
            with tf.name_scope('accuracy'):
        	    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    
        # create a summary for our cost and accuracy
        tf.summary.scalar("accuracy", accuracy)
        tf.summary.scalar("cost", cross_entropy)
        merged = tf.summary.merge_all()
    
        writer = tf.summary.FileWriter(LOG_DIR, sess.graph)
    
        sess.run(tf.global_variables_initializer())
    
    
    
        best_accuracy = 0
        for e in range(num_epochs):
            print('Epoch num: ', e + 1)
            print('Learning rate: ', LEARNING_RATE)
            for i in range(num_batches):
                if i % 50 == 0:
                    train_accuracy = sess.run(accuracy, feed_dict={x: train_waves[i], y: train_labels[i], keep_prob: 1.0})
                    print('step %d, %d [s], training accuracy %g' % (i, (time.time() - start_time), train_accuracy))
                summary, _ = sess.run([merged,train_step], feed_dict={x: train_waves[i], y: train_labels[i], keep_prob: 1.0})
                writer.add_summary(summary,i)
    
            current_accuracy = accuracy.eval(feed_dict={x: test_waves, y: test_labels, keep_prob: 1.0})
            print('test accuracy %g' % current_accuracy)
            if current_accuracy > best_accuracy:
                # create a tfdeploy model and save it to disk
                model = td.Model()
                model.add(y_soft, sess) # y and all its ops and related tensors are added recursively
                model.save(SESS_DIR + '/' + MODEL_NAME)
                best_accuracy = current_accuracy
    
            # Compute confusion matrix
            cm_labels = prediction_y.eval(feed_dict={y: test_labels})
            cm_predictions = prediction_y_conv.eval(feed_dict={x: test_waves, keep_prob: 1.0})
            print(sess.run(tf.contrib.metrics.confusion_matrix(cm_labels,cm_predictions)))
            # Decrease learning rate with num_epochs
            #LEARNING_RATE = LEARNING_RATE - (LEARNING_RATE/100)*10
            print("Execution time [s]: %d" % (time.time() - start_time))
    
    

    And the import code is:

    ....
    load dataset
    ....
    model = td.Model(SESS_DIR+"/model1.pkl")
    #model = td.Model("model.pkl")
    
    # shorthand to x and y
    x_test, y_test = model.get("input", "output")
    print(type(x_test))
    print(type(y_test))
    
    # evaluate
    result = y_test.eval({x_test: test_waves})
    

    Have I done something wrong? :| Thank you for your work!

    EDIT: I forgot the dropout placeholder! :) In the training file I modified keep_prob: keep_prob = tf.placeholder(tf.float32, name="keep_prob")

    Now the import code is:

    ciao = td.Model(SESS_DIR+"/model1.pkl")
    x, y, keep_prob = ciao.get("input", "output", "keep_prob")
    
    # Get a prediction
    pred = y.eval({x: test_waves, keep_prob: 1.0})
    

    But now I have this error:

    /home/stefano/Dropbox/DeepWave/tfdeploy.py:2081: RuntimeWarning: overflow encountered in exp
      e = np.exp(a)
    /home/stefano/Dropbox/DeepWave/tfdeploy.py:2082: RuntimeWarning: invalid value encountered in true_divide
      return np.divide(e, np.sum(e, axis=-1, keepdims=True)),
    

    EDIT2: I don't know why.... but now it works!! :D I had a "normalized" dataset (values between 0 and 1). With the normalization removed, the problem is gone. Now I have a dataset with values between 0 and 255.

    opened by stefat77 0
  • added example and tests for exporting keras models

    I added tests for simple CNN-based Keras models (https://github.com/riga/tfdeploy/issues/18) as well as for the pretrained models, which cover the popular image classification models (ResNet, VGG, Inception). Currently all of the tests pass, but there is a hard-coded list of UNSUPPORTED_LAYERS, which are tolerated failures and show the corresponding missing operation on the tfdeploy side (https://github.com/riga/tfdeploy/issues/31).

    Model 1: use_upsample could not be serialized unknown operation: ResizeNearestNeighbor
    Model 5: use_lstm could not be serialized unknown operation: TensorArrayReadV3
    Model 6: use_dropout could not be serialized unknown operation: Merge
    Model 7: use_locallyconnected could not be serialized unknown operation: BatchMatMul
    Model 8: use_repeatvec could not be serialized unknown operation: StridedSlice
    Model 9: use_conv2dtrans could not be serialized unknown operation: Conv2DBackpropInput
    Model 10: use_bn could not be serialized unknown operation: Merge
    
    opened by kmader 0
Releases (v0.3.3)
Owner
Marcel R.
ML, HEP, CERN, Higgs Physics, Workflow Management, Open Source, Open Data, Analysis Preservation