Can I measure the execution time of individual operations with TensorFlow?
<p>I have used the <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/client/timeline.py"><strong><code>Timeline</code></strong> object</a> to get the time of execution for each node in the graph:</p> <ul> <li>you use a classic <code>sess.run()</code> but also specify the optional arguments <code>options</code> and <code>run_metadata</code></li> <li>you then create a <code>Timeline</code> object with the <code>run_metadata.step_stats</code> data</li> </ul> <hr> <p>Here is an example program that measures the performance of a matrix multiplication:</p> <pre class="lang-py prettyprint-override"><code>import tensorflow as tf from tensorflow.python.client import timeline x = tf.random_normal([1000, 1000]) y = tf.random_normal([1000, 1000]) res = tf.matmul(x, y) # Run the graph with full trace option with tf.Session() as sess: run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE) run_metadata = tf.RunMetadata() sess.run(res, options=run_options, run_metadata=run_metadata) # Create the Timeline object, and write it to a json tl = timeline.Timeline(run_metadata.step_stats) ctf = tl.generate_chrome_trace_format() with open('timeline.json', 'w') as f: f.write(ctf) </code></pre> <hr> <p>You can then open Google Chrome, go to the page <code>chrome://tracing</code> and load the <code>timeline.json</code> file. You should see something like:</p> <p><a href="https://i.sstatic.net/qrPyb.png"><img src="https://i.sstatic.net/qrPyb.png" alt="timeline"></a></p>
you are the stackoverflow assistant
What's the difference between tf.Session() and tf.InteractiveSession()?
<p>Mainly taken from the <a href="https://www.tensorflow.org/api_docs/python/tf/InteractiveSession" rel="noreferrer">official</a> documentation:</p> <blockquote> <p>The only difference with a regular Session is that an InteractiveSession installs itself as the default session on construction. The methods Tensor.eval() and Operation.run() will use that session to run ops.</p> </blockquote> <p>This allows it to be used in interactive contexts, like a shell, since it avoids having to pass an explicit Session object to run ops:</p> <pre><code>sess = tf.InteractiveSession() a = tf.constant(5.0) b = tf.constant(6.0) c = a * b # We can just use 'c.eval()' without passing 'sess' print(c.eval()) sess.close() </code></pre> <p>You could also say that <code>InteractiveSession</code> saves typing, as it allows you to run operations without constantly referring to the session object.</p>
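<p>For contrast, a minimal sketch of the same computation with a regular <code>tf.Session</code>, where the session must be referenced explicitly (or used as a context manager):</p> <pre><code>sess = tf.Session()
a = tf.constant(5.0)
b = tf.constant(6.0)
c = a * b
# No default session is installed, so we must name it explicitly
print(c.eval(session=sess))  # or: sess.run(c)
sess.close()
</code></pre>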
you are the stackoverflow assistant
Gradient Descent vs Adagrad vs Momentum in TensorFlow
<p>Here is a brief explanation based on my understanding:</p> <ul> <li><strong>momentum</strong> <a href="https://www.youtube.com/watch?v=8yg2mRJx-z4" rel="noreferrer">helps</a> SGD navigate along the relevant directions and dampens the oscillations in the irrelevant ones. It simply adds a fraction of the direction of the previous step to the current step. This amplifies speed in the correct direction and softens oscillation in wrong directions. The fraction is usually in the (0, 1) range. It also makes sense to use adaptive momentum: at the beginning of learning a big momentum will only hinder your progress, so it makes sense to use something like 0.01 and, once all the high gradients have disappeared, switch to a bigger momentum. There is one problem with momentum: when we are very close to the goal, our momentum is in most cases very high and it does not know that it should slow down. This can cause it to miss or oscillate around the minimum</li> <li><strong>nesterov accelerated gradient</strong> overcomes this problem by starting to slow down early. In momentum we first compute the gradient and then make a jump in that direction, amplified by whatever momentum we had previously. NAG does the same thing but in the other order: first we make a big jump based on our stored information, and then we calculate the gradient and make a small correction. This seemingly irrelevant change gives significant practical speedups.</li> <li><strong>AdaGrad</strong> or adaptive gradient allows the learning rate to adapt per parameter. It performs larger updates for infrequent parameters and smaller updates for frequent ones. Because of this it is well suited for sparse data (NLP or image recognition). Another advantage is that it basically eliminates the need to tune the learning rate. Each parameter has its own learning rate, and due to the peculiarities of the algorithm the learning rate is monotonically decreasing. This causes the biggest problem: at some point the learning rate is so small that the system stops learning.</li> <li><strong>AdaDelta</strong> <a href="http://int8.io/comparison-of-optimization-techniques-stochastic-gradient-descent-momentum-adagrad-and-adadelta/#AdaGrad" rel="noreferrer">resolves</a> the problem of the monotonically decreasing learning rate in AdaGrad. In AdaGrad the learning rate is calculated approximately as one divided by the square root of the sum of squared gradients. At each stage another squared gradient is added to the sum, which causes the denominator to grow constantly. In AdaDelta, instead of summing all past squared gradients, a sliding window is used, which allows the sum to decrease. <strong>RMSprop</strong> is very similar to AdaDelta</li> <li><p><strong>Adam</strong> or adaptive moment estimation is an algorithm similar to AdaDelta. But in addition to storing learning rates for each of the parameters it also stores momentum changes for each of them separately.</p> <p>A <a href="http://ruder.io/optimizing-gradient-descent/index.html#visualizationofalgorithms" rel="noreferrer">few visualizations</a>: <img src="https://i.sstatic.net/qAx2i.gif" alt="enter image description here"> <img src="https://i.sstatic.net/1obtV.gif" alt="enter image description here"></p></li> </ul> <p>I would say that SGD, Momentum and Nesterov are inferior to the last three.</p>
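<p>For reference, a minimal sketch of how these optimizers are instantiated with the TF 1.x API; the hyperparameter values below are illustrative, not tuned recommendations, and <code>loss</code> is assumed to be defined elsewhere:</p> <pre><code>opt_sgd      = tf.train.GradientDescentOptimizer(learning_rate=0.01)
opt_momentum = tf.train.MomentumOptimizer(learning_rate=0.01, momentum=0.9)
opt_nesterov = tf.train.MomentumOptimizer(learning_rate=0.01, momentum=0.9, use_nesterov=True)
opt_adagrad  = tf.train.AdagradOptimizer(learning_rate=0.01)
opt_adadelta = tf.train.AdadeltaOptimizer(learning_rate=1.0)
opt_rmsprop  = tf.train.RMSPropOptimizer(learning_rate=0.001)
opt_adam     = tf.train.AdamOptimizer(learning_rate=0.001)

# Each optimizer is used the same way:
train_op = opt_adam.minimize(loss)
</code></pre>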
you are the stackoverflow assistant
Tensorflow: None of the MLIR optimization passes are enabled (registered 1)
<p>MLIR is being used as another solution for implementing and optimizing TensorFlow logic. This informative message is <em>benign</em>: it simply says that MLIR was not used. This is expected, as in TF 2.3 the MLIR-based implementation is still being developed and proven, so end users are generally not expected to use it and should rely on the feature-complete non-MLIR implementation instead.</p> <p>Update: <a href="https://www.tensorflow.org/api_docs/python/tf/config/experimental/enable_mlir_graph_optimization" rel="noreferrer">still experimental</a> as of version 2.9.1. The docs state:</p> <blockquote> <p>DO NOT USE, DEV AND TESTING ONLY AT THE MOMENT.</p> </blockquote>
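<p>If you nevertheless want to experiment with the MLIR passes (dev and testing only, per the warning above), the linked API is toggled like this:</p> <pre><code>import tensorflow as tf

# DEV AND TESTING ONLY: opt in to the experimental MLIR graph optimization passes
tf.config.experimental.enable_mlir_graph_optimization()
</code></pre>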
you are the stackoverflow assistant
AttributeError: module 'tensorflow' has no attribute 'ConfigProto'
<p><code>ConfigProto</code> disappeared in TF 2.0, so an elegant solution is:</p> <pre><code>import tensorflow as tf </code></pre> <p>and then replace every <code>tf.ConfigProto</code> with <code>tf.compat.v1.ConfigProto</code>.</p> <p>In fact, the compatibility module built into 2.0 to access the TF 1.x API, <code>tf.compat.v1</code>, is really helpful.</p> <p>Useful link for migrating your TensorFlow 1.x code to TensorFlow 2.x: <a href="https://www.tensorflow.org/guide/migrate" rel="noreferrer">https://www.tensorflow.org/guide/migrate</a></p>
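<p>For example, a typical TF 1.x session-configuration snippet ported to TF 2.x via the compat module might look like this (the <code>allow_growth</code> option is purely illustrative):</p> <pre><code>import tensorflow as tf

config = tf.compat.v1.ConfigProto()
config.gpu_options.allow_growth = True  # illustrative option
sess = tf.compat.v1.Session(config=config)
</code></pre>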
you are the stackoverflow assistant
Installing Python3.6 alongside Python3.7 on Mac
<p>Try using <code>brew</code>; for example, if Python 3 is already installed:</p> <pre><code>$ brew unlink python </code></pre> <p>Then <a href="https://stackoverflow.com/a/51125014/1135424">install python 3.6.5</a>:</p> <pre><code>$ brew install --ignore-dependencies https://raw.githubusercontent.com/Homebrew/homebrew-core/f2a764ef944b1080be64bd88dca9a1d80130c558/Formula/python.rb </code></pre> <p>To get back to python <code>3.7.4_1</code> use:</p> <pre><code>$ brew switch python 3.7.4_1 </code></pre> <p>And if you need 3.6 again, switch with:</p> <pre><code>$ brew switch python 3.6.5_1 </code></pre>
you are the stackoverflow assistant
Tensorflow: How do I convert an EagerTensor into a numpy array?
<p>There is a <code>.numpy()</code> function which you can use; alternatively you could also do <code>numpy.array(y)</code>. For example:</p> <pre><code>import tensorflow as tf import numpy as np tf.enable_eager_execution() x = tf.constant([1., 2.]) print(type(x)) # &lt;type 'EagerTensor'&gt; print(type(x.numpy())) # &lt;type 'numpy.ndarray'&gt; print(type(np.array(x))) # &lt;type 'numpy.ndarray'&gt; </code></pre> <p>See <a href="https://www.tensorflow.org/programmers_guide/eager#setup_and_basic_usage" rel="noreferrer">the section in the eager execution guide</a>.</p>
you are the stackoverflow assistant
How to count total number of trainable parameters in a tensorflow model?
<p>Loop over the shape of every variable in <code>tf.trainable_variables()</code>.</p> <pre><code>total_parameters = 0 for variable in tf.trainable_variables(): # shape is an array of tf.Dimension shape = variable.get_shape() print(shape) print(len(shape)) variable_parameters = 1 for dim in shape: print(dim) variable_parameters *= dim.value print(variable_parameters) total_parameters += variable_parameters print(total_parameters) </code></pre> <p>Update: I wrote an article to clarify the dynamic/static shapes in Tensorflow because of this answer: <a href="https://pgaleone.eu/tensorflow/2018/07/28/understanding-tensorflow-tensors-shape-static-dynamic/" rel="noreferrer">https://pgaleone.eu/tensorflow/2018/07/28/understanding-tensorflow-tensors-shape-static-dynamic/</a></p>
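<p>Without the debug prints, the same computation collapses to a one-liner; a sketch assuming NumPy is available and all variable shapes are fully defined:</p> <pre><code>import numpy as np

total_parameters = sum(np.prod(v.get_shape().as_list())
                       for v in tf.trainable_variables())
</code></pre>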
you are the stackoverflow assistant
TensorFlow: InternalError: Blas SGEMM launch failed
<p>Old question, but may help others. <br/> Try to close interactive sessions active in other processes (if IPython Notebook - just restart kernels). This helped me!<br/><br/> Additionally, I use this code to close local sessions in this kernel during experiments:</p> <pre><code>if 'session' in locals() and session is not None: print('Close interactive session') session.close() </code></pre>
you are the stackoverflow assistant
Tensorflow Tensorboard default port
<p>In fact there is an option to change the default port:</p> <pre><code>tensorboard --logdir=/tmp --port=8008 </code></pre>
you are the stackoverflow assistant
FailedPreconditionError: Attempting to use uninitialized in Tensorflow
<p>The <code>FailedPreconditionError</code> arises because the program is attempting to read a variable (named <code>"Variable_1"</code>) before it has been initialized. In TensorFlow, all variables must be explicitly initialized, by running their "initializer" operations. For convenience, you can run all of the variable initializers in the current session by executing the following statement before your training loop:</p> <pre><code>tf.initialize_all_variables().run() </code></pre> <p>Note that this answer assumes that, as in the question, you are using <code>tf.InteractiveSession</code>, which allows you to run operations without specifying a session. For non-interactive uses, it is more common to use <code>tf.Session</code>, and initialize as follows:</p> <pre><code>init_op = tf.initialize_all_variables() sess = tf.Session() sess.run(init_op) </code></pre>
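<p>Note that in later TF 1.x releases <code>tf.initialize_all_variables()</code> was deprecated in favor of <code>tf.global_variables_initializer()</code>; the pattern is otherwise identical:</p> <pre><code>init_op = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init_op)
</code></pre>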
you are the stackoverflow assistant
How to export Keras .h5 to tensorflow .pb?
<p>Keras does not by itself include any means to export a TensorFlow graph as a protocol buffers file, but you can do it using regular TensorFlow utilities. <a href="https://blog.metaflow.fr/tensorflow-how-to-freeze-a-model-and-serve-it-with-a-python-api-d4f3596b3adc" rel="noreferrer">Here</a> is a blog post explaining how to do it using the utility script <a href="https://github.com/tensorflow/tensorflow/blob/v1.12.0/tensorflow/python/tools/freeze_graph.py" rel="noreferrer"><code>freeze_graph.py</code></a> included in TensorFlow, which is the "typical" way it is done.</p> <p>However, I personally find it a nuisance having to make a checkpoint and then run an external script to obtain a model, and instead prefer to do it from my own Python code, so I use a function like this:</p> <pre class="lang-py prettyprint-override"><code>def freeze_session(session, keep_var_names=None, output_names=None, clear_devices=True): """ Freezes the state of a session into a pruned computation graph. Creates a new computation graph where variable nodes are replaced by constants taking their current value in the session. The new graph will be pruned so subgraphs that are not necessary to compute the requested outputs are removed. @param session The TensorFlow session to be frozen. @param keep_var_names A list of variable names that should not be frozen, or None to freeze all the variables in the graph. @param output_names Names of the relevant graph outputs. @param clear_devices Remove the device directives from the graph for better portability. @return The frozen graph definition. """ graph = session.graph with graph.as_default(): freeze_var_names = list(set(v.op.name for v in tf.global_variables()).difference(keep_var_names or [])) output_names = output_names or [] output_names += [v.op.name for v in tf.global_variables()] input_graph_def = graph.as_graph_def() if clear_devices: for node in input_graph_def.node: node.device = "" frozen_graph = tf.graph_util.convert_variables_to_constants( session, input_graph_def, output_names, freeze_var_names) return frozen_graph </code></pre> <p>Which is inspired by the implementation of <code>freeze_graph.py</code>. The parameters are similar to the script's too. <code>session</code> is the TensorFlow session object. <code>keep_var_names</code> is only needed if you want to keep some variables not frozen (e.g. for stateful models), so generally it is not. <code>output_names</code> is a list with the names of the operations that produce the outputs that you want. <code>clear_devices</code> just removes any device directives to make the graph more portable. So, for a typical Keras <code>model</code> with one output, you would do something like:</p> <pre><code>from keras import backend as K # Create, compile and train model... frozen_graph = freeze_session(K.get_session(), output_names=[out.op.name for out in model.outputs]) </code></pre> <p>Then you can write the graph to a file as usual with <a href="https://www.tensorflow.org/api_docs/python/tf/train/write_graph" rel="noreferrer"><code>tf.train.write_graph</code></a>:</p> <pre><code>tf.train.write_graph(frozen_graph, "some_directory", "my_model.pb", as_text=False) </code></pre>
you are the stackoverflow assistant
"synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'." problem in TensorFlow
<p>If you're using TF 2.0 <strong>a quick solution would be to downgrade your numpy</strong> to 1.16.4. (I used 1.17 and received the same warning messages). </p> <pre><code>1. pip uninstall numpy 2. pip install numpy==1.16.4 </code></pre> <p>See <a href="https://github.com/tensorflow/tensorflow/issues/31249" rel="noreferrer">here</a> (thanks to ymodak)</p>
you are the stackoverflow assistant
pip3: command not found
<p>You would need to install pip3.</p> <p>On Linux, first run <code>sudo apt update</code>. Then the command would be: <code>sudo apt install python3-pip</code><br><br> On Mac, using brew: first <code>brew install python3</code><br> Then <code>brew postinstall python3</code></p> <p>Try calling <code>pip3 -V</code> to see if it worked.</p>
you are the stackoverflow assistant
Keras model.summary() object to string
<p>With my version of Keras (<code>2.0.6</code>) and Python (<code>3.5.0</code>), this works for me:</p> <pre><code># Create an empty model from keras.models import Sequential model = Sequential() # Open the file with open(filename + 'report.txt','w') as fh: # Pass the file handle in as a lambda function to make it callable model.summary(print_fn=lambda x: fh.write(x + '\n')) </code></pre> <p>This outputs the following lines to the file:</p> <pre><code>_________________________________________________________________ Layer (type) Output Shape Param # ================================================================= Total params: 0 Trainable params: 0 Non-trainable params: 0 _________________________________________________________________ </code></pre>
you are the stackoverflow assistant
How to "reset" tensorboard data after killing tensorflow instance
<p>Note: The solution you've posted (erase TensorBoard's log files and kill the process) will work, but it isn't preferred, because it destroys historical information about your training.</p> <p>Instead, you can have each new training job write to a new subdirectory (of your top-level log directory). Then, TensorBoard will consider each job a new "run" and will create a nice comparison view so you can see how the training differed between iterations of your model.</p> <p>The following is an example from <a href="https://www.tensorflow.org/tensorboard/get_started" rel="noreferrer">https://www.tensorflow.org/tensorboard/get_started</a>:</p> <pre><code>model = create_model() ... model.compile(...) log_dir = "logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S") tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1) model.fit(..., callbacks=[tensorboard_callback]) </code></pre>
you are the stackoverflow assistant
What is the difference between variable_scope and name_scope?
<p>I had problems understanding the difference between <a href="https://www.tensorflow.org/api_docs/python/tf/variable_scope" rel="noreferrer">variable_scope</a> and <a href="https://www.tensorflow.org/api_docs/python/tf/name_scope" rel="noreferrer">name_scope</a> (they looked almost the same) before I tried to visualize everything by creating a simple example:</p> <pre><code>import tensorflow as tf def scoping(fn, scope1, scope2, vals): with fn(scope1): a = tf.Variable(vals[0], name='a') b = tf.get_variable('b', initializer=vals[1]) c = tf.constant(vals[2], name='c') with fn(scope2): d = tf.add(a * b, c, name='res') print('\n '.join([scope1, a.name, b.name, c.name, d.name]), '\n') return d d1 = scoping(tf.variable_scope, 'scope_vars', 'res', [1, 2, 3]) d2 = scoping(tf.name_scope, 'scope_name', 'res', [1, 2, 3]) with tf.Session() as sess: writer = tf.summary.FileWriter('logs', sess.graph) sess.run(tf.global_variables_initializer()) print(sess.run([d1, d2])) writer.close() </code></pre> <p>Here I create a function that creates some variables and constants and groups them in scopes (depending on the type I provided). In this function I also print the names of all the variables. After that I execute the graph to get the resulting values and save the event files to investigate them in TensorBoard. If you run this, you will get the following:</p> <pre><code>scope_vars scope_vars/a:0 scope_vars/b:0 scope_vars/c:0 scope_vars/res/res:0 scope_name scope_name/a:0 b:0 scope_name/c:0 scope_name/res/res:0 </code></pre> <p>You see a similar pattern if you open TB (as you see, <code>b</code> is outside of the <code>scope_name</code> rectangle): <a href="https://i.sstatic.net/K0VgJ.png" rel="noreferrer"><img src="https://i.sstatic.net/K0VgJ.png" alt="enter image description here"></a></p> <hr> <p><strong>This gives you the answer</strong>:</p> <p>Now you see that <code>tf.variable_scope()</code> adds a prefix to the names of all variables (no matter how you create them), ops, and constants. On the other hand <code>tf.name_scope()</code> ignores variables created with <code>tf.get_variable()</code> because it assumes that you know which variable, and in which scope, you want to use.</p> <p>The documentation on <a href="https://www.tensorflow.org/programmers_guide/variable_scope" rel="noreferrer">Sharing variables</a> tells you that </p> <blockquote> <p><code>tf.variable_scope()</code>: Manages namespaces for names passed to <code>tf.get_variable()</code>.</p> </blockquote> <p>The same documentation provides more details on how variable scope works and when it is useful.</p>
you are the stackoverflow assistant
Tensorflow doesn't seem to see my gpu
<p>I came across this same issue in jupyter notebooks. This could be an easy fix.</p> <pre><code>$ pip uninstall tensorflow $ pip install tensorflow-gpu </code></pre> <p>You can check if it worked with:</p> <pre><code>tf.test.gpu_device_name() </code></pre> <h3>Update 2020</h3> <p>It seems like tensorflow 2.0+ comes with gpu capabilities, therefore <code>pip install tensorflow</code> should be enough.</p>
you are the stackoverflow assistant
Tensorflow One Hot Encoder?
<p>As of TensorFlow 0.8, there is now a <a href="https://www.tensorflow.org/api_docs/python/tf/one_hot" rel="noreferrer">native one-hot op, <code>tf.one_hot</code></a> that can convert a set of sparse labels to a dense one-hot representation. This is in addition to <a href="https://www.tensorflow.org/api_docs/python/tf/nn/sparse_softmax_cross_entropy_with_logits" rel="noreferrer"><code>tf.nn.sparse_softmax_cross_entropy_with_logits</code></a>, which can in some cases let you compute the cross entropy directly on the sparse labels instead of converting them to one-hot.</p> <p><strong>Previous answer, in case you want to do it the old way:</strong> @Salvador's answer is correct - there (used to be) no native op to do it. Instead of doing it in numpy, though, you can do it natively in tensorflow using the sparse-to-dense operators:</p> <pre><code>num_labels = 10 # label_batch is a tensor of numeric labels to process # 0 &lt;= label &lt; num_labels sparse_labels = tf.reshape(label_batch, [-1, 1]) derived_size = tf.shape(label_batch)[0] indices = tf.reshape(tf.range(0, derived_size, 1), [-1, 1]) concated = tf.concat(1, [indices, sparse_labels]) outshape = tf.pack([derived_size, num_labels]) labels = tf.sparse_to_dense(concated, outshape, 1.0, 0.0) </code></pre> <p>The output, labels, is a one-hot matrix of batch_size x num_labels.</p> <p>Note also that as of 2016-02-12 (which I assume will eventually be part of a 0.7 release), TensorFlow also has the <code>tf.nn.sparse_softmax_cross_entropy_with_logits</code> op, which in some cases can let you do training without needing to convert to a one-hot encoding.</p> <p>Edited to add: At the end, you may need to explicitly set the shape of labels. The shape inference doesn't recognize the size of the num_labels component. If you don't need a dynamic batch size with derived_size, this can be simplified.</p> <p>Edited 2016-02-12 to change the assignment of outshape per comment below.</p>
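<p>For completeness, the native op reduces the whole snippet above to a single call; a sketch reusing <code>label_batch</code> and <code>num_labels</code> from above:</p> <pre><code>labels = tf.one_hot(label_batch, depth=num_labels, on_value=1.0, off_value=0.0)
</code></pre>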
you are the stackoverflow assistant
Split a dataset created by Tensorflow dataset API in to Train and Test?
<p>Assuming you have an <code>all_dataset</code> variable of type <code>tf.data.Dataset</code>:</p> <pre><code>test_dataset = all_dataset.take(1000) train_dataset = all_dataset.skip(1000) </code></pre> <p>The test dataset now has the first 1000 elements and the rest goes to training.</p>
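<p>If you would rather split by fraction than by a fixed count, a minimal sketch (assuming the dataset size is known up front):</p> <pre><code>DATASET_SIZE = 10000  # assumed known
train_size = int(0.8 * DATASET_SIZE)

train_dataset = all_dataset.take(train_size)
test_dataset = all_dataset.skip(train_size)
</code></pre> <p>If you shuffle before splitting, use a fixed seed and <code>reshuffle_each_iteration=False</code>; otherwise <code>take</code>/<code>skip</code> can draw different (overlapping) elements on every epoch.</p>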
you are the stackoverflow assistant
Keras - Difference between categorical_accuracy and sparse_categorical_accuracy
<p>So in <code>categorical_accuracy</code> you need to specify your target (<code>y</code>) as a one-hot encoded vector (e.g. in the case of 3 classes, when the true class is the second class, <code>y</code> should be <code>(0, 1, 0)</code>). In <code>sparse_categorical_accuracy</code> you should only provide an integer of the true class (in the case of the previous example it would be <code>1</code>, as class indexing is <code>0</code>-based).</p>
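<p>In Keras the metric usually goes hand in hand with the matching loss; a minimal sketch, assuming an already-built <code>model</code>:</p> <pre><code># targets are one-hot vectors, e.g. (0, 1, 0)
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['categorical_accuracy'])

# targets are integer class indices, e.g. 1
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['sparse_categorical_accuracy'])
</code></pre>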
you are the stackoverflow assistant
In TensorFlow, what is tf.identity used for?
<p>After some stumbling I think I've noticed a single use case that fits all the examples I've seen. If there are other use cases, please elaborate with an example.</p> <p>Use case:</p> <p>Suppose you'd like to run an operator every time a particular Variable is evaluated. For example, say you'd like to add one to <code>x</code> every time the variable <code>y</code> is evaluated. It might seem like this will work:</p> <pre><code>x = tf.Variable(0.0) x_plus_1 = tf.assign_add(x, 1) with tf.control_dependencies([x_plus_1]): y = x init = tf.initialize_all_variables() with tf.Session() as session: init.run() for i in range(5): print(y.eval()) </code></pre> <p>It doesn't: it'll print 0, 0, 0, 0, 0. Instead, it seems that we need to add a new node to the graph within the <code>control_dependencies</code> block. So we use this trick:</p> <pre><code>x = tf.Variable(0.0) x_plus_1 = tf.assign_add(x, 1) with tf.control_dependencies([x_plus_1]): y = tf.identity(x) init = tf.initialize_all_variables() with tf.Session() as session: init.run() for i in range(5): print(y.eval()) </code></pre> <p>This works: it prints 1, 2, 3, 4, 5.</p> <p>If in the CIFAR-10 tutorial we dropped <code>tf.identity</code>, then <code>loss_averages_op</code> would never run.</p>
you are the stackoverflow assistant
Module 'tensorflow' has no attribute 'contrib'
<p><code>tf.contrib</code> has moved out of TF starting with TF 2.0 alpha.<br> Take a look at these TF 2.0 release notes <a href="https://github.com/tensorflow/tensorflow/releases/tag/v2.0.0-alpha0" rel="noreferrer">https://github.com/tensorflow/tensorflow/releases/tag/v2.0.0-alpha0</a><br> You can upgrade your TF 1.x code to TF 2.x using the <code>tf_upgrade_v2</code> script <a href="https://www.tensorflow.org/alpha/guide/upgrade" rel="noreferrer">https://www.tensorflow.org/alpha/guide/upgrade</a></p>
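<p>The upgrade script ships with the TF 2.x pip package and can be run on a single file or a whole tree (the file and directory names below are hypothetical):</p> <pre><code>tf_upgrade_v2 --infile my_old_script.py --outfile my_upgraded_script.py
tf_upgrade_v2 --intree my_project/ --outtree my_project_v2/
</code></pre>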
you are the stackoverflow assistant
tf.data.Dataset: how to get the dataset size (number of elements in an epoch)?
<p><code>len(list(dataset))</code> works in eager mode, although that's obviously not a good general solution.</p>
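<p>More recent versions also offer <code>tf.data.experimental.cardinality</code>, which reports the size when it is statically known (and returns the <code>tf.data.experimental.UNKNOWN_CARDINALITY</code> / <code>INFINITE_CARDINALITY</code> sentinels otherwise); a small sketch:</p> <pre><code>import tensorflow as tf

dataset = tf.data.Dataset.range(42)
n = tf.data.experimental.cardinality(dataset)
print(int(n))  # 42
</code></pre>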
you are the stackoverflow assistant
Why the 6 in relu6?
<p>From <a href="https://www.reddit.com/r/MachineLearning/comments/3s65x8/tensorflow_relu6_minmaxfeatures_0_6/" rel="noreferrer">this reddit thread</a>:</p> <blockquote> <p>This is useful in making the networks ready for fixed-point inference. If you unbound the upper limit, you lose too many bits to the Q part of a Q.f number. Keeping the ReLUs bounded by 6 will let them take a max of 3 bits (upto 8) leaving 4/5 bits for .f</p> </blockquote> <p>It seems, then, that 6 is just an arbitrary value chosen according to the number of bits you want to be able to compress your network's trained parameters into. As for why only the version with the value 6 is implemented, I assume it's because that's the value that fits best in 8 bits, which is probably the most common use case.</p>
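<p>A quick way to see the bound in eager TF 2.x (the input values are purely illustrative):</p> <pre><code>import tensorflow as tf

x = tf.constant([-3.0, 3.0, 9.0])
print(tf.nn.relu6(x).numpy())                 # [0. 3. 6.]
print(tf.clip_by_value(x, 0.0, 6.0).numpy())  # same result: min(max(x, 0), 6)
</code></pre>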
you are the stackoverflow assistant
Unbalanced data and weighted cross entropy
<p>Note that <code>weighted_cross_entropy_with_logits</code> is the weighted variant of <code>sigmoid_cross_entropy_with_logits</code>. Sigmoid cross entropy is typically used for <em>binary</em> classification. Yes, it can handle multiple labels, but sigmoid cross entropy basically makes a (binary) decision on each of them -- for example, for a face recognition net, those (not mutually exclusive) labels could be "<em>Does the subject wear glasses?</em>", "<em>Is the subject female?</em>", etc.</p> <p>In binary classification(s), each output channel corresponds to a binary (soft) decision. Therefore, the weighting needs to happen within the computation of the loss. This is what <code>weighted_cross_entropy_with_logits</code> does, by weighting one term of the cross-entropy over the other.</p> <p>In mutually exclusive multiclass classification, we use <code>softmax_cross_entropy_with_logits</code>, which behaves differently: each output channel corresponds to the score of a class candidate. The decision comes <em>after</em>, by comparing the respective outputs of each channel.</p> <p>Weighting before the final decision is therefore a simple matter of modifying the scores before comparing them, typically by multiplying them with weights. For example, for a ternary classification task,</p> <pre><code># your class weights class_weights = tf.constant([[1.0, 2.0, 3.0]]) # deduce weights for batch samples based on their true label weights = tf.reduce_sum(class_weights * onehot_labels, axis=1) # compute your (unweighted) softmax cross entropy loss unweighted_losses = tf.nn.softmax_cross_entropy_with_logits(labels=onehot_labels, logits=logits) # apply the weights, relying on broadcasting of the multiplication weighted_losses = unweighted_losses * weights # reduce the result to get your final loss loss = tf.reduce_mean(weighted_losses) </code></pre> <p>You could also rely on <code>tf.losses.softmax_cross_entropy</code> to handle the last three steps.</p> <p>In your case, where you need to tackle data imbalance, the class weights could indeed be inversely proportional to their frequency in your train data. Normalizing them so that they sum up to one or to the number of classes also makes sense.</p> <p>Note that in the above, we penalized the loss based on the true label of the samples. We could also have penalized the loss based on the <em>estimated</em> labels by simply defining</p> <pre><code>weights = class_weights </code></pre> <p>and the rest of the code need not change thanks to broadcasting magic.</p> <p>In the general case, you would want weights that depend on the kind of error you make. In other words, for each pair of labels <code>X</code> and <code>Y</code>, you could choose how to penalize choosing label <code>X</code> when the true label is <code>Y</code>. You end up with a whole prior weight matrix, which results in <code>weights</code> above being a full <code>(num_samples, num_classes)</code> tensor. This goes a bit beyond what you want, but it might be useful to know nonetheless that only your definition of the weight tensor needs to change in the code above.</p>
you are the stackoverflow assistant
TensorFlow - Importing data from a TensorBoard TFEvent file?
<p>As Fabrizio <a href="https://stackoverflow.com/a/37306050/3574081">says</a>, TensorBoard is a great tool for visualizing the contents of your summary logs. However, if you want to perform a custom analysis, you can use the <a href="https://stackoverflow.com/a/37306050/3574081"><code>tf.train.summary_iterator()</code></a> function to loop over all of the <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/util/event.proto" rel="noreferrer"><code>tf.Event</code></a> and <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/framework/summary.proto" rel="noreferrer"><code>tf.Summary</code></a> protocol buffers in the log:</p> <pre><code>for summary in tf.train.summary_iterator("/path/to/log/file"): # Perform custom processing in here. </code></pre> <p>UPDATE for tf2:</p> <pre><code>from tensorflow.python.summary.summary_iterator import summary_iterator </code></pre> <p>You need to import it explicitly; that module is not imported by default as of 2.0.0-rc2.</p>
you are the stackoverflow assistant
How to add if condition in a TensorFlow graph?
<p>You're correct that the <code>if</code> statement doesn't work here, because the condition is evaluated at graph construction time, whereas presumably you want the condition to depend on the value fed to the placeholder at runtime. (In fact, it will always take the first branch, because <code>condition &gt; 0</code> evaluates to a <code>Tensor</code>, which is <a href="https://www.udacity.com/wiki/cs258/truthiness-in-python" rel="noreferrer">"truthy" in Python</a>.)</p> <p>To support conditional control flow, TensorFlow provides the <a href="https://www.tensorflow.org/versions/r0.7/api_docs/python/control_flow_ops.html#cond" rel="noreferrer"><code>tf.cond()</code></a> operator, which evaluates one of two branches, depending on a boolean condition. To show you how to use it, I'll rewrite your program so that <code>condition</code> is a scalar <code>tf.int32</code> value for simplicity:</p> <pre class="lang-py prettyprint-override"><code>x = tf.placeholder(tf.float32, shape=[None, ins_size**2*3], name="x_input") condition = tf.placeholder(tf.int32, shape=[], name="condition") W = tf.Variable(tf.zeros([ins_size**2 * 3, label_option]), name="weights") b = tf.Variable(tf.zeros([label_option]), name="bias") y = tf.cond(condition &gt; 0, lambda: tf.matmul(x, W) + b, lambda: tf.matmul(x, W) - b) </code></pre>
you are the stackoverflow assistant
How do I use TensorFlow GPU?
<p>Follow this tutorial: <a href="https://www.codingforentrepreneurs.com/blog/install-tensorflow-gpu-windows-cuda-cudnn/" rel="noreferrer">Tensorflow GPU</a>. I did it and it works perfectly.</p> <p><strong>Attention!</strong> Install <strong>version 9.0!</strong> Newer versions are not supported by tensorflow-gpu.</p> <p><strong>Steps:</strong></p> <ol> <li>Uninstall your old tensorflow</li> <li>Install tensorflow-gpu: <code>pip install tensorflow-gpu</code></li> <li>Install Nvidia graphics card &amp; drivers (you probably already have them)</li> <li>Download &amp; install CUDA</li> <li>Download &amp; install cuDNN</li> <li>Verify with a simple program</li> </ol> <pre class="lang-py prettyprint-override"><code>from tensorflow.python.client import device_lib print(device_lib.list_local_devices()) </code></pre>
you are the stackoverflow assistant
Nvidia Cudatoolkit vs Conda Cudatoolkit
<p>If using anaconda to install tensorflow-gpu, yes, it will install CUDA and cuDNN for you in the same conda environment as tensorflow-gpu. All you need to install yourself is the latest nvidia driver (so that it works with the latest CUDA level and all older CUDA levels you use).</p> <p>This has many advantages over the pip install tensorflow-gpu method:</p> <ol> <li>Anaconda will always install the CUDA and cuDNN version that the TensorFlow code was compiled to use.</li> <li>You can have multiple conda environments with different levels of TensorFlow, CUDA, and cuDNN and just use conda activate to switch between them.</li> <li>You don't have to deal with installing CUDA and cuDNN manually at the system-wide level.</li> </ol> <p>The disadvantage compared to pip install tensorflow-gpu is that the latest version of TensorFlow is added to PyPI weeks before Anaconda is able to update the conda recipe and publish their builds of the latest TensorFlow version.</p>
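<p>A minimal sketch of the conda workflow described above (the environment name is arbitrary):</p> <pre><code>conda create -n tf-gpu tensorflow-gpu
conda activate tf-gpu
</code></pre>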
you are the stackoverflow assistant
"Could not interpret optimizer identifier" error in Keras
<p>The reason is that you are using the <code>tensorflow.python.keras</code> API for the model and layers and <code>keras.optimizers</code> for SGD. These are two different Keras implementations: the one bundled with TensorFlow and standalone Keras. They cannot work together. You have to change everything to one version. Then it should work.</p>
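<p>Concretely, keep every import in one namespace; a sketch using the TensorFlow-bundled Keras throughout (the tiny model is only for illustration):</p> <pre><code># Use tensorflow.keras everywhere -- do not mix with `import keras`
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import SGD

model = Sequential([Dense(1, input_shape=(10,))])
model.compile(optimizer=SGD(0.01), loss='mse')
</code></pre>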
you are the stackoverflow assistant
How do I check if keras is using gpu version of tensorflow?
<p>You are using the GPU version. You can list the available tensorflow devices with (also check <a href="https://stackoverflow.com/questions/38559755/how-to-get-current-available-gpus-in-tensorflow">this</a> question):</p> <pre><code>from tensorflow.python.client import device_lib print(device_lib.list_local_devices()) # list of DeviceAttributes </code></pre> <p><strong>EDIT:</strong></p> <p>With tensorflow >= 1.4 you can run the <a href="https://www.tensorflow.org/api_docs/python/tf/test/is_gpu_available" rel="noreferrer">following</a> function:</p> <pre><code>import tensorflow as tf tf.test.is_gpu_available() # True/False # Or only check for gpu's with cuda support tf.test.is_gpu_available(cuda_only=True) </code></pre> <p><strong>EDIT 2:</strong></p> <p>The above function is deprecated in <code>tensorflow &gt; 2.1</code>. Instead you should use the following function:</p> <pre><code>import tensorflow as tf tf.config.list_physical_devices('GPU') </code></pre> <hr> <p><strong>NOTE:</strong></p> <p>In your case both the cpu and gpu are available; if you use the cpu version of tensorflow, the gpu will not be listed. In your case, without setting your tensorflow device (<code>with tf.device("..")</code>), tensorflow will automatically pick your gpu!</p> <p>In addition, your <code>sudo pip3 list</code> clearly shows you are using tensorflow-gpu. If you had the tensorflow cpu version, the name would be something like <code>tensorflow(1.1.0)</code>.</p> <p>Check <a href="https://github.com/tensorflow/tensorflow/issues/7778" rel="noreferrer">this</a> issue for information about the warnings.</p>
you are the stackoverflow assistant
AttributeError: module 'tensorflow' has no attribute 'reset_default_graph'
<p>This function is deprecated. Use <code>tf.compat.v1.reset_default_graph()</code> instead.</p> <p><em>Update</em> This is not the only function to be out of date. Check out <a href="https://stackoverflow.com/a/55872941/8205650">this answer</a> for release notes and a conversion script. </p>
you are the stackoverflow assistant
Convert Keras model to C++
<p>To answer my own question and have a solution: I wrote a plain C++ solution called <a href="https://github.com/pplonski/keras2cpp" rel="noreferrer">keras2cpp</a> (its code is available on GitHub).</p> <p>In this solution you store the network architecture (in JSON) and weights (in HDF5). Then you can dump the network to a plain text file with the provided script. You can use the obtained text file with the network in pure C++ code. There are no dependencies on Python libraries or HDF5. It should work for both the Theano and TensorFlow backends.</p>
you are the stackoverflow assistant
Remove nodes from graph or reset entire default graph
<p><strong>Update 11/2/2016</strong></p> <p><code>tf.reset_default_graph()</code></p> <p><strong>Old stuff</strong></p> <p>There's <code>reset_default_graph</code>, but it is not part of the public API (I think it should be; does someone want to <a href="https://github.com/tensorflow/tensorflow/issues">file an issue</a> on GitHub?)</p> <p>My work-around to reset things is this:</p> <pre><code>from tensorflow.python.framework import ops ops.reset_default_graph() sess = tf.InteractiveSession() </code></pre>
you are the stackoverflow assistant
What's the difference between scikit-learn and tensorflow? Is it possible to use them together?
<p>TensorFlow is a library for constructing neural networks. scikit-learn contains ready-to-use algorithms. TF can work with a variety of data types: tabular, text, images, audio. scikit-learn is intended to work with tabular data.</p> <p>Yes, you can use both packages. But if you need only a classic multi-layer implementation then the <code>MLPClassifier</code> and <code>MLPRegressor</code> available in scikit-learn are a very good choice. I have run a comparison of the MLP implemented in TF vs scikit-learn and there weren't significant differences; the scikit-learn MLP works about 2 times faster than TF on CPU. You can read the details of the comparison in <a href="https://mljar.com/blog/tensorflow-vs-scikit-learn/" rel="noreferrer">my blog post</a>.</p> <p>Below are scatter plots of the performance comparison:</p> <p><a href="https://i.sstatic.net/54VVq.png" rel="noreferrer"><img src="https://i.sstatic.net/54VVq.png" alt="Tensorflow vs Scikit-learn on classification task" /></a></p> <p><a href="https://i.sstatic.net/7zJGr.png" rel="noreferrer"><img src="https://i.sstatic.net/7zJGr.png" alt="Tensorflow vs Scikit-learn on regression task" /></a></p>
you are the stackoverflow assistant
How to define max_queue_size, workers and use_multiprocessing in keras fit_generator()?
<p>Q_0: </p> <blockquote> <p>Question: Does this refer to how many batches are prepared on the CPU? How is it related to workers? How do I define it optimally?</p> </blockquote> <p>From the <a href="https://stackoverflow.com/questions/36986815/what-is-the-parameter-max-q-size-used-for-in-model-fit-generator/36989864#36989864">link</a> you posted, you can learn that your CPU keeps creating batches until the queue is at the maximum queue size or reaches the stop. You want to have batches ready for your GPU to "take" so that the GPU doesn't have to wait for the CPU. An ideal value for the queue size would be to make it large enough that your GPU is always running near its maximum and never has to wait for the CPU to prepare new batches.</p> <p>Q_1:</p> <blockquote> <p>Question: How do I find out how many batches my CPU can/should generate in parallel?</p> </blockquote> <p>If you see that your GPU is idling and waiting for batches, try to increase the number of workers and perhaps also the queue size.</p> <p>Q_2:</p> <blockquote> <p>Do I have to set this parameter to true if I change workers? Does it relate to CPU usage?</p> </blockquote> <p><a href="https://keunwoochoi.wordpress.com/2017/08/24/tip-fit_generator-in-keras-how-to-parallelise-correctly/" rel="noreferrer">Here</a> is a practical analysis of what happens when you set it to <code>True</code> or <code>False</code>. <a href="https://stackoverflow.com/questions/54620551/confusion-about-multiprocessing-and-workers-in-keras-fit-generator-with-window">Here</a> is a recommendation to set it to <code>False</code> to prevent freezing (in my setup <code>True</code> works fine without freezing). Perhaps someone else can increase our understanding of the topic.</p> <h3>In summary:</h3> <p>Try not to have a sequential setup; try to enable the CPU to provide enough data for the GPU. <img src="https://www.embedded-vision.com/sites/default/files/technical-articles/OpenCLGPUs/Figure1.jpg" alt=""></p> <p>Also: you could (should?) create several questions next time, so that it is easier to answer them.</p>
you are the stackoverflow assistant
Dimension of shape in conv1D
<p><strong>tl;dr</strong> you need to reshape your data to have a <em>spatial</em> dimension for <code>Conv1d</code> to make sense:</p> <pre><code>X = np.expand_dims(X, axis=2) # reshape (569, 30) to (569, 30, 1) # now input can be set as model.add(Conv1D(2,2,activation='relu',input_shape=(30, 1))) </code></pre> <p>Essentially reshaping a dataset that looks like this:</p> <pre><code>features .8, .1, .3 .2, .4, .6 .7, .2, .1 </code></pre> <p>To:</p> <pre><code>[[.8 .1 .3], [.2, .4, .6 ], [.7, .2, .1]] </code></pre> <p><strong>Explanation and examples</strong></p> <p>Normally convolution works over spatial dimensions. The kernel is &quot;convolved&quot; over the dimension producing a tensor. In the case of Conv1D, the kernel is passed over the 'steps' dimension of every example.</p> <p>You will see Conv1D used in NLP where <code>steps</code> is the number of words in the sentence (padded to some fixed maximum length). The words would be encoded as vectors of length 3.</p> <p>Here is an example sentence:</p> <pre><code>jack .1 .3 -.52 | is .05 .8, -.7 |&lt;--- kernel is `convolving` along this dimension. a .5 .31 -.2 | boy .5 .8 -.4 \|/ </code></pre> <p>And the way we would set the input to the conv in this case:</p> <pre><code>maxlen = 4 input_dim = 3 model.add(Conv1D(2,2,activation='relu',input_shape=(maxlen, input_dim))) </code></pre> <p>In your case, you will treat the features as the spatial dimensions with each feature having length 1. (see below)</p> <p>Here would be an example from your dataset</p> <pre><code>att1 .04 | att2 .05 | &lt; -- kernel convolving along this dimension att3 .1 | notice the features have length 1. each att4 .5 \|/ example has these 4 features. </code></pre> <p>And we would set the Conv1D example as:</p> <pre><code>maxlen = num_features = 4 # this would be 30 in your case input_dim = 1 # since this is the length of _each_ feature (as shown above) model.add(Conv1D(2,2,activation='relu',input_shape=(maxlen, input_dim))) </code></pre> <p>As you see, your dataset has to be reshaped into (569, 30, 1); use:</p> <pre><code>X = np.expand_dims(X, axis=2) # reshape (569, 30, 1) # now input can be set as model.add(Conv1D(2,2,activation='relu',input_shape=(30, 1))) </code></pre> <p>Here is a full-fledged example that you can run (I'll use the <a href="https://keras.io/getting-started/functional-api-guide/" rel="noreferrer">Functional API</a>)</p> <pre><code>from keras.models import Model from keras.layers import Conv1D, Dense, MaxPool1D, Flatten, Input import numpy as np inp = Input(shape=(5, 1)) conv = Conv1D(filters=2, kernel_size=2)(inp) pool = MaxPool1D(pool_size=2)(conv) flat = Flatten()(pool) dense = Dense(1)(flat) model = Model(inp, dense) model.compile(loss='mse', optimizer='adam') print(model.summary()) # get some data X = np.expand_dims(np.random.randn(10, 5), axis=2) y = np.random.randn(10, 1) # fit model model.fit(X, y) </code></pre>
you are the stackoverflow assistant
tf.nn.conv2d vs tf.layers.conv2d
<p>As GBY mentioned, they use the same implementation.</p> <p>There is a slight difference in the parameters.</p> <p>For tf.nn.conv2d:</p> <pre><code>filter: A Tensor. Must have the same type as input. A 4-D tensor of shape [filter_height, filter_width, in_channels, out_channels] </code></pre> <p>For tf.layers.conv2d:</p> <pre><code>filters: Integer, the dimensionality of the output space (i.e. the number of filters in the convolution). </code></pre> <p>I would use tf.nn.conv2d when loading a pretrained model (example code: <a href="https://github.com/ry/tensorflow-vgg16" rel="noreferrer">https://github.com/ry/tensorflow-vgg16</a>), and tf.layers.conv2d for a model trained from scratch.</p>
you are the stackoverflow assistant
How to set specific gpu in tensorflow?
<p>There are 3 ways to achieve this:</p> <ol> <li><p>Using the <code>CUDA_VISIBLE_DEVICES</code> environment variable. Setting <code>CUDA_VISIBLE_DEVICES="1"</code> makes only device 1 visible, and setting <code>CUDA_VISIBLE_DEVICES="0,1"</code> makes devices 0 and 1 visible. You can do this in Python by adding the line <code>os.environ["CUDA_VISIBLE_DEVICES"]="0,1"</code> after importing the <code>os</code> package.</p></li> <li><p>Using <code>with tf.device('/gpu:2')</code> and creating the graph. Then it will use GPU device 2 to run.</p></li> <li><p>Using <code>config = tf.ConfigProto(device_count = {'GPU': 1})</code> and then <code>sess = tf.Session(config=config)</code>. Note that <code>device_count</code> caps how many GPUs TensorFlow may use (here, at most one); it does not select which device that is.</p></li> </ol>
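<p>The environment-variable route also works from the shell, which avoids touching the code at all (the script name is hypothetical):</p> <pre><code>CUDA_VISIBLE_DEVICES=1 python my_script.py
</code></pre> <p>Keep in mind that inside the process the single visible device is then renumbered, so your code addresses it as <code>/gpu:0</code>.</p>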
you are the stackoverflow assistant
TensorFlow - regularization with L2 loss, how to apply to all weights, not just last one?
<p>A shorter and scalable way of doing this would be:</p> <pre><code>vars = tf.trainable_variables() lossL2 = tf.add_n([ tf.nn.l2_loss(v) for v in vars ]) * 0.001 </code></pre> <p>This basically sums the l2_loss of all your trainable variables. You could also make a dictionary where you specify only the variables you want to add to your cost and use the second line above. Then you can add lossL2 to your softmax cross entropy value in order to calculate your total loss.</p> <p><strong>Edit</strong>: As mentioned by Piotr Dabkowski, <em>the code above will also regularise biases</em>. This can be avoided by adding an if statement in the second line:</p> <pre><code>lossL2 = tf.add_n([ tf.nn.l2_loss(v) for v in vars if 'bias' not in v.name ]) * 0.001 </code></pre> <p>This can be used to exclude other variables.</p>
you are the stackoverflow assistant
Simple way to visualize a TensorFlow graph in Jupyter?
<p>Here's a recipe I copied from one of Alex Mordvintsev's deep dream <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/tutorials/deepdream/deepdream.ipynb" rel="noreferrer">notebooks</a> at some point</p> <pre class="lang-python prettyprint-override"><code>from IPython.display import clear_output, Image, display, HTML import numpy as np def strip_consts(graph_def, max_const_size=32): """Strip large constant values from graph_def.""" strip_def = tf.GraphDef() for n0 in graph_def.node: n = strip_def.node.add() n.MergeFrom(n0) if n.op == 'Const': tensor = n.attr['value'].tensor size = len(tensor.tensor_content) if size &gt; max_const_size: tensor.tensor_content = "&lt;stripped %d bytes&gt;"%size return strip_def def show_graph(graph_def, max_const_size=32): """Visualize TensorFlow graph.""" if hasattr(graph_def, 'as_graph_def'): graph_def = graph_def.as_graph_def() strip_def = strip_consts(graph_def, max_const_size=max_const_size) code = """ &lt;script&gt; function load() {{ document.getElementById("{id}").pbtxt = {data}; }} &lt;/script&gt; &lt;link rel="import" href="https://tensorboard.appspot.com/tf-graph-basic.build.html" onload=load()&gt; &lt;div style="height:600px"&gt; &lt;tf-graph-basic id="{id}"&gt;&lt;/tf-graph-basic&gt; &lt;/div&gt; """.format(data=repr(str(strip_def)), id='graph'+str(np.random.rand())) iframe = """ &lt;iframe seamless style="width:1200px;height:620px;border:0" srcdoc="{}"&gt;&lt;/iframe&gt; """.format(code.replace('"', '&amp;quot;')) display(HTML(iframe)) </code></pre> <p>Then to visualize the current graph</p> <pre class="lang-python prettyprint-override"><code>show_graph(tf.get_default_graph().as_graph_def()) </code></pre> <p>If your graph is saved as pbtxt, you could do</p> <pre class="lang-python prettyprint-override"><code>gdef = tf.GraphDef() from google.protobuf import text_format text_format.Merge(open("tf_persistent.pbtxt").read(), gdef) show_graph(gdef) </code></pre> <p>You'll see something like this</p> <p><a href="https://i.sstatic.net/1XDms.png" rel="noreferrer"><img src="https://i.sstatic.net/1XDms.png" alt="enter image description here"></a></p>
you are the stackoverflow assistant
Tensorflow NaN bug?
<p>Actually, it turned out to be something stupid. I'm posting this in case anyone else runs into a similar error.</p> <pre><code>cross_entropy = -tf.reduce_sum(y_*tf.log(y_conv)) </code></pre> <p>is actually a horrible way of computing the cross-entropy. In some samples, certain classes could be excluded with certainty after a while, resulting in y_conv=0 for that sample. That's normally not a problem since you're not interested in those, but in the way cross_entropy is written there, it yields 0*log(0) for that particular sample/class. Hence the NaN.</p> <p>Replacing it with</p> <pre><code>cross_entropy = -tf.reduce_sum(y_*tf.log(tf.clip_by_value(y_conv,1e-10,1.0))) </code></pre> <p>solved all my problems.</p>
you are the stackoverflow assistant
Tensorflow vs OpenCV
<p>The main difference is that TensorFlow is a framework for machine learning, and OpenCV is a library for computer vision. It can be a good start to check the link below to get a grasp of the difference between a framework and a library: <a href="https://stackoverflow.com/questions/148747/what-is-the-difference-between-a-framework-and-a-library">What is the difference between a framework and a library?</a></p> <p>You can do image recognition with TensorFlow, though it is suited for more general problems as well, such as classification, clustering and regression.</p> <p>I guess people downvoted because this question might be more relevant to: <a href="https://datascience.stackexchange.com/">https://datascience.stackexchange.com/</a></p>
you are the stackoverflow assistant
No module named 'tqdm'
<p>You need to install the tqdm module; you can do it using pip:</p> <pre><code>pip install tqdm </code></pre> <p>For more info see <a href="https://pypi.python.org/pypi/tqdm#latest-pypi-stable-release" rel="noreferrer">tqdm</a>.</p>
you are the stackoverflow assistant
How to understand static shape and dynamic shape in TensorFlow?
<p>Sometimes the shape of a tensor depends on a value that is computed at runtime. Let's take the following example, where <code>x</code> is defined as a <a href="https://www.tensorflow.org/versions/r0.8/api_docs/python/io_ops.html#placeholder" rel="noreferrer"><code>tf.placeholder()</code></a> vector with four elements:</p> <pre><code>x = tf.placeholder(tf.int32, shape=[4]) print(x.get_shape()) # ==&gt; '(4,)' </code></pre> <p>The value of <code>x.get_shape()</code> is the static shape of <code>x</code>, and the <code>(4,)</code> means that it is a vector of length 4. Now let's apply the <a href="https://www.tensorflow.org/versions/r0.8/api_docs/python/math_ops.html#unique" rel="noreferrer"><code>tf.unique()</code></a> op to <code>x</code></p> <pre><code>y, _ = tf.unique(x) print(y.get_shape()) # ==&gt; '(?,)' </code></pre> <p>The <code>(?,)</code> means that <code>y</code> is a vector of unknown length. Why is it unknown? <a href="https://www.tensorflow.org/versions/r0.8/api_docs/python/math_ops.html#unique" rel="noreferrer"><code>tf.unique(x)</code></a> returns the unique values from <code>x</code>, and the values of <code>x</code> are unknown because it is a <code>tf.placeholder()</code>, so it doesn't have a value until you feed it. Let's see what happens if you feed two different values:</p> <pre><code>sess = tf.Session() print(sess.run(y, feed_dict={x: [0, 1, 2, 3]}).shape) # ==&gt; '(4,)' print(sess.run(y, feed_dict={x: [0, 0, 0, 0]}).shape) # ==&gt; '(1,)' </code></pre> <p>Hopefully this makes it clear that a tensor can have a different static and dynamic shape. The dynamic shape is always fully defined—it has no <code>?</code> dimensions—but the static shape can be less specific. This is what allows TensorFlow to support operations like <code>tf.unique()</code> and <a href="https://www.tensorflow.org/versions/r0.8/api_docs/python/array_ops.html#dynamic_partition" rel="noreferrer"><code>tf.dynamic_partition()</code></a>, which can have variable-sized outputs, and are used in advanced applications.</p> <p>Finally, the <a href="https://www.tensorflow.org/versions/r0.8/api_docs/python/array_ops.html#shape" rel="noreferrer"><code>tf.shape()</code></a> op can be used to get the dynamic shape of a tensor and use it in a TensorFlow computation:</p> <pre><code>z = tf.shape(y) print(sess.run(z, feed_dict={x: [0, 1, 2, 3]})) # ==&gt; [4] print(sess.run(z, feed_dict={x: [0, 0, 0, 0]})) # ==&gt; [1] </code></pre> <p>Here's a schematic image showing both: <a href="https://i.sstatic.net/ul2KM.png" rel="noreferrer"><img src="https://i.sstatic.net/ul2KM.png" alt="enter image description here" /></a></p>
you are the stackoverflow assistant
Error running basic tensorflow example
<p>From the path in your stack trace (<code>/git/tensorflow/tensorflow/…</code>), it looks like your Python path may be loading the tensorflow libraries from the source directory, rather than the version that you have installed. As a result, it is unable to find the (compiled) <code>pywrap_tensorflow</code> library, which is installed in a different directory.</p> <p>A common solution is to <code>cd</code> out of the <code>/git/tensorflow</code> directory before starting <code>python</code> or <code>ipython</code>.</p>
you are the stackoverflow assistant
ValueError: Shapes (None, 1) and (None, 2) are incompatible
<p><strong>I was facing the same problem; my shapes were:</strong></p> <pre><code>shape of X (271, 64, 64, 3) shape of y (271,) shape of trainX (203, 64, 64, 3) shape of trainY (203, 1) shape of testX (68, 64, 64, 3) shape of testY (68, 1) </code></pre> <p>and</p> <pre><code>loss=&quot;categorical_crossentropy&quot; </code></pre> <p>I changed it to</p> <pre><code>loss=&quot;sparse_categorical_crossentropy&quot; </code></pre> <p>and it worked like a charm for me.</p>
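<p>The converse fix, if you would rather keep <code>categorical_crossentropy</code>, is to one-hot encode the targets; a sketch assuming 2 classes, matching the <code>(None, 2)</code> model output:</p> <pre><code>from tensorflow.keras.utils import to_categorical

trainY = to_categorical(trainY, num_classes=2)  # (203, 1) -&gt; (203, 2)
testY = to_categorical(testY, num_classes=2)    # (68, 1) -&gt; (68, 2)
</code></pre>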
you are the stackoverflow assistant
tf.data with multiple inputs / outputs in Keras
<p>I'm not using Keras but I would go with a <code>tf.data.Dataset.from_generator()</code>, like this:</p> <pre><code>def _input_fn(): sent1 = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=np.int64) sent2 = np.array([20, 25, 35, 40, 600, 30, 20, 30], dtype=np.int64) sent1 = np.reshape(sent1, (8, 1, 1)) sent2 = np.reshape(sent2, (8, 1, 1)) labels = np.array([40, 30, 20, 10, 80, 70, 50, 60], dtype=np.int64) labels = np.reshape(labels, (8, 1)) def generator(): for s1, s2, l in zip(sent1, sent2, labels): yield {"input_1": s1, "input_2": s2}, l dataset = tf.data.Dataset.from_generator(generator, output_types=({"input_1": tf.int64, "input_2": tf.int64}, tf.int64)) dataset = dataset.batch(2) return dataset ... model.fit(_input_fn(), epochs=10, steps_per_epoch=4) </code></pre> <p>This generator can iterate over your e.g. text files / numpy arrays and yield one example on every call. In this example, I assume that the words of the sentences are already converted to indices in the vocabulary.</p> <p>Edit: Since OP asked, it should also be possible with <code>Dataset.from_tensor_slices()</code>:</p> <pre><code>def _input_fn(): sent1 = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=np.int64) sent2 = np.array([20, 25, 35, 40, 600, 30, 20, 30], dtype=np.int64) sent1 = np.reshape(sent1, (8, 1)) sent2 = np.reshape(sent2, (8, 1)) labels = np.array([40, 30, 20, 10, 80, 70, 50, 60], dtype=np.int64) labels = np.reshape(labels, (8)) dataset = tf.data.Dataset.from_tensor_slices(({"input_1": sent1, "input_2": sent2}, labels)) dataset = dataset.batch(2, drop_remainder=True) return dataset </code></pre>
you are the stackoverflow assistant
How to check if keras tensorflow backend is GPU or CPU version?
<p>You can also check using a Keras backend function:</p>

<pre><code>from keras import backend as K
K.tensorflow_backend._get_available_gpus()
</code></pre>

<p>I tested this on Keras 2.1.1.</p>
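<p>Note that <code>_get_available_gpus()</code> is a private helper and was removed in later releases. On newer versions (roughly TensorFlow 2.x with its integrated Keras), a sketch of the equivalent check is:</p>

<pre><code>import tensorflow as tf

print(tf.config.list_physical_devices('GPU'))  # non-empty list means a GPU build that sees your GPU
</code></pre>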
you are the stackoverflow assistant
TensorFlow: Blas GEMM launch failed
<p>This worked for me on TensorFlow 2.1.0 (per: <a href="https://www.tensorflow.org/api_docs/python/tf/config/experimental/set_memory_growth" rel="noreferrer">https://www.tensorflow.org/api_docs/python/tf/config/experimental/set_memory_growth</a>)</p> <pre><code>import tensorflow as tf physical_devices = tf.config.list_physical_devices('GPU') for device in physical_devices: tf.config.experimental.set_memory_growth(device, True) </code></pre>
you are the stackoverflow assistant
Negative dimension size caused by subtracting 3 from 1 for &#39;Conv2D&#39;
<p>Your issue comes from the <code>image_dim_ordering</code> setting in <code>keras.json</code>.</p> <p>From <a href="https://keras.io/preprocessing/image/" rel="noreferrer">Keras Image Processing doc</a>: </p> <blockquote> <p>dim_ordering: One of {"th", "tf"}. "tf" mode means that the images should have shape (samples, height, width, channels), "th" mode means that the images should have shape (samples, channels, height, width). It defaults to the image_dim_ordering value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be "tf".</p> </blockquote> <p>Keras maps the convolution operation to the chosen backend (theano or tensorflow). However, both backends have made different choices for the ordering of the dimensions. If your image batch is of N images of HxW size with C channels, theano uses the NCHW ordering while tensorflow uses the NHWC ordering.</p> <p>Keras allows you to choose which ordering you prefer and will do the conversion to the chosen backend behind the scenes. But if you choose <code>image_dim_ordering="th"</code> it expects Theano-style ordering (NCHW, the one you have in your code) and if <code>image_dim_ordering="tf"</code> it expects tensorflow-style ordering (NHWC).</p> <p>Since your <code>image_dim_ordering</code> is set to <code>"tf"</code>, if you reshape your data to the tensorflow style it should work:</p>

<pre><code>X_train = X_train.reshape(X_train.shape[0], img_cols, img_rows, 1)
X_test = X_test.reshape(X_test.shape[0], img_cols, img_rows, 1)
</code></pre>

<p>and </p>

<pre><code>input_shape=(img_cols, img_rows, 1)
</code></pre>
you are the stackoverflow assistant
Is there a way to suppress the messages TensorFlow prints?
<p><strong>UPDATE</strong> (beyond 1.14): see my more thorough answer here (this is a dupe question anyway): <a href="https://stackoverflow.com/a/38645250/6557588">https://stackoverflow.com/a/38645250/6557588</a></p> <p>In addition to Wintro's answer, you can also disable/suppress TensorFlow logs from the C side (i.e. the uglier ones starting with single characters: I, E, etc.); the <a href="https://github.com/tensorflow/tensorflow/issues/1258" rel="noreferrer">issue</a> open regarding logging has been updated to state that you can now control logging via an environmental variable. You can now change the level by setting the environmental variable called <code>TF_CPP_MIN_LOG_LEVEL</code>; it defaults to 0 (all logs shown), but can be set to 1 to filter out <code>INFO</code> logs, 2 to additionally filter out <code>WARNING</code> logs, and 3 to additionally filter out <code>ERROR</code> logs. It appears to be in master now, and will likely be a part of future version (i.e. versions after r0.11). See <a href="https://github.com/tensorflow/tensorflow/issues/1258" rel="noreferrer">this page</a> for more information. Here is an example of changing the verbosity using Python:</p> <pre><code>import os os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' # or any {'0', '1', '2'} import tensorflow as tf </code></pre> <p>You can set this environmental variable in the environment that you run your script in. For example, with bash this can be in the file <code>~/.bashrc</code>, <code>/etc/environment</code>, <code>/etc/profile</code>, or in the actual shell as:</p> <pre class="lang-sh prettyprint-override"><code>TF_CPP_MIN_LOG_LEVEL=2 python my_tf_script.py </code></pre>
you are the stackoverflow assistant
tf.shape() get wrong shape in tensorflow
<p><a href="https://www.tensorflow.org/versions/r0.8/api_docs/python/array_ops.html#shape" rel="noreferrer">tf.shape(input, name=None)</a> returns a 1-D integer tensor representing the shape of input.</p> <p>You're looking for: <code>x.get_shape()</code> that returns the <code>TensorShape</code> of the <code>x</code> variable.</p> <p>Update: I wrote an article to clarify the dynamic/static shapes in Tensorflow because of this answer: <a href="https://pgaleone.eu/tensorflow/2018/07/28/understanding-tensorflow-tensors-shape-static-dynamic/" rel="noreferrer">https://pgaleone.eu/tensorflow/2018/07/28/understanding-tensorflow-tensors-shape-static-dynamic/</a></p>
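<p>A minimal sketch contrasting the two (assuming a 2-D float placeholder; TF 1.x API):</p>

<pre><code>import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 10])
print(x.get_shape())    # static shape: (?, 10), a TensorShape object

shape_op = tf.shape(x)  # dynamic shape: a 1-D int32 tensor, evaluated at runtime
with tf.Session() as sess:
    print(sess.run(shape_op, feed_dict={x: np.zeros((3, 10))}))  # [ 3 10]
</code></pre>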
you are the stackoverflow assistant
How to interpret Poolallocator messages in tensorflow?
<p>TensorFlow has multiple memory allocators, for memory that will be used in different ways. Their behavior has some adaptive aspects.</p> <p>In your particular case, since you're using a GPU, there is a PoolAllocator for CPU memory that is pre-registered with the GPU for fast DMA. A tensor that is expected to be transferred from CPU to GPU, e.g., will be allocated from this pool.</p> <p>The PoolAllocators attempt to amortize the cost of calling a more expensive underlying allocator by keeping around a pool of allocated then freed chunks that are eligible for immediate reuse. Their default behavior is to grow slowly until the eviction rate drops below some constant. (The eviction rate is the proportion of free calls where we return an unused chunk from the pool to the underlying pool in order not to exceed the size limit.) In the log messages above, you see "Raising pool_size_limit_" lines that show the pool size growing. Assuming that your program actually has a steady state behavior with a maximum size collection of chunks it needs, the pool will grow to accommodate it, and then grow no more. It behaves this way rather than simply retaining all chunks ever allocated so that sizes needed only rarely, or only during program startup, are less likely to be retained in the pool.</p> <p>These messages should only be a cause for concern if you run out of memory. In such a case the log messages may help diagnose the problem. Note also that peak execution speed may only be attained after the memory pools have grown to the proper size.</p>
you are the stackoverflow assistant
WARNING:tensorflow:sample_weight modes were coerced from ... to [&#39;...&#39;]
<p>This seems like a bogus message. I get the same warning message after upgrading to TensorFlow 2.1, but I do not use any class weights or sample weights at all. I do use a generator that returns a tuple like this:</p> <pre><code>return inputs, targets </code></pre> <p>And now I just changed it to the following to make the warning go away:</p> <pre><code>return inputs, targets, [None] </code></pre> <p>I don't know if this is relevant, but my model uses 3 inputs, so my <code>inputs</code> variable is actually a list of 3 numpy arrays. <code>targets</code> is just a single numpy array.</p> <p>In any case, it's just a warning. The training works fine either way.</p> <h1>Edit for TensorFlow 2.2:</h1> <p>This bug seems to have been fixed in TensorFlow 2.2, which is great. However the fix above will fail in TF 2.2, because it will try to get the shape of the sample weights, which will obviously fail with <code>AttributeError: 'NoneType' object has no attribute 'shape'</code>. So undo the above fix when upgrading to 2.2.</p>
you are the stackoverflow assistant
Tensorflow Data Adapter Error: ValueError: Failed to find data adapter that can handle input
<p>Have you checked whether your training/testing data and training/testing labels are all numpy arrays? It might be that you're mixing numpy arrays with lists. </p>
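<p>A sketch of the usual fix (the variable names are hypothetical), converting everything to numpy arrays before calling <code>fit</code>:</p>

<pre><code>import numpy as np

x_train = np.asarray(x_train)  # no-op if it is already an ndarray
y_train = np.asarray(y_train)

model.fit(x_train, y_train, epochs=10)
</code></pre>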
you are the stackoverflow assistant
This model has not yet been built error on model.summary()
<p>The error says what to do:</p> <blockquote> <p>This model has not yet been built. Build the model first by calling <code>build()</code></p> </blockquote> <pre class="lang-py prettyprint-override"><code>model.build(input_shape) # `input_shape` is the shape of the input data # e.g. input_shape = (None, 32, 32, 3) model.summary() </code></pre>
you are the stackoverflow assistant
Why can I not import Tensorflow.contrib I get an error of No module named &#39;tensorflow.python.saved
<p>For anyone who is trying some old code from <strong><em>github</em></strong> written for <code>Tensorflow 1.x.x</code> versions while having <code>Tensorflow 2.0.x</code>, please note that <code>tf.contrib</code> no longer exists in <code>Tensorflow 2.0.x</code> and its modules were moved.<br> Please google the name of the module without the <code>tf.contrib</code> part to find its new location, and migrate your code accordingly by correcting the <code>import</code> statement.</p> <p><em>Hope this helped!</em></p>
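<p>One concrete mapping, for illustration (this particular equivalence is one I'm fairly sure about) - the commonly used Xavier initializer moved out of contrib:</p>

<pre><code># TensorFlow 1.x
init = tf.contrib.layers.xavier_initializer()

# TensorFlow 2.x equivalent
init = tf.keras.initializers.GlorotUniform()
</code></pre>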
you are the stackoverflow assistant
Keras ValueError: Input 0 is incompatible with layer conv2d_1: expected ndim=4, found ndim=5
<p>The problem is <code>input_shape</code>. </p> <p>It should actually contain 3 dimensions only. And internally keras will add the batch dimension making it 4. </p> <p>Since you probably used <code>input_shape</code> with 4 dimensions (batch included), keras is adding the 5th. </p> <p>You should use <code>input_shape=(32,32,1)</code>.</p>
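<p>A minimal sketch of a corrected layer definition, assuming 32x32 grayscale inputs:</p>

<pre><code>from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D

model = Sequential()
# only 3 dimensions: (height, width, channels); Keras adds the batch dimension itself
model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 1)))
</code></pre>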
you are the stackoverflow assistant
Tensorflow dense gradient explanation?
<p>This warning is printed when a sparse <a href="https://www.tensorflow.org/api_docs/python/tf/IndexedSlices" rel="noreferrer"><code>tf.IndexedSlices</code></a> object is implicitly converted to a dense <a href="https://www.tensorflow.org/api_docs/python/tf/Tensor" rel="noreferrer"><code>tf.Tensor</code></a>. This typically happens when one op (usually <a href="https://www.tensorflow.org/api_docs/python/tf/gather" rel="noreferrer"><code>tf.gather()</code></a>) backpropagates a sparse gradient, but the op that receives it does not have a specialized gradient function that can handle sparse gradients. As a result, TensorFlow automatically densifies the <code>tf.IndexedSlices</code>, which can have a devastating effect on performance if the tensor is large. </p> <p>To fix this problem, you should try to ensure that the <code>params</code> input to <code>tf.gather()</code> (or the <code>params</code> inputs to <a href="https://www.tensorflow.org/versions/r0.7/api_docs/python/nn.html#embedding_lookup" rel="noreferrer"><code>tf.nn.embedding_lookup()</code></a>) is a <a href="https://www.tensorflow.org/api_docs/python/tf/Variable" rel="noreferrer"><code>tf.Variable</code></a>. Variables can receive the sparse updates directly, so no conversion is needed. Although <code>tf.gather()</code> (and <code>tf.nn.embedding_lookup()</code>) accept arbitrary tensors as inputs, this may lead to a more complicated backpropagation graph, resulting in implicit conversion.</p>
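<p>A sketch of the variable-backed pattern that avoids the warning (the sizes are hypothetical; TF 1.x API):</p>

<pre><code>vocab_size, embed_dim = 10000, 128

# a tf.Variable, not a plain Tensor, so it can receive sparse updates directly
params = tf.Variable(tf.random_uniform([vocab_size, embed_dim]))
ids = tf.placeholder(tf.int32, shape=[None])
embedded = tf.nn.embedding_lookup(params, ids)  # gradient stays a sparse IndexedSlices
</code></pre>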
you are the stackoverflow assistant
List of tensor names in graph in Tensorflow
<p>The paper is not accurately reflecting the model. If you download the source from arxiv it has an accurate model description as model.txt, and the names in there correlate strongly with the names in the released model.</p> <p>To answer your first question, <code>sess.graph.get_operations()</code> gives you a list of operations. For an op, <code>op.name</code> gives you the name and <code>op.values()</code> gives you a list of tensors it produces (in the inception-v3 model, all tensor names are the op name with a ":0" appended to it, so <code>pool_3:0</code> is the tensor produced by the final pooling op.)</p>
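<p>Putting that together, a short sketch that prints every op and tensor name (assuming the graph has already been loaded into the session):</p>

<pre><code>with tf.Session() as sess:
    for op in sess.graph.get_operations():
        print(op.name)                 # operation name
        for tensor in op.values():
            print('  ', tensor.name)   # e.g. 'pool_3:0'
</code></pre>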
you are the stackoverflow assistant
How do I convert a directory of jpeg images to TFRecords file in tensorflow?
<p>I hope this helps:</p>

<pre class="lang-py prettyprint-override"><code>filename_queue = tf.train.string_input_producer(['/Users/HANEL/Desktop/tf.png']) #  list of files to read

reader = tf.WholeFileReader()
key, value = reader.read(filename_queue)

my_img = tf.image.decode_png(value) # use decode_png or decode_jpeg decoder based on your files.

init_op = tf.initialize_all_variables()
with tf.Session() as sess:
  sess.run(init_op)

  # Start populating the filename queue.
  coord = tf.train.Coordinator()
  threads = tf.train.start_queue_runners(coord=coord)

  for i in range(1): #length of your filename list
    image = my_img.eval() #here is your image Tensor :)

  print(image.shape)
  Image.show(Image.fromarray(np.asarray(image)))

  coord.request_stop()
  coord.join(threads)
</code></pre>

<p>To get all images as an array of tensors, use the following code example. </p> <p><a href="https://github.com/HamedMP/ImageFlow" rel="noreferrer">Github repo of ImageFlow</a></p> <hr> <p>Update:</p> <p>In the previous answer I only showed how to read an image in TF format, but not how to save it in TFRecords. For that you should use:</p>

<pre class="lang-py prettyprint-override"><code>def _int64_feature(value):
  return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))


def _bytes_feature(value):
  return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))


# images and labels array as input
def convert_to(images, labels, name):
  num_examples = labels.shape[0]
  if images.shape[0] != num_examples:
    raise ValueError("Images size %d does not match label size %d." %
                     (images.shape[0], num_examples))
  rows = images.shape[1]
  cols = images.shape[2]
  depth = images.shape[3]

  filename = os.path.join(FLAGS.directory, name + '.tfrecords')
  print('Writing', filename)
  writer = tf.python_io.TFRecordWriter(filename)
  for index in range(num_examples):
    image_raw = images[index].tostring()
    example = tf.train.Example(features=tf.train.Features(feature={
        'height': _int64_feature(rows),
        'width': _int64_feature(cols),
        'depth': _int64_feature(depth),
        'label': _int64_feature(int(labels[index])),
        'image_raw': _bytes_feature(image_raw)}))
    writer.write(example.SerializeToString())
</code></pre>

<p>More info <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/how_tos/reading_data/convert_to_records.py" rel="noreferrer">here</a></p> <p>And you read the data like this:</p>

<pre class="lang-py prettyprint-override"><code># Remember to generate a file name queue of your 'train.TFRecord' file path
def read_and_decode(filename_queue):
  reader = tf.TFRecordReader()
  _, serialized_example = reader.read(filename_queue)
  features = tf.parse_single_example(
      serialized_example,
      dense_keys=['image_raw', 'label'],
      # Defaults are not specified since both keys are required.
      dense_types=[tf.string, tf.int64])

  # Convert from a scalar string tensor (whose single string has
  image = tf.decode_raw(features['image_raw'], tf.uint8)

  image = tf.reshape(image, [my_cifar.n_input])
  image.set_shape([my_cifar.n_input])

  # OPTIONAL: Could reshape into a 28x28 image and apply distortions
  # here.  Since we are not applying any distortions in this
  # example, and the next step expects the image to be flattened
  # into a vector, we don't bother.

  # Convert from [0, 255] -&gt; [-0.5, 0.5] floats.
  image = tf.cast(image, tf.float32)
  image = tf.cast(image, tf.float32) * (1. / 255) - 0.5

  # Convert label from a scalar uint8 tensor to an int32 scalar.
  label = tf.cast(features['label'], tf.int32)

  return image, label
</code></pre>
you are the stackoverflow assistant
How do I install TensorFlow&#39;s tensorboard?
<p>The steps to install Tensorflow are here: <a href="https://www.tensorflow.org/install/" rel="noreferrer">https://www.tensorflow.org/install/</a></p> <p>For example, on Linux for CPU-only (no GPU), you would type this command:</p> <pre><code>pip install -U pip pip install tensorflow </code></pre> <p>Since <a href="https://pypi.python.org/pypi/tensorflow" rel="noreferrer">TensorFlow</a> depends on <a href="https://pypi.python.org/pypi/tensorboard" rel="noreferrer">TensorBoard</a>, running the following command should <strong>not</strong> be necessary:</p> <pre><code>pip install tensorboard </code></pre>
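<p>Once installed, you start TensorBoard by pointing it at your log directory (the path here is hypothetical):</p>

<pre><code>tensorboard --logdir=/tmp/my_logs
# then open http://localhost:6006 in your browser
</code></pre>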
you are the stackoverflow assistant
What is the use of a *.pb file in TensorFlow and how does it work?
<p><code>pb</code> stands for protobuf. In TensorFlow, the protobuf file contains the graph definition as well as the weights of the model. Thus, a <code>pb</code> file is all you need to be able to run a given trained model.</p> <p>Given a <code>pb</code> file, you can load it as follows:</p>

<pre><code>def load_pb(path_to_pb):
    with tf.gfile.GFile(path_to_pb, &quot;rb&quot;) as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
    with tf.Graph().as_default() as graph:
        tf.import_graph_def(graph_def, name='')
        return graph
</code></pre>

<p>Once you have loaded the graph, you can basically do anything. For instance, you can retrieve tensors of interest with</p>

<pre><code>input = graph.get_tensor_by_name('input:0')
output = graph.get_tensor_by_name('output:0')
</code></pre>

<p>and use regular TensorFlow routines like:</p>

<pre><code>sess.run(output, feed_dict={input: some_data})
</code></pre>
you are the stackoverflow assistant
Why is the accuracy for my Keras model always 0 when training?
<p>Your model seems to correspond to a regression model for the following reasons: </p> <ul> <li><p>You are using <code>linear</code> (the default one) as an activation function in the output layer (and <code>relu</code> in the layer before).</p></li> <li><p>Your loss is <code>loss='mean_squared_error'</code>. </p></li> </ul> <p>However, the metric that you use- <code>metrics=['accuracy']</code> corresponds to a classification problem. If you want to do regression, remove <code>metrics=['accuracy']</code>. That is, use</p> <pre><code>model.compile(optimizer='adam',loss='mean_squared_error') </code></pre> <p>Here is a list of keras metrics for regression and classification (taken from <a href="http://machinelearningmastery.com/custom-metrics-deep-learning-keras-python/" rel="noreferrer">this blog post</a>):</p> <blockquote> <p><strong>Keras Regression Metrics</strong></p> <p>•Mean Squared Error: mean_squared_error, MSE or mse </p> <p>•Mean Absolute Error: mean_absolute_error, MAE, mae </p> <p>•Mean Absolute Percentage Error: mean_absolute_percentage_error, MAPE, mape </p> <p>•Cosine Proximity: cosine_proximity, cosine</p> <p><strong>Keras Classification Metrics</strong></p> <p>•Binary Accuracy: binary_accuracy, acc</p> <p>•Categorical Accuracy: categorical_accuracy, acc</p> <p>•Sparse Categorical Accuracy: sparse_categorical_accuracy</p> <p>•Top k Categorical Accuracy: top_k_categorical_accuracy (requires you specify a k parameter)</p> <p>•Sparse Top k Categorical Accuracy: sparse_top_k_categorical_accuracy (requires you specify a k parameter)</p> </blockquote>
you are the stackoverflow assistant
How to approach a number guessing game (with a twist) algorithm?
<p>We'll combine graph theory and probability:</p> <p>On the 1st day, build a set of all feasible solutions. Let's denote the solution set as A1={a1(1), a1(2),...,a1(n)}.</p> <p>On the second day you can again build the solution set A2.</p> <p>Now, for each element in A2, you'll need to check if it can be reached from each element of A1 (given x% tolerance). If so - connect A2(n) to A1(m). If it can't be reached from any node in A1 - you can delete this node.</p> <p>Basically we are building a connected directed acyclic graph.</p> <p>All paths in the graph are equally likely. You can find an exact solution only when there is a single edge from Am to Am+1 (from a node in Am to a node in Am+1).</p> <p>Sure, some nodes appear in more paths than others. The probability for each node can be directly deduced based on the number of paths that contain this node; a sketch of this counting appears below.</p> <p>By assigning a weight to each node, equal to the number of paths that lead to this node, there is no need to keep the whole history, but only the previous day.</p> <p>Also, have a look at <a href="https://stackoverflow.com/questions/1467907/algorithm-to-determine-non-negative-values-solution-existance-for-linear-diophant">non-negative-valued linear Diophantine equations</a> - a question I asked a while ago. The accepted answer is a great way to enumerate all combos in each step.</p>
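<p>A minimal sketch of the path-counting idea (generic; <code>layers[d]</code> is assumed to hold the feasible states on day d, and <code>reachable(a, b)</code> is your x%-tolerance check - both are assumptions about how you represent the problem):</p>

<pre><code>def path_counts(layers, reachable):
    """For each state on each day, count the full paths through it (forward * backward)."""
    n_days = len(layers)
    fwd = [[1] * len(layers[0])]  # number of paths from day 0 to each state
    for d in range(1, n_days):
        fwd.append([sum(f for f, a in zip(fwd[-1], layers[d - 1]) if reachable(a, b))
                    for b in layers[d]])
    bwd = [[1] * len(layers[-1])]  # number of paths from each state to the last day
    for d in range(n_days - 2, -1, -1):
        bwd.insert(0, [sum(g for g, b in zip(bwd[0], layers[d + 1]) if reachable(a, b))
                       for a in layers[d]])
    # weight of a state = complete paths through it; divide by a day's total for a probability
    return [[f * g for f, g in zip(fs, gs)] for fs, gs in zip(fwd, bwd)]
</code></pre>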
you are the stackoverflow assistant
&quot;Could not load dynamic library &#39;libcudnn.so.8&#39;&quot; when running tensorflow on ubuntu 20.04
<p>So I had the same issue. As the comments say, it's because you need to install CUDNN. For that, there is a guide <a href="https://docs.nvidia.com/deeplearning/cudnn/install-guide/index.html" rel="noreferrer">here</a>.</p> <p>But as I already know your distro (Ubuntu 20.04), I can give you the command lines right away:</p>

<pre><code>wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-ubuntu2004.pin
sudo mv cuda-ubuntu2004.pin /etc/apt/preferences.d/cuda-repository-pin-600

export last_public_key=3bf863cc # SEE NOTE BELOW
sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/${last_public_key}.pub
sudo add-apt-repository &quot;deb https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/ /&quot;
sudo apt-get update
sudo apt-get install libcudnn8
sudo apt-get install libcudnn8-dev
</code></pre>

<p>where <code>${last_public_key}</code> is the last public key (file with <code>.pub</code> extension) published on <a href="https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/" rel="noreferrer">https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/</a>. (As of March 8th 2023, when this post was edited, it was <code>3bf863cc</code>.)</p> <p>And if you want to install a specific version, the last 2 commands would be replaced with</p>

<pre><code>sudo apt-get install libcudnn8=${cudnn_version}-1+${cuda_version}
sudo apt-get install libcudnn8-dev=${cudnn_version}-1+${cuda_version}
</code></pre>

<p>where <code>${cudnn_version}</code> is for example <code>8.2.4.*</code> and <code>${cuda_version}</code> is for example <code>cuda11.0</code> (as I see you have 11.0 in the <code>nvidia-smi</code> output; I have not tested that combination, as mine was 11.4, but I guess it should work OK).</p>
you are the stackoverflow assistant
tensorflow on GPU: no known devices, despite cuda&#39;s deviceQuery returning a &quot;PASS&quot; result
<p>From the log output, it looks like you are running the CPU version of TensorFlow (PyPI: <a href="https://pypi.python.org/pypi/tensorflow" rel="noreferrer"><code>tensorflow</code></a>), and not the GPU version (PyPI: <a href="https://pypi.python.org/pypi/tensorflow-gpu" rel="noreferrer"><code>tensorflow-gpu</code></a>). Running the GPU version would either log information about the CUDA libraries, or an error if it failed to load them or open the driver.</p> <p>If you run the following commands, you should be able to use the GPU in subsequent runs:</p> <pre><code>$ pip uninstall tensorflow $ pip install tensorflow-gpu </code></pre>
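<p>After reinstalling, a quick sanity check with the era-appropriate API to confirm the GPU is now visible:</p>

<pre><code>from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())  # should list a '/gpu:0' device alongside the CPU
</code></pre>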
you are the stackoverflow assistant
ImportError: libcublas.so.10.0: cannot open shared object file: No such file or directory
<p>I downloaded cuda 10.0 from the following link <a href="https://developer.nvidia.com/cuda-10.0-download-archive?target_os=Linux&amp;target_arch=x86_64&amp;target_distro=Ubuntu&amp;target_version=1804&amp;target_type=debnetwork" rel="noreferrer">CUDA 10.0</a>.</p> <p>Then I installed it using the following commands:</p>

<pre><code>sudo dpkg -i cuda-repo-ubuntu1804_10.0.130-1_amd64.deb
sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/7fa2af80.pub
sudo apt-get update
sudo apt-get install cuda-10-0
</code></pre>

<p>I then installed cuDNN v7.5.0 for CUDA 10.0 by going to the link <a href="https://developer.nvidia.com/rdp/cudnn-download" rel="noreferrer">CUDNN download</a>; you need to log on using an account.</p> <p>After choosing the correct version, I downloaded it via the link <a href="https://developer.nvidia.com/compute/machine-learning/cudnn/secure/v7.5.0.56/prod/10.0_20190219/cudnn-10.0-linux-ppc64le-v7.5.0.56.tgz" rel="noreferrer">CUDNN power link</a>. After that, I added the include and lib files for cuDNN as follows:</p>

<pre><code>sudo cp -P cuda/targets/ppc64le-linux/include/cudnn.h /usr/local/cuda-10.0/include/
sudo cp -P cuda/targets/ppc64le-linux/lib/libcudnn* /usr/local/cuda-10.0/lib64/
sudo chmod a+r /usr/local/cuda-10.0/lib64/libcudnn*
</code></pre>

<p>Then I modified .bashrc with the lib and bin paths of CUDA 10.0; if these lines are not present, you need to add them to .bashrc:</p>

<pre><code>export PATH=/usr/local/cuda-10.0/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-10.0/lib64:${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
</code></pre>

<p>And after all these steps, I managed to import tensorflow in python3 successfully.</p>
you are the stackoverflow assistant
Tensorflow Allocation Memory: Allocation of 38535168 exceeds 10% of system memory
<p>Try reducing the <code>batch_size</code> attribute to a small number (like 1, 2 or 3). Example:</p>

<pre><code>train_generator = data_generator.flow_from_directory(
    'path_to_the_training_set',
    target_size = (IMG_SIZE,IMG_SIZE),
    batch_size = 2,
    class_mode = 'categorical'
    )
</code></pre>
you are the stackoverflow assistant
What is a batch in TensorFlow?
<p>Let's say you want to do digit recognition (MNIST) and you have defined your architecture of the network (CNNs). Now, you can start feeding the images from the training data one by one to the network, get the prediction (up to this step it's called doing <em>inference</em>), compute the loss, compute the gradient, and then update the parameters of your network (i.e. <em>weights</em> and <em>biases</em>) and then proceed with the next image ... This way of training the model is sometimes called <em>online learning</em>.</p> <p>But, you want the training to be faster, the gradients to be less noisy, and also take advantage of the power of GPUs which are efficient at doing array operations (<em>nD-arrays</em> to be specific). So, what you instead do is feed in <strong>say 100 images at a time</strong> (the choice of this size is up to you (i.e. it's a <em>hyperparameter</em>) and depends on your problem too). For instance, take a look at the below picture, (Author: Martin Gorner)</p> <p><a href="https://i.sstatic.net/8FzdQ.png" rel="noreferrer"><img src="https://i.sstatic.net/8FzdQ.png" alt="Batch size of 100"></a></p> <p>Here, since you're feeding in 100 images (<code>28x28</code>) at a time (instead of 1 as in the online training case), the <strong>batch size is 100</strong>. Oftentimes this is called the <em>mini-batch size</em> or simply a <code>mini-batch</code>.</p> <hr> <p>Also the below picture: (Author: Martin Gorner)</p> <p><a href="https://i.sstatic.net/vncAa.png" rel="noreferrer"><img src="https://i.sstatic.net/vncAa.png" alt="batch size again"></a></p> <p>Now, the matrix multiplication will all just work out perfectly fine and you will also be taking advantage of the highly optimized array operations and hence achieve faster <em>training</em> time.</p> <p>If you observe the above picture, it doesn't matter that much whether you give 100 or 256 or 2048 or 10000 (<em>batch size</em>) images as long as it fits in the memory of your (GPU) hardware. You'll simply get that many predictions.</p> <p>But, please keep in mind that this <em>batch size</em> influences the training time, the error that you achieve, the gradient shifts etc. There is no general rule of thumb as to which batch size works out best. Just try a few sizes and pick the one which works best for you. But try not to use large batch sizes since it will overfit the data. People commonly use mini-batch sizes of <code>32, 64, 128, 256, 512, 1024, 2048</code>.</p> <hr> <p><strong>Bonus</strong>: To get a good grasp of how crazy you can go with this batch size, please give this paper a read: <a href="https://arxiv.org/pdf/1404.5997.pdf" rel="noreferrer">weird trick for parallelizing CNNs</a></p>
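<p>In code, the batch size usually shows up in the input pipeline. A minimal sketch with <code>tf.data</code> (the <code>images</code>/<code>labels</code> numpy arrays are hypothetical):</p>

<pre><code>dataset = tf.data.Dataset.from_tensor_slices((images, labels))
dataset = dataset.shuffle(10000).batch(100)  # batch size = 100 examples per training step
</code></pre>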
you are the stackoverflow assistant
Adjust Single Value within Tensor -- TensorFlow
<p><strong>UPDATE:</strong> TensorFlow 1.0 includes a <a href="https://www.tensorflow.org/api_docs/python/tf/scatter_nd" rel="noreferrer"><code>tf.scatter_nd()</code></a> operator, which can be used to create <code>delta</code> below without creating a <code>tf.SparseTensor</code>.</p> <hr> <p>This is actually surprisingly tricky with the existing ops! Perhaps somebody can suggest a nicer way to wrap up the following, but here's one way to do it.</p> <p>Let's say you have a <code>tf.constant()</code> tensor:</p> <pre><code>c = tf.constant([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]) </code></pre> <p>...and you want to add <code>1.0</code> at location [1, 1]. One way you could do this is to define a <a href="https://www.tensorflow.org/versions/master/api_docs/python/sparse_ops.html#SparseTensor" rel="noreferrer"><code>tf.SparseTensor</code></a>, <code>delta</code>, representing the change:</p> <pre><code>indices = [[1, 1]] # A list of coordinates to update. values = [1.0] # A list of values corresponding to the respective # coordinate in indices. shape = [3, 3] # The shape of the corresponding dense tensor, same as `c`. delta = tf.SparseTensor(indices, values, shape) </code></pre> <p>Then you can use the <a href="https://www.tensorflow.org/versions/master/api_docs/python/sparse_ops.html#sparse_tensor_to_dense" rel="noreferrer"><code>tf.sparse_tensor_to_dense()</code></a> op to make a dense tensor from <code>delta</code> and add it to <code>c</code>:</p> <pre><code>result = c + tf.sparse_tensor_to_dense(delta) sess = tf.Session() sess.run(result) # ==&gt; array([[ 0., 0., 0.], # [ 0., 1., 0.], # [ 0., 0., 0.]], dtype=float32) </code></pre>
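<p>With the <code>tf.scatter_nd()</code> op mentioned in the update, the same <code>delta</code> can be built without a <code>SparseTensor</code> - a sketch:</p>

<pre><code>indices = tf.constant([[1, 1]])  # coordinates to update
updates = tf.constant([1.0])     # values for those coordinates
delta = tf.scatter_nd(indices, updates, shape=[3, 3])
result = c + delta               # same result as the SparseTensor version
</code></pre>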
you are the stackoverflow assistant
What do I need K.clear_session() and del model for (Keras with Tensorflow-gpu)?
<p><code>K.clear_session()</code> is useful when you're creating multiple models in succession, such as during hyperparameter search or cross-validation. Each model you train adds nodes (potentially numbering in the thousands) to the graph. TensorFlow executes the entire graph whenever you (or Keras) call <code>tf.Session.run()</code> or <code>tf.Tensor.eval()</code>, so your models will become slower and slower to train, and you may also run out of memory. Clearing the session removes all the nodes left over from previous models, freeing memory and preventing slowdown.</p> <hr> <p><strong>Edit 21/06/19:</strong></p> <p>TensorFlow is lazy-evaluated by default. TensorFlow operations aren't evaluated immediately: creating a tensor or doing some operations to it creates nodes in a dataflow graph. The results are calculated by evaluating the relevant parts of the graph in one go when you call <code>tf.Session.run()</code> or <code>tf.Tensor.eval()</code>. This is so TensorFlow can build an execution plan that allocates operations that can be performed in parallel to different devices. It can also fold adjacent nodes together or remove redundant ones (e.g. if you concatenated two tensors and later split them apart again unchanged). For more details, see <a href="https://www.tensorflow.org/guide/graphs" rel="noreferrer">https://www.tensorflow.org/guide/graphs</a></p> <p>All of your TensorFlow models are stored in the graph as a series of tensors and tensor operations. The basic operation of machine learning is tensor dot product - the output of a neural network is the dot product of the input matrix and the network weights. If you have a single-layer perceptron and 1,000 training samples, then each epoch creates at least 1,000 tensor operations. If you have 1,000 epochs, then your graph contains at least 1,000,000 nodes at the end, before taking into account preprocessing, postprocessing, and more complex models such as recurrent nets, encoder-decoder, attentional models, etc.</p> <p>The problem is that eventually the graph would be too large to fit into video memory (6 GB in my case), so TF would shuttle parts of the graph from video to main memory and back. Eventually it would even get too large for main memory (12 GB) and start moving between main memory and the hard disk. Needless to say, this made things incredibly, and increasingly, slow as training went on. Before developing this save-model/clear-session/reload-model flow, I calculated that, at the per-epoch rate of slowdown I experienced, my model would have taken longer than the age of the universe to finish training. </p> <blockquote> <p>Disclaimer: I haven't used TensorFlow in almost a year, so this might have changed. I remember there being quite a few GitHub issues around this so hopefully it has since been fixed.</p> </blockquote>
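<p>A sketch of the pattern in a model-search loop (<code>build_model</code>, <code>param_grid</code> and the data names are hypothetical):</p>

<pre><code>from keras import backend as K

for params in param_grid:
    model = build_model(params)        # hypothetical factory that builds and compiles a model
    model.fit(x_train, y_train, epochs=5)
    score = model.evaluate(x_val, y_val)
    print(params, score)
    del model
    K.clear_session()                  # drop the graph nodes left over from this model
</code></pre>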
you are the stackoverflow assistant
What are possible values for data_augmentation_options in the TensorFlow Object Detection pipeline configuration?
<p>The list of options is provided in <a href="https://github.com/tensorflow/models/blob/master/research/object_detection/protos/preprocessor.proto" rel="noreferrer">preprocessor.proto</a>: </p> <pre><code>NormalizeImage normalize_image = 1; RandomHorizontalFlip random_horizontal_flip = 2; RandomPixelValueScale random_pixel_value_scale = 3; RandomImageScale random_image_scale = 4; RandomRGBtoGray random_rgb_to_gray = 5; RandomAdjustBrightness random_adjust_brightness = 6; RandomAdjustContrast random_adjust_contrast = 7; RandomAdjustHue random_adjust_hue = 8; RandomAdjustSaturation random_adjust_saturation = 9; RandomDistortColor random_distort_color = 10; RandomJitterBoxes random_jitter_boxes = 11; RandomCropImage random_crop_image = 12; RandomPadImage random_pad_image = 13; RandomCropPadImage random_crop_pad_image = 14; RandomCropToAspectRatio random_crop_to_aspect_ratio = 15; RandomBlackPatches random_black_patches = 16; RandomResizeMethod random_resize_method = 17; ScaleBoxesToPixelCoordinates scale_boxes_to_pixel_coordinates = 18; ResizeImage resize_image = 19; SubtractChannelMean subtract_channel_mean = 20; SSDRandomCrop ssd_random_crop = 21; SSDRandomCropPad ssd_random_crop_pad = 22; SSDRandomCropFixedAspectRatio ssd_random_crop_fixed_aspect_ratio = 23; </code></pre> <p>You can see the details about each option in <a href="https://github.com/tensorflow/models/blob/master/research/object_detection/core/preprocessor.py" rel="noreferrer">preprocessor.py</a>. Arguments can be provided as key-value pairs.</p> <pre><code> data_augmentation_options { ssd_random_crop { } } data_augmentation_options { random_pixel_value_scale { minval: 0.6 } } </code></pre>
you are the stackoverflow assistant
How to set layer-wise learning rate in Tensorflow?
<p>It can be achieved quite easily with 2 optimizers:</p>

<pre><code>var_list1 = [variables from first 5 layers]
var_list2 = [the rest of variables]
train_op1 = GradientDescentOptimizer(0.00001).minimize(loss, var_list=var_list1)
train_op2 = GradientDescentOptimizer(0.0001).minimize(loss, var_list=var_list2)
train_op = tf.group(train_op1, train_op2)
</code></pre>

<p>One disadvantage of this implementation is that it computes tf.gradients(.) twice inside the optimizers and thus it might not be optimal in terms of execution speed. This can be mitigated by explicitly calling tf.gradients(.), splitting the list into 2 and passing the corresponding gradients to both optimizers.</p> <p>Related question: <a href="https://stackoverflow.com/questions/34477889/holding-variables-constant-during-optimizer/34478044#34478044">Holding variables constant during optimizer</a></p> <p>EDIT: Added a more efficient but longer implementation:</p>

<pre><code>var_list1 = [variables from first 5 layers]
var_list2 = [the rest of variables]

opt1 = tf.train.GradientDescentOptimizer(0.00001)
opt2 = tf.train.GradientDescentOptimizer(0.0001)
grads = tf.gradients(loss, var_list1 + var_list2)
grads1 = grads[:len(var_list1)]
grads2 = grads[len(var_list1):]
train_op1 = opt1.apply_gradients(zip(grads1, var_list1))
train_op2 = opt2.apply_gradients(zip(grads2, var_list2))
train_op = tf.group(train_op1, train_op2)
</code></pre>

<p>You can use <code>tf.trainable_variables()</code> to get all training variables and decide to select from them. The difference is that in the first implementation <code>tf.gradients(.)</code> is called twice inside the optimizers. This may cause some redundant operations to be executed (e.g. gradients on the first layer can reuse some computations for the gradients of the following layers).</p>
you are the stackoverflow assistant
Tensorflow installation error: not a supported wheel on this platform
<p>I too got the same problem.</p> <p>I downloaded <code>get-pip.py</code> from <em><a href="https://bootstrap.pypa.io/get-pip.py" rel="nofollow noreferrer">https://bootstrap.pypa.io/get-pip.py</a></em> and then ran <code>python2.7 get-pip.py</code> for installing <code>pip2.7</code>.</p> <p>And then ran the <code>pip install</code> command with <code>python2.7</code> as follows.</p> <p><strong>For Ubuntu/Linux:</strong></p> <pre class="lang-none prettyprint-override"><code>python2.7 -m pip install https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.5.0-cp27-none-linux_x86_64.whl </code></pre> <p><strong>For Mac OS X:</strong></p> <pre class="lang-none prettyprint-override"><code>python2.7 -m pip install https://storage.googleapis.com/tensorflow/mac/tensorflow-0.5.0-py2-none-any.whl </code></pre> <p>This should work just fine as it did for me :)</p> <p>I followed these instructions from <a href="https://askubuntu.com/questions/695981/platform-not-supported-for-tensorflow-on-ubuntu-14-04-2">here</a>.</p>
you are the stackoverflow assistant
Is Tensorflow compatible with a Windows workflow?
<p><strong>Updated 11/28/2016:</strong> Today we released the first release candidate of TensorFlow 0.12, which includes support for Windows. You can install the Python bindings using the following command in a Python shell:</p> <pre><code>C:\&gt; pip install tensorflow </code></pre> <p>...or, if you want GPU support:</p> <pre><code>C:\&gt; pip install tensorflow-gpu </code></pre> <p>You can also build TensorFlow yourself using Microsoft Visual C++ and NVCC (for the CUDA parts). The easiest way to build on Windows is currently to use the <a href="https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/cmake">CMake build</a>, and we will soon provide support for <a href="https://www.bazel.io/versions/master/docs/windows.html">Bazel on Windows</a>.</p> <hr> <p><strong>Previous answer:</strong> We haven't tried to build TensorFlow on Windows so far: the only supported platforms are Linux (Ubuntu) and Mac OS X, and we've only built binaries for those platforms.</p> <p>For now, on Windows, the easiest way to get started with TensorFlow would be to use Docker: <a href="http://tensorflow.org/get_started/os_setup.md#docker-based_installation">http://tensorflow.org/get_started/os_setup.md#docker-based_installation</a></p> <p>It should become easier to add Windows support when Bazel (the build system we are using) adds support for building on Windows, which is <a href="https://github.com/tensorflow/tensorflow/issues/17#issuecomment-189599501">on the roadmap for Bazel 0.3</a>. You can see <a href="http://bazel.io/roadmap.html">the full Bazel roadmap here</a>.</p> <p>In the meantime, you can follow <a href="https://github.com/tensorflow/tensorflow/issues/17">issue 17 on the TensorFlow GitHub page</a>.</p>
you are the stackoverflow assistant
What&#39;s the difference between a Tensorflow Keras Model and Estimator?
<p>As <code>@jaromir</code> <a href="https://stackoverflow.com/questions/51455863/whats-the-difference-between-a-tensorflow-keras-model-and-estimator/51455864?noredirect=1#comment138512926_51455864">pointed out</a> - estimators are deprecated and unavailable from Tensorflow 2.16. Use the Keras APIs instead. From the <a href="https://www.tensorflow.org/guide/estimator" rel="nofollow noreferrer">documentation</a>:</p> <blockquote> <p><strong>Warning:</strong> TensorFlow 2.15 included the final release of the <code>tf-estimator</code> package. Estimators will not be available in TensorFlow 2.16 or after. See the <a href="https://www.tensorflow.org/guide/migrate/migrating_estimator" rel="nofollow noreferrer">migration guide</a> for more information about how to convert off of Estimators.</p> </blockquote> <p>Below is the original answer from 2018.</p> <hr /> <h2>Background</h2> <p>The Estimators API was added to Tensorflow in Release 1.1, and provides a high-level abstraction over lower-level Tensorflow core operations. It works with an Estimator instance, which is TensorFlow's high-level representation of a complete model.</p> <p><img src="https://www.tensorflow.org/images/tensorflow_programming_environment.png" alt="" /></p> <p><a href="https://keras.io/" rel="nofollow noreferrer">Keras</a> is similar to the Estimators API in that it abstracts deep learning model components such as layers, activation functions and optimizers, to make it easier for developers. It is a <em>model-level</em> library, and does not handle low-level operations, which is the job of <em>tensor manipulation libraries</em>, or <em>backends</em>. Keras supports three backends - <a href="https://www.tensorflow.org/" rel="nofollow noreferrer">Tensorflow</a>, <a href="http://deeplearning.net/software/theano/" rel="nofollow noreferrer">Theano</a> and <a href="https://learn.microsoft.com/en-us/cognitive-toolkit/" rel="nofollow noreferrer">CNTK</a>.</p> <p>Keras was not part of Tensorflow until <a href="https://github.com/tensorflow/tensorflow/releases/tag/v1.4.0" rel="nofollow noreferrer">Release 1.4.0</a> (2 Nov 2017). Now, when you use <code>tf.keras</code> (or talk about 'Tensorflow Keras'), you are simply using the Keras interface with the Tensorflow backend to build and train your model.</p> <p><img src="https://3.bp.blogspot.com/-l2UT45WGdyw/Wbe7au1nfwI/AAAAAAAAD1I/GeQcQUUWezIiaFFRCiMILlX2EYdG49C0wCLcBGAs/s1600/image6.png" alt="" /></p> <p>So both the Estimator API and Keras API provides a high-level API over low-level core Tensorflow API, and you can use either to train your model. But in most cases, if you are working with Tensorflow, you'd want to use the Estimators API for the reasons listed below.</p> <h2>Distribution</h2> <p>You can conduct distributed training across multiple servers with the Estimators API, but not with Keras API.</p> <p>From the <a href="https://www.tensorflow.org/guide/keras" rel="nofollow noreferrer">Tensorflow Keras Guide</a>, it says that:</p> <blockquote> <p>The Estimators API is used for training models for <strong>distributed environments</strong>.</p> </blockquote> <p>And from the <a href="https://www.tensorflow.org/guide/estimators#advantages_of_estimators" rel="nofollow noreferrer">Tensorflow Estimators Guide</a>, it says that:</p> <blockquote> <p>You can run Estimator-based models on a local host or on a <strong>distributed multi-server</strong> environment without changing your model. 
Furthermore, you can run Estimator-based models on CPUs, GPUs, or TPUs without recoding your model.</p> </blockquote> <h2>Pre-made Estimator</h2> <p>Whilst Keras provides abstractions that make building your models easier, you still have to write code to build your model. With Estimators, Tensorflow provides <em>Pre-made Estimators</em>, which are models you can use straight away, simply by plugging in the hyperparameters.</p> <p>Pre-made Estimators are similar to how you'd work with <a href="http://scikit-learn.org/stable/" rel="nofollow noreferrer"><code>scikit-learn</code></a>. For example, the <a href="https://www.tensorflow.org/api_docs/python/tf/estimator/LinearRegressor" rel="nofollow noreferrer"><code>tf.estimator.LinearRegressor</code></a> from Tensorflow is similar to the <a href="http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html" rel="nofollow noreferrer"><code>sklearn.linear_model.LinearRegression</code></a> from <code>scikit-learn</code>.</p> <h2>Integration with Other Tensorflow Tools</h2> <p>Tensorflow provides a visualization tool called <a href="https://github.com/tensorflow/tensorboard" rel="nofollow noreferrer">TensorBoard</a> that helps you visualize your graph and statistics. By using an Estimator, you can easily save summaries to be visualized with Tensorboard.</p> <h2>Converting Keras Model to Estimator</h2> <p>To migrate a Keras model to an Estimator, use the <a href="https://www.tensorflow.org/api_docs/python/tf/keras/estimator/model_to_estimator" rel="nofollow noreferrer"><code>tf.keras.estimator.model_to_estimator</code></a> method.</p>
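<p>A minimal sketch of that conversion (the layer sizes here are placeholders):</p>

<pre><code>import tensorflow as tf

keras_model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation='relu', input_shape=(4,)),
    tf.keras.layers.Dense(3, activation='softmax'),
])
keras_model.compile(optimizer='adam', loss='categorical_crossentropy')

estimator = tf.keras.estimator.model_to_estimator(keras_model=keras_model)
</code></pre>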
you are the stackoverflow assistant
MemoryError in TensorFlow; and &quot;successful NUMA node read from SysFS had negative value (-1)&quot; with xen
<p>There is code which prints the message "successful NUMA node read from SysFS had negative value (-1)", and it is not a fatal error, it is just a warning. The real error is the <code>MemoryError</code> in your <code>File "model_new.py", line 85, in &lt;module&gt;</code>. We would need more of the source to check this error. Try to make your model smaller or run it on a server with more RAM.</p> <hr> <p>About the NUMA node warning:</p> <p><a href="https://github.com/tensorflow/tensorflow/blob/e4296aefff97e6edd3d7cee9a09b9dd77da4c034/tensorflow/stream_executor/cuda/cuda_gpu_executor.cc#L855" rel="noreferrer">https://github.com/tensorflow/tensorflow/blob/e4296aefff97e6edd3d7cee9a09b9dd77da4c034/tensorflow/stream_executor/cuda/cuda_gpu_executor.cc#L855</a></p>

<pre class="lang-cpp prettyprint-override"><code>// Attempts to read the NUMA node corresponding to the GPU device's PCI bus out
// of SysFS. Returns -1 if it cannot...
static int TryToReadNumaNode(const string &amp;pci_bus_id, int device_ordinal) {...
  string filename =
      port::Printf("/sys/bus/pci/devices/%s/numa_node", pci_bus_id.c_str());
  FILE *file = fopen(filename.c_str(), "r");
  if (file == nullptr) {
    LOG(ERROR) &lt;&lt; "could not open file to read NUMA node: " &lt;&lt; filename
               &lt;&lt; "\nYour kernel may have been built without NUMA support.";
    return kUnknownNumaNode;
  } ...
  if (port::safe_strto32(content, &amp;value)) {
    if (value &lt; 0) {  // See http://b/18228951 for details on this path.
      LOG(INFO) &lt;&lt; "successful NUMA node read from SysFS had negative value ("
                &lt;&lt; value &lt;&lt; "), but there must be at least one NUMA node"
                            ", so returning NUMA node zero";
      fclose(file);
      return 0;
    }
</code></pre>

<p>TensorFlow was able to open the <code>/sys/bus/pci/devices/%s/numa_node</code> file, where %s is the id of the GPU PCI card (<a href="https://github.com/tensorflow/tensorflow/blob/e4296aefff97e6edd3d7cee9a09b9dd77da4c034/tensorflow/stream_executor/cuda/cuda_gpu_executor.cc#L951" rel="noreferrer"><code>string pci_bus_id = CUDADriver::GetPCIBusID(device_)</code></a>). Your PC is not multisocket, there is only a single CPU socket with an 8-core Xeon E5-2670 installed, so this id should be '0' (a single NUMA node is numbered as 0 in Linux), but the error message says that there was a <code>-1</code> value in this file!</p> <p>So, we know that sysfs is mounted into <code>/sys</code>, there is a <code>numa_node</code> special file, and CONFIG_NUMA is enabled in your Linux kernel config (<code>zgrep NUMA /boot/config* /proc/config*</code>). Actually it is enabled: <code>CONFIG_NUMA=y</code> - in the <a href="https://packages.ubuntu.com/trusty/kernel/linux-image-4.4.0-78-generic" rel="noreferrer">deb of your x86_64 4.4.0-78-generic kernel</a></p> <p>The special file <code>numa_node</code> is documented in <a href="https://www.kernel.org/doc/Documentation/ABI/testing/sysfs-bus-pci" rel="noreferrer">https://www.kernel.org/doc/Documentation/ABI/testing/sysfs-bus-pci</a> (<strong>is the ACPI of your PC wrong?</strong>)</p>

<pre><code>What:		/sys/bus/pci/devices/.../numa_node
Date:		Oct 2014
Contact:	Prarit Bhargava &lt;prarit@redhat.com&gt;
Description:
		This file contains the NUMA node to which the PCI device is
		attached, or -1 if the node is unknown. The initial value
		comes from an ACPI _PXM method or a similar firmware
		source. If that is missing or incorrect, this file can be
		written to override the node. In that case, please report
		a firmware bug to the system vendor. Writing to this file
		taints the kernel with TAINT_FIRMWARE_WORKAROUND, which
		reduces the supportability of your system.
</code></pre>
<p>There is a quick (<a href="https://en.wikipedia.org/wiki/Kludge" rel="noreferrer">kludge</a>) workaround for this error: find the <code>numa_node</code> of your GPU and, with the root account, run this command after every boot, where NNNNN is the PCI id of your card (search in the <code>lspci</code> output and in the <code>/sys/bus/pci/devices/</code> directory):</p>

<pre><code>echo 0 | sudo tee -a /sys/bus/pci/devices/NNNNN/numa_node
</code></pre>

<p>Or just echo it into every such file; it should be rather safe:</p>

<pre><code>for a in /sys/bus/pci/devices/*; do echo 0 | sudo tee -a $a/numa_node; done
</code></pre>
you are the stackoverflow assistant
What is num_units in tensorflow BasicLSTMCell?
<p>From <a href="https://jasdeep06.github.io/posts/Understanding-LSTM-in-Tensorflow-MNIST/" rel="nofollow noreferrer">this brilliant article</a></p> <blockquote> <p><code>num_units</code> can be interpreted as the analogy of hidden layer from the feed forward neural network. The number of nodes in hidden layer of a feed forward neural network is equivalent to num_units number of LSTM units in a LSTM cell at every time step of the network.</p> </blockquote> <p>See the <a href="https://github.com/jasdeep06/jasdeep06.github.io/blob/master/posts/Understanding-LSTM-in-Tensorflow-MNIST/images/num_units.png?raw=True" rel="nofollow noreferrer">image</a> there too!</p> <p><a href="https://i.sstatic.net/kGzGU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kGzGU.png" alt="enter image description here" /></a></p>
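<p>Concretely, a sketch showing how <code>num_units</code> determines the output width (TF 1.x API; the input shapes are hypothetical):</p>

<pre><code>cell = tf.nn.rnn_cell.BasicLSTMCell(num_units=128)
inputs = tf.placeholder(tf.float32, [None, 20, 50])  # batch x time steps x features
outputs, state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
print(outputs.get_shape())  # (?, 20, 128): one 128-wide vector per time step
</code></pre>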
you are the stackoverflow assistant
How to fix &quot;AttributeError: module &#39;tensorflow&#39; has no attribute &#39;get_default_graph&#39;&quot;?
<p>Please try:</p> <p><code>from tensorflow.keras.models import Sequential</code> </p> <p>instead of</p> <p><code>from keras.models import Sequential</code></p>
you are the stackoverflow assistant
Tensorflow r1.0 : could not a find a version that satisfies the requirement tensorflow
<p>I ran into the same problem.</p> <p>The command below solved it:</p>

<pre><code>pip3 install --upgrade https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.0.0-py3-none-any.whl
</code></pre>

<p>To find the list of all the URLs based on the Python version and CPU-only or GPU builds, refer to: <a href="https://www.tensorflow.org/install/pip" rel="noreferrer">https://www.tensorflow.org/install/pip</a></p>
you are the stackoverflow assistant
How does one debug NaN values in TensorFlow?
<p>There are a couple of reasons WHY you can get a NaN result. Often it is because of too high a learning rate, but plenty of other reasons are possible, for example corrupt data in your input queue or a log-of-0 calculation.</p> <p>Anyhow, debugging with a print as you describe cannot be done by a simple print (as this would result only in the printing of the tensor information inside the graph, and not print any actual values). </p> <p>However, if you use tf.Print as an op in building the graph (<a href="https://www.tensorflow.org/versions/r0.10/api_docs/python/control_flow_ops.html#Print">tf.Print</a>) then when the graph gets executed you will get the actual values printed (and it IS a good exercise to watch these values to debug and understand the behavior of your net).</p> <p>However, you are not using the print statement entirely in the correct manner. This is an op, so you need to pass it a tensor and request a result tensor that you work with later on in the executing graph. Otherwise the op is not going to be executed and no printing occurs. Try this:</p>

<pre><code>Z = tf.sqrt(Delta_tilde)
Z = tf.Print(Z,[Z], message="my Z-values:") # &lt;-------- TF PRINT STATEMENT
Z = Transform(Z) # potentially some transform, currently I have it to return Z for debugging (the identity)
Z = tf.pow(Z, 2.0)
</code></pre>
you are the stackoverflow assistant
TensorBoard - Plot training and validation losses on the same graph?
<p>The work-around I have been doing is to use two <code>SummaryWriter</code> with different log dir for training set and cross-validation set respectively. And you will see something like this:</p> <p><a href="https://i.sstatic.net/4Zqxa.png" rel="noreferrer"><img src="https://i.sstatic.net/4Zqxa.png" alt="enter image description here"></a></p>
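<p>A sketch of that setup in TF 1.x (the log directories, <code>loss</code> tensor and feed dicts are hypothetical). Writing the same summary tag from both writers makes the two curves overlay on one chart:</p>

<pre><code>loss_summary = tf.summary.scalar('loss', loss)

train_writer = tf.summary.FileWriter('logs/train', sess.graph)
val_writer = tf.summary.FileWriter('logs/validation')

# inside the training loop:
s = sess.run(loss_summary, feed_dict=train_feed)
train_writer.add_summary(s, step)
s = sess.run(loss_summary, feed_dict=val_feed)
val_writer.add_summary(s, step)
</code></pre>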
you are the stackoverflow assistant
Loss function for class imbalanced binary classifier in Tensor flow
<p>You can add class weights to the loss function by multiplying the logits. Regular cross-entropy loss is this:</p>

<pre><code>loss(x, class) = -log(exp(x[class]) / (\sum_j exp(x[j])))
               = -x[class] + log(\sum_j exp(x[j]))
</code></pre>

<p>In the weighted case:</p>

<pre><code>loss(x, class) = weights[class] * -x[class] + log(\sum_j exp(weights[class] * x[j]))
</code></pre>

<p>So by multiplying the logits, you re-scale the predictions of each class by its class weight.</p> <p>For example:</p>

<pre><code>ratio = 31.0 / (500.0 + 31.0)
class_weight = tf.constant([ratio, 1.0 - ratio])
logits = ... # shape [batch_size, 2]
weighted_logits = tf.mul(logits, class_weight) # shape [batch_size, 2]
xent = tf.nn.softmax_cross_entropy_with_logits(
  weighted_logits, labels, name="xent_raw")
</code></pre>

<p>There is now a standard loss function that supports per-example weights:</p>

<pre><code>tf.losses.sparse_softmax_cross_entropy(labels=label, logits=logits, weights=weights)
</code></pre>

<p>Where weights should be transformed from class weights to a weight per example (with shape [batch_size]). See <a href="https://www.tensorflow.org/api_docs/python/tf/nn/weighted_cross_entropy_with_logits" rel="nofollow noreferrer">documentation here</a>.</p>
you are the stackoverflow assistant
Tensorflow._api.v2.train has no attribute &#39;AdamOptimizer&#39;
<pre class="lang-py prettyprint-override"><code>tf.train.AdamOptimizer() =&gt; tf.optimizers.Adam() </code></pre> <p>From <a href="https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/optimizers" rel="noreferrer">https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/optimizers</a></p>
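<p>For example, in TF 2.x Keras code (the model itself is hypothetical):</p>

<pre><code>model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.001),
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
</code></pre>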
you are the stackoverflow assistant
Tensorflow Compile Runs For A Long Time
<p>Unfortunately, some programs can take a long time to compile. A couple of hours of compilation is not strange for tensorflow on your setup.</p> <p>There are reports of it taking 50 minutes <a href="https://gist.github.com/Brainiarc7/6d6c3f23ea057775b72c52817759b25c#gistcomment-2621082" rel="nofollow noreferrer">on a considerably faster machine</a></p> <p>A solution to this problem is to use pre-compiled binaries that are available with pip, instructions can be found here: <a href="https://www.tensorflow.org/install/pip.html" rel="nofollow noreferrer">https://www.tensorflow.org/install/pip.html</a></p> <p>Basically you can do this:</p> <pre><code>pip install tensorflow </code></pre> <p>If you require a specific older version, like 1.15, you can do this:</p> <pre><code>pip install tensorflow==1.15 </code></pre> <p>For gpu support you add <code>[and-cuda]</code> to the package name, like this:</p> <pre><code>pip install tensorflow[and-cuda] </code></pre> <p>And:</p> <pre><code>pip install tensorflow[and-cuda]==1.15 </code></pre>
you are the stackoverflow assistant
What is the difference between a sigmoid followed by the cross entropy and sigmoid_cross_entropy_with_logits in TensorFlow?
<p>You're confusing the cross-entropy for <em>binary</em> and <em>multi-class</em> problems.</p> <h2>Multi-class cross-entropy</h2> <p>The formula that you use is correct and it directly corresponds to <a href="https://www.tensorflow.org/api_docs/python/tf/nn/softmax_cross_entropy_with_logits" rel="noreferrer"><code>tf.nn.softmax_cross_entropy_with_logits</code></a>:</p>

<pre class="lang-py prettyprint-override"><code>-tf.reduce_sum(p * tf.log(q), axis=1)
</code></pre>

<p><code>p</code> and <code>q</code> are expected to be probability distributions over N classes. In particular, N can be 2, as in the following example:</p>

<pre class="lang-py prettyprint-override"><code>p = tf.placeholder(tf.float32, shape=[None, 2])
logit_q = tf.placeholder(tf.float32, shape=[None, 2])
q = tf.nn.softmax(logit_q)

feed_dict = {
  p: [[0, 1], [1, 0], [1, 0]],
  logit_q: [[0.2, 0.8], [0.7, 0.3], [0.5, 0.5]]
}

prob1 = -tf.reduce_sum(p * tf.log(q), axis=1)
prob2 = tf.nn.softmax_cross_entropy_with_logits(labels=p, logits=logit_q)
print(prob1.eval(feed_dict))  # [ 0.43748799  0.51301527  0.69314718]
print(prob2.eval(feed_dict))  # [ 0.43748799  0.51301527  0.69314718]
</code></pre>

<p>Note that <code>q</code> is computing <a href="https://www.tensorflow.org/api_docs/python/tf/nn/softmax" rel="noreferrer"><code>tf.nn.softmax</code></a>, i.e. outputs a probability distribution. So it's still the multi-class cross-entropy formula, only for N = 2.</p> <h2>Binary cross-entropy</h2> <p>This time the correct formula is</p>

<pre class="lang-py prettyprint-override"><code>p * -tf.log(q) + (1 - p) * -tf.log(1 - q)
</code></pre>

<p>Though mathematically it's a special case of the multi-class case, the <em>meaning</em> of <code>p</code> and <code>q</code> is different. In the simplest case, each <code>p</code> and <code>q</code> is a number, corresponding to the probability of class A. </p> <p><strong>Important</strong>: Don't get confused by the common <code>p * -tf.log(q)</code> part and the sum. Previously, <code>p</code> was a one-hot vector; now it's a number, zero or one. Same for <code>q</code> - it was a probability distribution; now it's a number (probability).</p> <p>If <code>p</code> is a vector, each individual component is considered an <em>independent binary classification</em>. See <a href="https://stackoverflow.com/a/47034889/712995">this answer</a> that outlines the difference between softmax and sigmoid functions in tensorflow. So the definition <code>p = [0, 0, 0, 1, 0]</code> doesn't mean a one-hot vector, but 5 different features, 4 of which are off and 1 is on. 
The definition <code>q = [0.2, 0.2, 0.2, 0.2, 0.2]</code> means that each of the 5 features is on with 20% probability.</p> <p>This explains the use of the <code>sigmoid</code> function before the cross-entropy: its goal is to squash the logit into the <code>[0, 1]</code> interval.</p> <p>The formula above still holds for multiple independent features, and that's exactly what <a href="https://www.tensorflow.org/api_docs/python/tf/nn/sigmoid_cross_entropy_with_logits" rel="noreferrer"><code>tf.nn.sigmoid_cross_entropy_with_logits</code></a> computes:</p> <pre class="lang-py prettyprint-override"><code>p = tf.placeholder(tf.float32, shape=[None, 5]) logit_q = tf.placeholder(tf.float32, shape=[None, 5]) q = tf.nn.sigmoid(logit_q) feed_dict = { p: [[0, 0, 0, 1, 0], [1, 0, 0, 0, 0]], logit_q: [[0.2, 0.2, 0.2, 0.2, 0.2], [0.3, 0.3, 0.2, 0.1, 0.1]] } prob1 = -p * tf.log(q) prob2 = p * -tf.log(q) + (1 - p) * -tf.log(1 - q) prob3 = p * -tf.log(tf.sigmoid(logit_q)) + (1-p) * -tf.log(1-tf.sigmoid(logit_q)) prob4 = tf.nn.sigmoid_cross_entropy_with_logits(labels=p, logits=logit_q) print(prob1.eval(feed_dict)) print(prob2.eval(feed_dict)) print(prob3.eval(feed_dict)) print(prob4.eval(feed_dict)) </code></pre> <p>You should see that the last three tensors are equal, while <code>prob1</code> is only part of the cross-entropy, so it contains the correct value only when <code>p</code> is <code>1</code>:</p> <pre class="lang-py prettyprint-override"><code>[[ 0. 0. 0. 0.59813893 0. ] [ 0.55435514 0. 0. 0. 0. ]] [[ 0.79813886 0.79813886 0.79813886 0.59813887 0.79813886] [ 0.5543552 0.85435522 0.79813886 0.74439669 0.74439669]] [[ 0.7981388 0.7981388 0.7981388 0.59813893 0.7981388 ] [ 0.55435514 0.85435534 0.7981388 0.74439663 0.74439663]] [[ 0.7981388 0.7981388 0.7981388 0.59813893 0.7981388 ] [ 0.55435514 0.85435534 0.7981388 0.74439663 0.74439663]] </code></pre> <p>Now it should be clear that taking a sum of <code>-p * tf.log(q)</code> along <code>axis=1</code> doesn't make sense in this setting, though it'd be a valid formula in the multi-class case.</p>
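<p>A side note on using this as a loss (a minimal sketch, not part of the comparison above): for training you would typically reduce the per-element values to a single scalar, e.g. by averaging over both the feature and batch dimensions:</p> <pre class="lang-py prettyprint-override"><code># per-element sigmoid cross-entropy, shape [batch, 5] per_element = tf.nn.sigmoid_cross_entropy_with_logits(labels=p, logits=logit_q) # reduce to one scalar loss for the optimizer loss = tf.reduce_mean(per_element) </code></pre>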
you are the stackoverflow assistant
Reset weights in Keras layer
<p>Save the initial weights right after compiling the model but before training it:</p> <pre><code>model.save_weights('model.h5') </code></pre> <p>and then after training, "reset" the model by reloading the initial weights:</p> <pre><code>model.load_weights('model.h5') </code></pre> <p>This gives you an apples-to-apples starting point for comparing different data sets, and it should be quicker than recompiling the entire model.</p>
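<p>Putting it together (a minimal sketch; <code>x_a</code>/<code>y_a</code> and <code>x_b</code>/<code>y_b</code> are hypothetical data sets):</p> <pre><code>model.compile(loss='mse', optimizer='adam') model.save_weights('initial_weights.h5') # snapshot before any training model.fit(x_a, y_a, epochs=10) # train on the first data set model.load_weights('initial_weights.h5') # reset to the snapshot model.fit(x_b, y_b, epochs=10) # train fresh on the second data set </code></pre>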
you are the stackoverflow assistant
How to make a custom activation function with only Python in Tensorflow?
<p><strong>Yes There is!</strong></p> <p><strong>Credit:</strong> It was hard to find the information and get it working but here is an example copying from the principles and code found <a href="https://github.com/tensorflow/tensorflow/issues/1095" rel="noreferrer">here</a> and <a href="https://gist.github.com/harpone/3453185b41d8d985356cbe5e57d67342" rel="noreferrer">here</a>.</p> <p><strong>Requirements:</strong> Before we start, there are two requirements for this to succeed. First, you need to be able to write your activation as a function on numpy arrays. Second, you have to be able to write the derivative of that function either as a function in Tensorflow (easier) or, in the worst case scenario, as a function on numpy arrays.</p> <p><strong>Writing Activation function:</strong></p> <p>So let's take for example this function which we would want to use as an activation function:</p> <pre><code>def spiky(x): r = x % 1 if r &lt;= 0.5: return r else: return 0 </code></pre> <p>which looks as follows: <a href="https://i.sstatic.net/gTUBr.png" rel="noreferrer"><img src="https://i.sstatic.net/gTUBr.png" alt="Spiky Activation" /></a></p> <p>The first step is making it into a numpy function; this is easy:</p> <pre><code>import numpy as np np_spiky = np.vectorize(spiky) </code></pre> <p>Now we should write its derivative.</p> <p><strong>Gradient of Activation:</strong> In our case it is easy: it is 1 if x mod 1 &lt;= 0.5 and 0 otherwise. So:</p> <pre><code>def d_spiky(x): r = x % 1 if r &lt;= 0.5: return 1 else: return 0 np_d_spiky = np.vectorize(d_spiky) </code></pre> <p>Now for the hard part of making a TensorFlow function out of it.</p> <p><strong>Making a numpy fct to a tensorflow fct:</strong> We will start by making np_d_spiky into a tensorflow function. There is a function in tensorflow <code>tf.py_func(func, inp, Tout, stateful=stateful, name=name)</code> <a href="https://www.tensorflow.org/versions/r0.9/api_docs/python/script_ops.html" rel="noreferrer">[doc]</a> which transforms any numpy function to a tensorflow function, so we can use it:</p> <pre><code>import tensorflow as tf from tensorflow.python.framework import ops np_d_spiky_32 = lambda x: np_d_spiky(x).astype(np.float32) def tf_d_spiky(x,name=None): with tf.name_scope(name, &quot;d_spiky&quot;, [x]) as name: y = tf.py_func(np_d_spiky_32, [x], [tf.float32], name=name, stateful=False) return y[0] </code></pre> <p><code>tf.py_func</code> acts on lists of tensors (and returns a list of tensors), which is why we have <code>[x]</code> (and return <code>y[0]</code>). The <code>stateful</code> option tells tensorflow whether the function always gives the same output for the same input (stateful = False), in which case tensorflow can simplify the graph; this is our case and will probably be the case in most situations. One thing to be careful of at this point is that numpy uses <code>float64</code> but tensorflow uses <code>float32</code>, so you need to convert your function to use <code>float32</code> before you can convert it to a tensorflow function, otherwise tensorflow will complain. 
This is why we need to make <code>np_d_spiky_32</code> first.</p> <p><strong>What about the Gradients?</strong> The problem with only doing the above is that even though we now have <code>tf_d_spiky</code>, the tensorflow version of <code>np_d_spiky</code>, we couldn't use it as an activation function if we wanted to, because tensorflow doesn't know how to calculate the gradients of that function.</p> <p><strong>Hack to get Gradients:</strong> As explained in the sources mentioned above, there is a hack to define gradients of a function using <code>tf.RegisterGradient</code> <a href="https://www.tensorflow.org/versions/r0.11/api_docs/python/framework.html#RegisterGradient" rel="noreferrer">[doc]</a> and <code>tf.Graph.gradient_override_map</code> <a href="https://www.tensorflow.org/versions/r0.11/api_docs/python/framework.html" rel="noreferrer">[doc]</a>. Copying the code from <a href="https://gist.github.com/harpone/3453185b41d8d985356cbe5e57d67342" rel="noreferrer">harpone</a> we can modify the <code>tf.py_func</code> function to make it define the gradient at the same time:</p> <pre><code>def py_func(func, inp, Tout, stateful=True, name=None, grad=None): # Need to generate a unique name to avoid duplicates: rnd_name = 'PyFuncGrad' + str(np.random.randint(0, 1E+8)) tf.RegisterGradient(rnd_name)(grad) # see _MySquareGrad for grad example g = tf.get_default_graph() with g.gradient_override_map({&quot;PyFunc&quot;: rnd_name}): return tf.py_func(func, inp, Tout, stateful=stateful, name=name) </code></pre> <p>Now we are almost done; the only remaining piece is that the grad function we pass to the above <code>py_func</code> function needs to take a special form. It needs to take in an operation and the incoming gradients, and propagate the gradients backward through the operation.</p> <p><strong>Gradient Function:</strong> So for our spiky activation function, this is how we would do it:</p> <pre><code>def spikygrad(op, grad): x = op.inputs[0] n_gr = tf_d_spiky(x) return grad * n_gr </code></pre> <p>The activation function has only one input, which is why <code>x = op.inputs[0]</code>. If the operation had many inputs, we would need to return a tuple, one gradient for each input. For example, if the operation was <code>a-b</code>, the gradient with respect to <code>a</code> is <code>+1</code> and with respect to <code>b</code> is <code>-1</code>, so we would have <code>return +1*grad,-1*grad</code>. Notice that we need to return tensorflow functions of the input; that is why we need <code>tf_d_spiky</code> - <code>np_d_spiky</code> would not have worked because it cannot act on tensorflow tensors. Alternatively, we could have written the derivative using tensorflow functions:</p> <pre><code>def spikygrad2(op, grad): x = op.inputs[0] r = tf.mod(x,1) n_gr = tf.to_float(tf.less_equal(r, 0.5)) return grad * n_gr </code></pre> <p><strong>Combining it all together:</strong> Now that we have all the pieces, we can combine them:</p> <pre><code>np_spiky_32 = lambda x: np_spiky(x).astype(np.float32) def tf_spiky(x, name=None): with tf.name_scope(name, &quot;spiky&quot;, [x]) as name: y = py_func(np_spiky_32, [x], [tf.float32], name=name, grad=spikygrad) # &lt;-- here's the call to the gradient return y[0] </code></pre> <p>And now we are done. 
All that remains is to test it.</p> <p><strong>Test:</strong></p> <pre><code>with tf.Session() as sess: x = tf.constant([0.2,0.7,1.2,1.7]) y = tf_spiky(x) tf.initialize_all_variables().run() print(x.eval(), y.eval(), tf.gradients(y, [x])[0].eval()) </code></pre> <blockquote> <p>[ 0.2 0.69999999 1.20000005 1.70000005] [ 0.2 0. 0.20000005 0.] [ 1. 0. 1. 0.]</p> </blockquote> <p><strong>Success!</strong></p>
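<p><strong>Usage:</strong> (a minimal sketch; the layer sizes and variable names are illustrative, not from the sources above) you can now drop <code>tf_spiky</code> in wherever you would use a built-in activation, e.g. on top of a dense layer:</p> <pre><code>x = tf.placeholder(tf.float32, [None, 100]) W = tf.Variable(tf.random_normal([100, 50])) b = tf.Variable(tf.zeros([50])) h = tf_spiky(tf.matmul(x, W) + b) # custom activation applied to the layer's pre-activations </code></pre>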
you are the stackoverflow assistant
Cuda 12 + tf-nightly 2.12: Could not find cuda drivers on your machine, GPU will not be used, while every checking is fine and in torch it works
<p>I think that, as of March 2023, the only tensorflow distribution for cuda 12 is the docker package from NVIDIA.</p> <p>A tf package built for cuda 12 should show the following info:</p> <pre><code>&gt;&gt;&gt; tf.sysconfig.get_build_info() OrderedDict([('cpu_compiler', '/usr/bin/x86_64-linux-gnu-gcc-11'), ('cuda_compute_capabilities', ['compute_86']), ('cuda_version', '12.0'), ('cudnn_version', '8'), ('is_cuda_build', True), ('is_rocm_build', False), ('is_tensorrt_build', True)]) </code></pre> <p>But if we run <code>tf.sysconfig.get_build_info()</code> on any tensorflow package installed via pip, it still reports that cuda_version is 11.x.</p> <p>So your alternatives are:</p> <ul> <li>install docker with the nvidia cloud instructions and run one of the recent containers, as sketched below</li> <li>compile tensorflow from source, either nightly or the latest release. Caveat: it takes a lot of RAM and some time, as all big compilations do, plus the occasional error to fix along the way. In my case, I had to define kFP8, the new 8-bit float.</li> <li>wait</li> </ul>
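<p>For the docker route, something along these lines should work (a sketch; the image tag is illustrative - check the NVIDIA NGC catalog for current tags - and it assumes the nvidia container toolkit is already installed):</p> <pre><code>docker run --gpus all -it --rm nvcr.io/nvidia/tensorflow:23.03-tf2-py3 </code></pre>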
you are the stackoverflow assistant
Keras: change learning rate
<p>You can change the learning rate as follows:</p> <pre><code>from keras import backend as K K.set_value(model.optimizer.learning_rate, 0.001) </code></pre> <p>Included in your complete example, it looks as follows:</p> <pre><code>from keras.models import Sequential from keras.layers import Dense from keras import backend as K import keras import numpy as np model = Sequential() model.add(Dense(1, input_shape=(10,))) optimizer = keras.optimizers.Adam(lr=0.01) model.compile(loss='mse', optimizer=optimizer) print("Learning rate before first fit:", model.optimizer.learning_rate.numpy()) model.fit(np.random.randn(50,10), np.random.randn(50), epochs=50, verbose=0) # Change learning rate to 0.001 and train for 50 more epochs K.set_value(model.optimizer.learning_rate, 0.001) print("Learning rate before second fit:", model.optimizer.learning_rate.numpy()) model.fit(np.random.randn(50,10), np.random.randn(50), initial_epoch=50, epochs=50, verbose=0) </code></pre> <p>I've just tested this with keras 2.3.1. Not sure why the approach didn't seem to work for you.</p>
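<p>If you'd rather change the rate on a schedule instead of between separate <code>fit()</code> calls, a <code>LearningRateScheduler</code> callback is an alternative (a minimal sketch; the schedule function and epoch cutoff are illustrative):</p> <pre><code>from keras.callbacks import LearningRateScheduler def schedule(epoch, lr): # keep 0.01 for the first 50 epochs, then drop to 0.001 return 0.01 if epoch &lt; 50 else 0.001 model.fit(np.random.randn(50,10), np.random.randn(50), epochs=100, verbose=0, callbacks=[LearningRateScheduler(schedule)]) </code></pre>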
you are the stackoverflow assistant
How to extract data/labels back from TensorFlow dataset
<p>In case your <a href="https://www.tensorflow.org/api_docs/python/tf/data/Dataset" rel="noreferrer"><code>tf.data.Dataset</code></a> is batched, the following code will retrieve all the y labels:</p> <pre><code>y = np.concatenate([y for x, y in ds], axis=0) </code></pre> <p><strong>Quick explanation:</strong> <code>[y for x, y in ds]</code> is known as a “list comprehension” in python. If the dataset is batched, this expression will loop through each batch, put each batch's y (a 1-D TF tensor) in the list, and return the list. Then, <code>np.concatenate</code> will take this list of 1-D tensors (implicitly converting them to numpy) and stack them along axis 0 to produce a single long vector. In summary, it just converts a bunch of little 1-D vectors into one long vector.</p> <p><strong>Note:</strong> if your y is more complex, this answer will need some minor modification.</p>
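<p>The same pattern works for the features too (a small sketch, assuming all x batches share a shape so they can be concatenated):</p> <pre><code>x = np.concatenate([x for x, y in ds], axis=0) y = np.concatenate([y for x, y in ds], axis=0) </code></pre>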
you are the stackoverflow assistant
Tensorflow estimator ValueError: logits and labels must have the same shape ((?, 1) vs (?,))
<p>You should reshape your labels into a 2-D tensor (the first dimension will be the batch dimension and the second will hold the scalar label):</p> <pre><code># Our vectorized labels y_train = np.asarray(train_labels).astype('float32').reshape((-1,1)) y_test = np.asarray(test_labels).astype('float32').reshape((-1,1)) </code></pre>
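<p>Equivalently (a minor variant with the same effect), you can add the trailing axis with <code>np.expand_dims</code>:</p> <pre><code>y_train = np.expand_dims(np.asarray(train_labels).astype('float32'), axis=-1) y_test = np.expand_dims(np.asarray(test_labels).astype('float32'), axis=-1) </code></pre>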
you are the stackoverflow assistant
How to Properly Combine TensorFlow&#39;s Dataset API and Keras?
<h3>Update June 09, 2018</h3> <ul> <li>Starting from Tensorflow 1.9, one can pass a <code>tf.data.Dataset</code> object directly into <code>keras.Model.fit()</code> and it acts similarly to <code>fit_generator</code>. </li> <li>A complete example can be found on this <strong><a href="https://gist.github.com/datlife/abfe263803691a8864b7a2d4f87c4ab8" rel="noreferrer">gist</a></strong>.</li> </ul> <pre class="lang-py prettyprint-override"><code># Load mnist training data (x_train, y_train), _ = tf.keras.datasets.mnist.load_data() training_set = tfdata_generator(x_train, y_train, is_training=True) model = # your keras model here model.fit( training_set.make_one_shot_iterator(), steps_per_epoch=len(x_train) // 128, epochs=5, verbose = 1) </code></pre> <ul> <li><code>tfdata_generator</code> is a function that returns an iterable <code>tf.data.Dataset</code>.</li> </ul> <pre class="lang-py prettyprint-override"><code>def tfdata_generator(images, labels, is_training, batch_size=128): '''Construct a data generator using `tf.Dataset`. ''' def map_fn(image, label): '''Preprocess raw data to trainable input. ''' x = tf.reshape(tf.cast(image, tf.float32), (28, 28, 1)) y = tf.one_hot(tf.cast(label, tf.uint8), _NUM_CLASSES) # _NUM_CLASSES assumed defined elsewhere (10 for mnist) return x, y dataset = tf.data.Dataset.from_tensor_slices((images, labels)) if is_training: dataset = dataset.shuffle(1000) # depends on sample size dataset = dataset.map(map_fn) dataset = dataset.batch(batch_size) dataset = dataset.repeat() dataset = dataset.prefetch(tf.contrib.data.AUTOTUNE) return dataset </code></pre> <h2>Old Solution:</h2> <p>In addition to @Yu-Yang's answer, you can also modify <code>tf.data.Dataset</code> to become a generator for <code>fit_generator</code> as follows:</p> <pre class="lang-py prettyprint-override"><code>from tensorflow.contrib.learn.python.learn.datasets import mnist data = mnist.load_mnist() model = # your Keras model model.fit_generator(generator = tfdata_generator(data.train.images, data.train.labels), steps_per_epoch=200, workers = 0 , # This is important verbose = 1) def tfdata_generator(images, labels, batch_size=128, shuffle=True): def map_func(image, label): '''A transformation function''' x_train = tf.reshape(tf.cast(image, tf.float32), image_shape) # image_shape assumed defined elsewhere y_train = tf.one_hot(tf.cast(label, tf.uint8), num_classes) # num_classes assumed defined elsewhere return [x_train, y_train] dataset = tf.data.Dataset.from_tensor_slices((images, labels)) dataset = dataset.map(map_func) if shuffle: dataset = dataset.shuffle(buffer_size=1000) # shuffle() requires a buffer size dataset = dataset.batch(batch_size).repeat() iterator = dataset.make_one_shot_iterator() next_batch = iterator.get_next() while True: yield K.get_session().run(next_batch) </code></pre>
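<p><strong>Note</strong> (a brief addition for newer versions, hedged): in TensorFlow 2.x you don't need the iterator at all; assuming the dataset already calls <code>repeat()</code>, you can pass it to <code>fit()</code> directly:</p> <pre class="lang-py prettyprint-override"><code>model.fit(training_set, steps_per_epoch=len(x_train) // 128, epochs=5) </code></pre>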
you are the stackoverflow assistant
Understanding the ResourceExhaustedError: OOM when allocating tensor with shape
<p>Let's take the issues one by one:</p> <p>Regarding tensorflow allocating all memory in advance: you can use the following code snippet to let tensorflow allocate memory only when it is needed, so that you can see how things are going:</p> <pre><code>gpu_options = tf.GPUOptions(allow_growth=True) session = tf.InteractiveSession(config=tf.ConfigProto(gpu_options=gpu_options)) </code></pre> <p>This works equally well with <code>tf.Session()</code> instead of <code>tf.InteractiveSession()</code> if you prefer.</p> <p>Second, about the sizes: as there is no information about your network size, we cannot estimate what is going wrong. However, you can debug the whole network step by step. For example, create a network with only one layer, get its output, create a session, feed values once, and visualize how much memory you consume. Iterate this debugging session until you see the point where you run out of memory. </p> <p>Please be aware that a 3840 x 155229 output is really, REALLY a big output. It means ~600M values, i.e. ~2.22 GB for that one layer alone. If you have any similar-sized layers, all of them will add up to fill your GPU memory pretty fast. </p> <p>Also, this is only for the forward pass; if you are using this layer for training, the backpropagation and the variables added by the optimizer will multiply this size by 2. So, for training you consume ~5 GB just for the output layer. </p> <p>I suggest you revise your network and try to reduce the batch size / parameter count so your model fits on the GPU.</p>
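<p>To make the arithmetic explicit (a back-of-the-envelope sketch, assuming float32 activations):</p> <pre><code>n_elements = 3840 * 155229 # ≈ 596 million values n_bytes = n_elements * 4 # 4 bytes per float32 print(n_bytes / 2**30) # ≈ 2.22 GiB for a single forward activation </code></pre>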
you are the stackoverflow assistant
What is the purpose of tf.global_variables_initializer?
<p>A more complete description is given <a href="https://www.tensorflow.org/programmers_guide/variables" rel="noreferrer">here</a>. </p> <p>Only after running <code>tf.global_variables_initializer()</code> in a session will your variables hold the values you told them to hold when you declared them (<code>tf.Variable(tf.zeros(...))</code>, <code>tf.Variable(tf.random_normal(...))</code>,...).</p> <p>From the TF doc : </p> <blockquote> <p>Calling tf.Variable() adds several ops to the graph:</p> <ul> <li>A variable op that holds the variable value.</li> <li>An initializer op that sets the variable to its initial value. This is actually a tf.assign op.</li> <li>The ops for the initial value, such as the zeros op for the biases variable in the example are also added to the graph.</li> </ul> </blockquote> <p>And also:</p> <blockquote> <p>Variable initializers must be run explicitly before other ops in your model can be run. The easiest way to do that is to add an op that runs all the variable initializers, and run that op before using the model.</p> </blockquote>
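<p>A minimal example to make this concrete (a small sketch):</p> <pre><code>import tensorflow as tf w = tf.Variable(tf.zeros([2, 2])) # declares the variable; nothing is assigned yet init = tf.global_variables_initializer() with tf.Session() as sess: sess.run(init) # only now does w actually hold its zeros print(sess.run(w)) </code></pre>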
you are the stackoverflow assistant
&quot;freeze&quot; some variables/scopes in tensorflow: stop_gradient vs passing variables to minimize
<p>The easiest way to achieve this, as you mention in your question, is to create two optimizer operations using separate calls to <code>opt.minimize(cost, ...)</code>. By default, the optimizer will use all of the variables in <a href="https://www.tensorflow.org/versions/master/api_docs/python/state_ops.html#trainable_variables"><code>tf.trainable_variables()</code></a>. If you want to filter the variables to a particular scope, you can use the optional <code>scope</code> argument to <a href="https://www.tensorflow.org/versions/master/api_docs/python/framework.html#get_collection"><code>tf.get_collection()</code></a> as follows:</p> <pre><code>optimizer = tf.train.AdagradOptimizer(0.01) first_train_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, "scope/prefix/for/first/vars") first_train_op = optimizer.minimize(cost, var_list=first_train_vars) second_train_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, "scope/prefix/for/second/vars") second_train_op = optimizer.minimize(cost, var_list=second_train_vars) </code></pre>
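<p>For comparison (a minimal sketch; <code>features</code> and <code>num_classes</code> are illustrative names): <code>tf.stop_gradient</code> freezes things at the level of the graph rather than the variable list - it blocks gradients from flowing back past a given tensor, so everything upstream of it stays fixed:</p> <pre><code>frozen = tf.stop_gradient(features) # no gradient flows back past this tensor logits = tf.layers.dense(frozen, num_classes) # only this layer's weights receive updates </code></pre> <p>Passing <code>var_list</code> is usually the cleaner choice when you want to freeze whole scopes, since the optimizer then doesn't even create slot variables for the frozen weights.</p>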
you are the stackoverflow assistant
Tensorflow python : Accessing individual elements in a tensor
<p>There are two main ways to access subsets of the elements in a tensor, either of which should work for your example.</p> <ol> <li><p>Use the indexing operator (based on <a href="https://www.tensorflow.org/versions/0.6.0/api_docs/python/array_ops.html#slice"><code>tf.slice()</code></a>) to extract a contiguous slice from the tensor.</p> <pre><code>input = tf.constant([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) output = input[0, :] print(sess.run(output)) # ==&gt; [1 2 3] </code></pre> <p>The indexing operator supports many of the same slice specifications as NumPy does.</p></li> <li><p>Use the <a href="https://www.tensorflow.org/versions/0.6.0/api_docs/python/array_ops.html#gather"><code>tf.gather()</code></a> op to select a non-contiguous slice from the tensor.</p> <pre><code>input = tf.constant([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) output = tf.gather(input, 0) print(sess.run(output)) # ==&gt; [1 2 3] output = tf.gather(input, [0, 2]) print(sess.run(output)) # ==&gt; [[1 2 3] [7 8 9]] </code></pre> <p>Note that <code>tf.gather()</code> only allows you to select whole slices in the 0th dimension (whole rows in the example of a matrix), so you may need to <a href="https://www.tensorflow.org/versions/0.6.0/api_docs/python/array_ops.html#reshape"><code>tf.reshape()</code></a> or <a href="https://www.tensorflow.org/versions/0.6.0/api_docs/python/array_ops.html#transpose"><code>tf.transpose()</code></a> your input to obtain the appropriate elements.</p></li> </ol>
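<p>For instance (a small sketch building on the transpose note above), to pull out a whole <em>column</em> you can transpose first and then gather:</p> <pre><code>input = tf.constant([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) output = tf.gather(tf.transpose(input), 1) # selects the second column print(sess.run(output)) # ==&gt; [2 5 8] </code></pre>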
you are the stackoverflow assistant