DPL_issues / data /testing_github_dataset.json
Eren5717's picture
Upload 2 files
6e5e8df verified
[
{
"instruction": "Multiple sequences",
"output": "<p dir=\"auto\">It wasn't the LSTM, it was some other level in the network. Reshaping and flattening where required fixes most issues. Thanks :)</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "Working with large datasets like Imagenet",
"output": "<p dir=\"auto\">Keras models absolutely do support batch training. The CIFAR10 example offers an example of this.</p>\n<p dir=\"auto\">What's more, you can use the image preprocessing module (data augmentation and normalization) on batches as well. Here's a quick example:</p>\n<div class=\"highlight highlight-source-python notranslate position-relative overflow-auto\" dir=\"auto\" data-snippet-clipboard-copy-content=\"datagen = ImageDataGenerator(\n featurewise_center=True, # set input mean to 0 over the dataset\n samplewise_center=False, # set each sample mean to 0\n featurewise_std_normalization=True, # divide inputs by std of the dataset\n samplewise_std_normalization=False, # divide each input by its std\n zca_whitening=False, # apply ZCA whitening\n rotation_range=20, # randomly rotate images in the range (degrees, 0 to 180)\n width_shift_range=0.2, # randomly shift images horizontally (fraction of total width)\n height_shift_range=0.2, # randomly shift images vertically (fraction of total height)\n horizontal_flip=True, # randomly flip images\n vertical_flip=False) # randomly flip images\n\ndatagen.fit(X_sample) # let's say X_sample is a small-ish but statistically representative sample of your data\n\n# let's say you have an ImageNet generator that yields ~10k samples at a time.\nfor e in range(nb_epoch):\n print(&quot;epoch %d&quot; % e)\n for X_train, Y_train in ImageNet(): # these are chunks of ~10k pictures\n for X_batch, Y_batch in datagen.flow(X_train, Y_train, batch_size=32): # these are chunks of 32 samples\n loss = model.train(X_batch, Y_batch)\n\n# Alternatively, without data augmentation / normalization:\nfor e in range(nb_epoch):\n print(&quot;epoch %d&quot; % e)\n for X_train, Y_train in ImageNet(): # these are chunks of ~10k pictures\n model.fit(X_batch, Y_batch, batch_size=32, nb_epoch=1)\"><pre class=\"notranslate\"><span class=\"pl-s1\">datagen</span> <span class=\"pl-c1\">=</span> <span class=\"pl-v\">ImageDataGenerator</span>(\n <span class=\"pl-s1\">featurewise_center</span><span class=\"pl-c1\">=</span><span class=\"pl-c1\">True</span>, <span class=\"pl-c\"># set input mean to 0 over the dataset</span>\n <span class=\"pl-s1\">samplewise_center</span><span class=\"pl-c1\">=</span><span class=\"pl-c1\">False</span>, <span class=\"pl-c\"># set each sample mean to 0</span>\n <span class=\"pl-s1\">featurewise_std_normalization</span><span class=\"pl-c1\">=</span><span class=\"pl-c1\">True</span>, <span class=\"pl-c\"># divide inputs by std of the dataset</span>\n <span class=\"pl-s1\">samplewise_std_normalization</span><span class=\"pl-c1\">=</span><span class=\"pl-c1\">False</span>, <span class=\"pl-c\"># divide each input by its std</span>\n <span class=\"pl-s1\">zca_whitening</span><span class=\"pl-c1\">=</span><span class=\"pl-c1\">False</span>, <span class=\"pl-c\"># apply ZCA whitening</span>\n <span class=\"pl-s1\">rotation_range</span><span class=\"pl-c1\">=</span><span class=\"pl-c1\">20</span>, <span class=\"pl-c\"># randomly rotate images in the range (degrees, 0 to 180)</span>\n <span class=\"pl-s1\">width_shift_range</span><span class=\"pl-c1\">=</span><span class=\"pl-c1\">0.2</span>, <span class=\"pl-c\"># randomly shift images horizontally (fraction of total width)</span>\n <span class=\"pl-s1\">height_shift_range</span><span class=\"pl-c1\">=</span><span class=\"pl-c1\">0.2</span>, <span class=\"pl-c\"># randomly shift images vertically (fraction of total height)</span>\n <span class=\"pl-s1\">horizontal_flip</span><span class=\"pl-c1\">=</span><span class=\"pl-c1\">True</span>, <span class=\"pl-c\"># randomly flip images</span>\n <span class=\"pl-s1\">vertical_flip</span><span class=\"pl-c1\">=</span><span class=\"pl-c1\">False</span>) <span class=\"pl-c\"># randomly flip images</span>\n\n<span class=\"pl-s1\">datagen</span>.<span class=\"pl-en\">fit</span>(<span class=\"pl-v\">X_sample</span>) <span class=\"pl-c\"># let's say X_sample is a small-ish but statistically representative sample of your data</span>\n\n<span class=\"pl-c\"># let's say you have an ImageNet generator that yields ~10k samples at a time.</span>\n<span class=\"pl-k\">for</span> <span class=\"pl-s1\">e</span> <span class=\"pl-c1\">in</span> <span class=\"pl-en\">range</span>(<span class=\"pl-s1\">nb_epoch</span>):\n <span class=\"pl-en\">print</span>(<span class=\"pl-s\">\"epoch %d\"</span> <span class=\"pl-c1\">%</span> <span class=\"pl-s1\">e</span>)\n <span class=\"pl-k\">for</span> <span class=\"pl-v\">X_train</span>, <span class=\"pl-v\">Y_train</span> <span class=\"pl-c1\">in</span> <span class=\"pl-v\">ImageNet</span>(): <span class=\"pl-c\"># these are chunks of ~10k pictures</span>\n <span class=\"pl-k\">for</span> <span class=\"pl-v\">X_batch</span>, <span class=\"pl-v\">Y_batch</span> <span class=\"pl-c1\">in</span> <span class=\"pl-s1\">datagen</span>.<span class=\"pl-en\">flow</span>(<span class=\"pl-v\">X_train</span>, <span class=\"pl-v\">Y_train</span>, <span class=\"pl-s1\">batch_size</span><span class=\"pl-c1\">=</span><span class=\"pl-c1\">32</span>): <span class=\"pl-c\"># these are chunks of 32 samples</span>\n <span class=\"pl-s1\">loss</span> <span class=\"pl-c1\">=</span> <span class=\"pl-s1\">model</span>.<span class=\"pl-en\">train</span>(<span class=\"pl-v\">X_batch</span>, <span class=\"pl-v\">Y_batch</span>)\n\n<span class=\"pl-c\"># Alternatively, without data augmentation / normalization:</span>\n<span class=\"pl-k\">for</span> <span class=\"pl-s1\">e</span> <span class=\"pl-c1\">in</span> <span class=\"pl-en\">range</span>(<span class=\"pl-s1\">nb_epoch</span>):\n <span class=\"pl-en\">print</span>(<span class=\"pl-s\">\"epoch %d\"</span> <span class=\"pl-c1\">%</span> <span class=\"pl-s1\">e</span>)\n <span class=\"pl-k\">for</span> <span class=\"pl-v\">X_train</span>, <span class=\"pl-v\">Y_train</span> <span class=\"pl-c1\">in</span> <span class=\"pl-v\">ImageNet</span>(): <span class=\"pl-c\"># these are chunks of ~10k pictures</span>\n <span class=\"pl-s1\">model</span>.<span class=\"pl-en\">fit</span>(<span class=\"pl-v\">X_batch</span>, <span class=\"pl-v\">Y_batch</span>, <span class=\"pl-s1\">batch_size</span><span class=\"pl-c1\">=</span><span class=\"pl-c1\">32</span>, <span class=\"pl-s1\">nb_epoch</span><span class=\"pl-c1\">=</span><span class=\"pl-c1\">1</span>)</pre></div>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "Adding Batch Size as explicit parameter for Batch Normalization layer",
"output": "<p dir=\"auto\">You're right</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "Setting up tests",
"output": "<p dir=\"auto\">Could you add a code coverage button next to the build status in the README as well?</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "Model serialization",
"output": "<p dir=\"auto\">For people coming here from Google looking for a way to serialize a Keras model: <a href=\"http://keras.io/getting-started/faq/#how-can-i-save-a-keras-model\" rel=\"nofollow\">How can I save a Keras model?</a></p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "Autoencoder Architechture",
"output": "<p dir=\"auto\">Implemented <a class=\"issue-link js-issue-link\" data-error-text=\"Failed to load title\" data-id=\"83515755\" data-permission-text=\"Title is private\" data-url=\"https://github.com/keras-team/keras/issues/180\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"/keras-team/keras/pull/180/hovercard\" href=\"https://github.com/keras-team/keras/pull/180\">#180</a></p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "Requirements for 1Dconvolution",
"output": "<p dir=\"auto\"><a class=\"user-mention notranslate\" data-hovercard-type=\"user\" data-hovercard-url=\"/users/sjhddh/hovercard\" data-octo-click=\"hovercard-link-click\" data-octo-dimensions=\"link_type:self\" href=\"https://github.com/sjhddh\">@sjhddh</a> unfortunately I got busy with other things and never finished working on it. My code assumed the rows/columns of the embeddings were flipped from how keras stores them. That branch is now also several hundred commits behind master and I don't know how much else has changed about keras...</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "cifar10.py - imports cPickle error",
"output": "<p dir=\"auto\">The change you propose do not appear to be changing anything to what the code is actually doing (it's still the same module being imported). I tried it anyway, but the encoding issue still pops up. We'd probably have to re-save the pkl files to fix it, it's a pain...</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "Accessing internal states",
"output": "<p dir=\"auto\">Sure, you can. See the answer to this question: <a class=\"issue-link js-issue-link\" data-error-text=\"Failed to load title\" data-id=\"67245973\" data-permission-text=\"Title is private\" data-url=\"https://github.com/keras-team/keras/issues/41\" data-hovercard-type=\"issue\" data-hovercard-url=\"/keras-team/keras/issues/41/hovercard\" href=\"https://github.com/keras-team/keras/issues/41\">#41</a></p>\n<p dir=\"auto\">It will probably be made more straightforward in future API upgrades.</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "glorot_normal init should be glorot_uniform?",
"output": "<p dir=\"auto\">Benchmarking on MNIST gives me better results with glorot_uniform compared to glorot_normal. glorot_uniform also appears to perform about as well as lecun_uniform.</p>\n<p dir=\"auto\">glorot_uniform:</p>\n<div class=\"snippet-clipboard-content notranslate position-relative overflow-auto\" data-snippet-clipboard-copy-content=\"Train on 37800 samples, validate on 4200 samples\nEpoch 0\nloss: 0.0257 - acc.: 0.7500 - val. loss: 0.0123 - val. acc.: 0.9348\nEpoch 1\nloss: 0.0092 - acc.: 1.0000 - val. loss: 0.0081 - val. acc.: 0.9512\nEpoch 2\nloss: 0.0112 - acc.: 0.8750 - val. loss: 0.0070 - val. acc.: 0.9590\nEpoch 3\nloss: 0.0031 - acc.: 1.0000 - val. loss: 0.0061 - val. acc.: 0.9631\nEpoch 4\nloss: 0.0029 - acc.: 1.0000 - val. loss: 0.0054 - val. acc.: 0.9664\nEpoch 5\nloss: 0.0027 - acc.: 1.0000 - val. loss: 0.0051 - val. acc.: 0.9674\nEpoch 6\nloss: 0.0047 - acc.: 1.0000 - val. loss: 0.0050 - val. acc.: 0.9657\nEpoch 7\nloss: 0.0012 - acc.: 1.0000 - val. loss: 0.0050 - val. acc.: 0.9679\nEpoch 8\nloss: 0.0119 - acc.: 0.8750 - val. loss: 0.0048 - val. acc.: 0.9700\nEpoch 9\nloss: 0.0011 - acc.: 1.0000 - val. loss: 0.0045 - val. acc.: 0.9712\"><pre class=\"notranslate\"><code class=\"notranslate\">Train on 37800 samples, validate on 4200 samples\nEpoch 0\nloss: 0.0257 - acc.: 0.7500 - val. loss: 0.0123 - val. acc.: 0.9348\nEpoch 1\nloss: 0.0092 - acc.: 1.0000 - val. loss: 0.0081 - val. acc.: 0.9512\nEpoch 2\nloss: 0.0112 - acc.: 0.8750 - val. loss: 0.0070 - val. acc.: 0.9590\nEpoch 3\nloss: 0.0031 - acc.: 1.0000 - val. loss: 0.0061 - val. acc.: 0.9631\nEpoch 4\nloss: 0.0029 - acc.: 1.0000 - val. loss: 0.0054 - val. acc.: 0.9664\nEpoch 5\nloss: 0.0027 - acc.: 1.0000 - val. loss: 0.0051 - val. acc.: 0.9674\nEpoch 6\nloss: 0.0047 - acc.: 1.0000 - val. loss: 0.0050 - val. acc.: 0.9657\nEpoch 7\nloss: 0.0012 - acc.: 1.0000 - val. loss: 0.0050 - val. acc.: 0.9679\nEpoch 8\nloss: 0.0119 - acc.: 0.8750 - val. loss: 0.0048 - val. acc.: 0.9700\nEpoch 9\nloss: 0.0011 - acc.: 1.0000 - val. loss: 0.0045 - val. acc.: 0.9712\n</code></pre></div>\n<p dir=\"auto\">glorot_normal:</p>\n<div class=\"snippet-clipboard-content notranslate position-relative overflow-auto\" data-snippet-clipboard-copy-content=\"Train on 37800 samples, validate on 4200 samples\nEpoch 0\nloss: 0.0208 - acc.: 0.8750 - val. loss: 0.0127 - val. acc.: 0.9367\nEpoch 1\nloss: 0.0113 - acc.: 1.0000 - val. loss: 0.0088 - val. acc.: 0.9490\nEpoch 2\nloss: 0.0045 - acc.: 1.0000 - val. loss: 0.0076 - val. acc.: 0.9548\nEpoch 3\nloss: 0.0245 - acc.: 0.7500 - val. loss: 0.0070 - val. acc.: 0.9598\nEpoch 4\nloss: 0.0090 - acc.: 0.8750 - val. loss: 0.0062 - val. acc.: 0.9643\nEpoch 5\nloss: 0.0032 - acc.: 1.0000 - val. loss: 0.0057 - val. acc.: 0.9660\nEpoch 6\nloss: 0.0009 - acc.: 1.0000 - val. loss: 0.0058 - val. acc.: 0.9650\nEpoch 7\nloss: 0.0032 - acc.: 1.0000 - val. loss: 0.0057 - val. acc.: 0.9643\nEpoch 8\nloss: 0.0155 - acc.: 0.8750 - val. loss: 0.0053 - val. acc.: 0.9679\nEpoch 9\nloss: 0.0053 - acc.: 1.0000 - val. loss: 0.0052 - val. acc.: 0.9679\"><pre class=\"notranslate\"><code class=\"notranslate\">Train on 37800 samples, validate on 4200 samples\nEpoch 0\nloss: 0.0208 - acc.: 0.8750 - val. loss: 0.0127 - val. acc.: 0.9367\nEpoch 1\nloss: 0.0113 - acc.: 1.0000 - val. loss: 0.0088 - val. acc.: 0.9490\nEpoch 2\nloss: 0.0045 - acc.: 1.0000 - val. loss: 0.0076 - val. acc.: 0.9548\nEpoch 3\nloss: 0.0245 - acc.: 0.7500 - val. loss: 0.0070 - val. acc.: 0.9598\nEpoch 4\nloss: 0.0090 - acc.: 0.8750 - val. loss: 0.0062 - val. acc.: 0.9643\nEpoch 5\nloss: 0.0032 - acc.: 1.0000 - val. loss: 0.0057 - val. acc.: 0.9660\nEpoch 6\nloss: 0.0009 - acc.: 1.0000 - val. loss: 0.0058 - val. acc.: 0.9650\nEpoch 7\nloss: 0.0032 - acc.: 1.0000 - val. loss: 0.0057 - val. acc.: 0.9643\nEpoch 8\nloss: 0.0155 - acc.: 0.8750 - val. loss: 0.0053 - val. acc.: 0.9679\nEpoch 9\nloss: 0.0053 - acc.: 1.0000 - val. loss: 0.0052 - val. acc.: 0.9679\n</code></pre></div>\n<p dir=\"auto\">Code is at <a href=\"https://www.kaggle.com/users/123235/fchollet/digit-recognizer/simple-deep-mlp-with-keras\" rel=\"nofollow\">https://www.kaggle.com/users/123235/fchollet/digit-recognizer/simple-deep-mlp-with-keras</a></p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "Add interrupt handlers",
"output": "<p dir=\"auto\">Any chance this can be reopened? It's fair that programs not ending on SIGINT are annoying, but I think that somebody that would add this callback explicitly would be very much aware of this fact, and that they'll just need to send SIGINT twice for the running command to stop immediately.</p>\n<p dir=\"auto\">When iterating with a model running in the cloud, I need to run some clean-up code once training ends, and I often realize that I used too many epochs too late into this process. This would help with these cases.</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "Fix in cifar example",
"output": "<p dir=\"auto\">The API for Flatten changed recently. Thanks for pointing this out! It's fixed now.</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "standardize_y does not support using alternative classes as datasets",
"output": "<p dir=\"auto\">Closed by <a class=\"issue-link js-issue-link\" data-error-text=\"Failed to load title\" data-id=\"68782798\" data-permission-text=\"Title is private\" data-url=\"https://github.com/keras-team/keras/issues/48\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"/keras-team/keras/pull/48/hovercard\" href=\"https://github.com/keras-team/keras/pull/48\">#48</a></p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "How to convert a caffe model to Keras type?",
"output": "<p dir=\"auto\">This Caffe-to-Keras weight converter is what you are looking for:</p>\n<p dir=\"auto\"><a href=\"https://github.com/pierluigiferrari/caffe_weight_converter\">https://github.com/pierluigiferrari/caffe_weight_converter</a></p>\n<p dir=\"auto\">It converts <code class=\"notranslate\">.caffemodel</code> files to <code class=\"notranslate\">.h5</code> weight files. It converts weights only, not the model definition, but the weights are really all you need anyway.</p>\n<p dir=\"auto\">For any given model, the model definition either requires only Keras core library layers, in which case it's super easy to write in Keras manually, or the model definition is complex and has custom layer types, in which case a model definition converter would probably fail anyway.</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "How to stack convolutional layer and lstm?",
"output": "<ul dir=\"auto\">\n<li>Put all the pictures in a sequence in a single batch</li>\n<li>insert the following custom layer after your convolution + flatten stage:</li>\n</ul>\n<div class=\"highlight highlight-source-python notranslate position-relative overflow-auto\" dir=\"auto\" data-snippet-clipboard-copy-content=\"class MyReshape(Layer):\n def get_output(self, train):\n X = self.get_input(train)\n nshape = (1,) + X.shape \n return theano.tensor.reshape(X, nshape)\"><pre class=\"notranslate\"><span class=\"pl-k\">class</span> <span class=\"pl-v\">MyReshape</span>(<span class=\"pl-v\">Layer</span>):\n <span class=\"pl-k\">def</span> <span class=\"pl-en\">get_output</span>(<span class=\"pl-s1\">self</span>, <span class=\"pl-s1\">train</span>):\n <span class=\"pl-v\">X</span> <span class=\"pl-c1\">=</span> <span class=\"pl-s1\">self</span>.<span class=\"pl-en\">get_input</span>(<span class=\"pl-s1\">train</span>)\n <span class=\"pl-s1\">nshape</span> <span class=\"pl-c1\">=</span> (<span class=\"pl-c1\">1</span>,) <span class=\"pl-c1\">+</span> <span class=\"pl-v\">X</span>.<span class=\"pl-s1\">shape</span> \n <span class=\"pl-k\">return</span> <span class=\"pl-s1\">theano</span>.<span class=\"pl-s1\">tensor</span>.<span class=\"pl-en\">reshape</span>(<span class=\"pl-v\">X</span>, <span class=\"pl-s1\">nshape</span>)</pre></div>\n<p dir=\"auto\">It turns a batch of N vectors into a batch of size 1 containing a sequence of N vectors.</p>\n<p dir=\"auto\">Note that if you do something like that, the length of the input and of the labels won't match, so you won't be able to use the <code class=\"notranslate\">model.fit()</code> method for training. However, you should be able to use the <code class=\"notranslate\">model.train(X, y)</code> method just fine, on small minibatches.</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "How to use GPU",
"output": "<p dir=\"auto\">Check out the comments of examples.<br>\n<a href=\"https://github.com/fchollet/keras/search?utf8=%E2%9C%93&amp;q=gpu\">https://github.com/fchollet/keras/search?utf8=\u2713&amp;q=gpu</a></p>\n<p dir=\"auto\">GPU/CPU setting is handled by Theano. so you can specify in command line or setting file ~/.theanorc<br>\n<a href=\"http://deeplearning.net/software/theano/tutorial/using_gpu.html\" rel=\"nofollow\">http://deeplearning.net/software/theano/tutorial/using_gpu.html</a></p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "MLP example doesn't make progress",
"output": "<p dir=\"auto\">I totally missed that. Must read more docs. Thank you.</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "Using the model parameters which gave best validation performance",
"output": "<p dir=\"auto\">you can just rewrite the callback class and save the best model parameters in on_batch_end() function.</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "Can we train the Recurrent Neural Network with sequential targets?",
"output": "<p dir=\"auto\">This is possible, you just have to use the <code class=\"notranslate\">return_sequences</code> constructor argument of any recurrent unit (I suggested using GRU or LSTM as recurrent unit...).</p>\n<p dir=\"auto\">With <code class=\"notranslate\">return_sequences = True</code>, the output of the unit will be a sequence of vectors (each of size <code class=\"notranslate\">output_dim</code>), one per element in the input sequence.</p>\n<p dir=\"auto\">More here: <a href=\"http://keras.io/layers/recurrent/\" rel=\"nofollow\">http://keras.io/layers/recurrent/</a></p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "missing some files which caused two examples failed!",
"output": "<p dir=\"auto\">You will have to download that particular dataset yourself. We are not hosting the download as part of the Keras datasets because the file is quite large.</p>\n<p dir=\"auto\">The header comments on this particular example have instructions on how to download the dataset.</p>\n<p dir=\"auto\">For the Otto example, you will need to get the data from Kaggle: <a href=\"https://www.kaggle.com/c/otto-group-product-classification-challenge/data\" rel=\"nofollow\">https://www.kaggle.com/c/otto-group-product-classification-challenge/data</a></p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "How could we use Leaky ReLU and Parametric ReLU as activation function ?",
"output": "<p dir=\"auto\">There's a PReLU example in the Kaggle Otto example; it can be used as a template for all of the Advanced Activation:</p>\n<div class=\"highlight highlight-source-python notranslate position-relative overflow-auto\" dir=\"auto\" data-snippet-clipboard-copy-content=\"from keras.layers.advanced_activations import LeakyReLU, PReLU\n..\n..\nmodel.add(Dense(512, 512, activation='linear')) # Add any layer, with the default of an identity/linear squashing function (no squashing)\nmodel.add(LeakyReLU(alpha=.001)) # add an advanced activation\n...\n...\nmodel.add(Dense(512, 123, activation='linear')) # Add any layer, with the default of an identity/linear squashing function (no squashing)\nmodel.add(PReLU((123,))) # add an advanced activation\"><pre class=\"notranslate\"><span class=\"pl-k\">from</span> <span class=\"pl-s1\">keras</span>.<span class=\"pl-s1\">layers</span>.<span class=\"pl-s1\">advanced_activations</span> <span class=\"pl-k\">import</span> <span class=\"pl-v\">LeakyReLU</span>, <span class=\"pl-v\">PReLU</span>\n..\n..\n<span class=\"pl-s1\">model</span>.<span class=\"pl-en\">add</span>(<span class=\"pl-v\">Dense</span>(<span class=\"pl-c1\">512</span>, <span class=\"pl-c1\">512</span>, <span class=\"pl-s1\">activation</span><span class=\"pl-c1\">=</span><span class=\"pl-s\">'linear'</span>)) <span class=\"pl-c\"># Add any layer, with the default of an identity/linear squashing function (no squashing)</span>\n<span class=\"pl-s1\">model</span>.<span class=\"pl-en\">add</span>(<span class=\"pl-v\">LeakyReLU</span>(<span class=\"pl-s1\">alpha</span><span class=\"pl-c1\">=</span><span class=\"pl-c1\">.001</span>)) <span class=\"pl-c\"># add an advanced activation</span>\n...\n...\n<span class=\"pl-s1\">model</span>.<span class=\"pl-en\">add</span>(<span class=\"pl-v\">Dense</span>(<span class=\"pl-c1\">512</span>, <span class=\"pl-c1\">123</span>, <span class=\"pl-s1\">activation</span><span class=\"pl-c1\">=</span><span class=\"pl-s\">'linear'</span>)) <span class=\"pl-c\"># Add any layer, with the default of an identity/linear squashing function (no squashing)</span>\n<span class=\"pl-s1\">model</span>.<span class=\"pl-en\">add</span>(<span class=\"pl-v\">PReLU</span>((<span class=\"pl-c1\">123</span>,))) <span class=\"pl-c\"># add an advanced activation</span></pre></div>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "What does inner_init do ?",
"output": "<p dir=\"auto\"><a class=\"user-mention notranslate\" data-hovercard-type=\"user\" data-hovercard-url=\"/users/fchollet/hovercard\" data-octo-click=\"hovercard-link-click\" data-octo-dimensions=\"link_type:self\" href=\"https://github.com/fchollet\">@fchollet</a> Thanks for your reply !</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "Is there a way to access gradients of each layer?",
"output": "<p dir=\"auto\">Even I am looking for the same thing. Is it true even if we use keras on top of tensorflow</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "Early stopping options for Keras",
"output": "<p dir=\"auto\"><code class=\"notranslate\">earlyStopping=keras.callbacks.EarlyStopping(monitor='val_loss', patience=0, verbose=0, mode='auto')</code><br>\n<code class=\"notranslate\">model.fit(X, y, batch_size=128, nb_epoch=100, verbose=1, callbacks=[earlyStopping], validation_split=0.0, validation_data=None, shuffle=True, show_accuracy=False, class_weight=None, sample_weight=None)</code></p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "How can I get the activation value of each layer?",
"output": "<p dir=\"auto\">Sure, you can. See the answer to this question: <a class=\"issue-link js-issue-link\" data-error-text=\"Failed to load title\" data-id=\"67245973\" data-permission-text=\"Title is private\" data-url=\"https://github.com/keras-team/keras/issues/41\" data-hovercard-type=\"issue\" data-hovercard-url=\"/keras-team/keras/issues/41/hovercard\" href=\"https://github.com/keras-team/keras/issues/41\">#41</a></p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "Can't get a simple XOR problem network to work, answer always array([0])",
"output": "<p dir=\"auto\">If I weren't interested in learning how to use Keras, I wouldn't have raised this issue. If you don't have the time to help, then you shouldn't spend it writing a passive-aggressive retort either.</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "[help wanted] any way to update learning rate and momentum",
"output": "<p dir=\"auto\">How is the decay applied to the learning rate? Is it multiplied by the decay factor after each epoch? Is the decay subtracted from it? Is it exponential?</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "Regression problems / continuous target",
"output": "<p dir=\"auto\">Hi Dan,</p>\n<p dir=\"auto\">Keras can in fact work with regression problems, and even multidimensional regression (e.g. autoencoders). In such cases, you would use <code class=\"notranslate\">.predict()</code> to get the output, and everything that is classification-related (<code class=\"notranslate\">class_mode</code>, <code class=\"notranslate\">show_accuracy</code>) would be irrelevant (i.e. if you tried to display classification accuracy it would be ~0 all the way).</p>\n<p dir=\"auto\">Important to note: for regression cases, you would need to use <code class=\"notranslate\">mse</code> or <code class=\"notranslate\">mae</code> as the loss, and you could't use softmax as activation (since the output of the model isn't supposed to be probabilities). I think it would be useful to introduce a regression task in the examples, to point out these gotchas...</p>\n<p dir=\"auto\">Here's a simple 2-layer unidimensional regression:</p>\n<div class=\"highlight highlight-source-python notranslate position-relative overflow-auto\" dir=\"auto\" data-snippet-clipboard-copy-content=\"from keras.models import Sequential\nfrom keras.layers.core import Dense, Activation\n\nmodel = Sequential()\nmodel.add(Dense(10, 64))\nmodel.add(Activation('tanh'))\nmodel.add(Dense(64, 1))\nmodel.compile(loss='mean_absolute_error', optimizer='rmsprop')\n\nmodel.fit(X_train, y_train, nb_epoch=20, batch_size=16)\nscore = model.evaluate(X_test, y_test, batch_size=16)\"><pre class=\"notranslate\"><span class=\"pl-k\">from</span> <span class=\"pl-s1\">keras</span>.<span class=\"pl-s1\">models</span> <span class=\"pl-k\">import</span> <span class=\"pl-v\">Sequential</span>\n<span class=\"pl-k\">from</span> <span class=\"pl-s1\">keras</span>.<span class=\"pl-s1\">layers</span>.<span class=\"pl-s1\">core</span> <span class=\"pl-k\">import</span> <span class=\"pl-v\">Dense</span>, <span class=\"pl-v\">Activation</span>\n\n<span class=\"pl-s1\">model</span> <span class=\"pl-c1\">=</span> <span class=\"pl-v\">Sequential</span>()\n<span class=\"pl-s1\">model</span>.<span class=\"pl-en\">add</span>(<span class=\"pl-v\">Dense</span>(<span class=\"pl-c1\">10</span>, <span class=\"pl-c1\">64</span>))\n<span class=\"pl-s1\">model</span>.<span class=\"pl-en\">add</span>(<span class=\"pl-v\">Activation</span>(<span class=\"pl-s\">'tanh'</span>))\n<span class=\"pl-s1\">model</span>.<span class=\"pl-en\">add</span>(<span class=\"pl-v\">Dense</span>(<span class=\"pl-c1\">64</span>, <span class=\"pl-c1\">1</span>))\n<span class=\"pl-s1\">model</span>.<span class=\"pl-en\">compile</span>(<span class=\"pl-s1\">loss</span><span class=\"pl-c1\">=</span><span class=\"pl-s\">'mean_absolute_error'</span>, <span class=\"pl-s1\">optimizer</span><span class=\"pl-c1\">=</span><span class=\"pl-s\">'rmsprop'</span>)\n\n<span class=\"pl-s1\">model</span>.<span class=\"pl-en\">fit</span>(<span class=\"pl-v\">X_train</span>, <span class=\"pl-s1\">y_train</span>, <span class=\"pl-s1\">nb_epoch</span><span class=\"pl-c1\">=</span><span class=\"pl-c1\">20</span>, <span class=\"pl-s1\">batch_size</span><span class=\"pl-c1\">=</span><span class=\"pl-c1\">16</span>)\n<span class=\"pl-s1\">score</span> <span class=\"pl-c1\">=</span> <span class=\"pl-s1\">model</span>.<span class=\"pl-en\">evaluate</span>(<span class=\"pl-v\">X_test</span>, <span class=\"pl-s1\">y_test</span>, <span class=\"pl-s1\">batch_size</span><span class=\"pl-c1\">=</span><span class=\"pl-c1\">16</span>)</pre></div>\n<p dir=\"auto\">And here's an autoencoder:</p>\n<div class=\"highlight highlight-source-python notranslate position-relative overflow-auto\" dir=\"auto\" data-snippet-clipboard-copy-content=\"from keras.models import Sequential\nfrom keras.layers.core import Dense, Activation\n\nmodel = Sequential()\nmodel.add(Dense(10, 5))\nmodel.add(Activation('tanh'))\nmodel.add(Dense(5, 10))\nmodel.compile(loss='mean_squared_error', optimizer='rmsprop')\n\nmodel.fit(X_train, X_train, nb_epoch=20, batch_size=16)\nscore = model.evaluate(X_test, X_test, batch_size=16)\"><pre class=\"notranslate\"><span class=\"pl-k\">from</span> <span class=\"pl-s1\">keras</span>.<span class=\"pl-s1\">models</span> <span class=\"pl-k\">import</span> <span class=\"pl-v\">Sequential</span>\n<span class=\"pl-k\">from</span> <span class=\"pl-s1\">keras</span>.<span class=\"pl-s1\">layers</span>.<span class=\"pl-s1\">core</span> <span class=\"pl-k\">import</span> <span class=\"pl-v\">Dense</span>, <span class=\"pl-v\">Activation</span>\n\n<span class=\"pl-s1\">model</span> <span class=\"pl-c1\">=</span> <span class=\"pl-v\">Sequential</span>()\n<span class=\"pl-s1\">model</span>.<span class=\"pl-en\">add</span>(<span class=\"pl-v\">Dense</span>(<span class=\"pl-c1\">10</span>, <span class=\"pl-c1\">5</span>))\n<span class=\"pl-s1\">model</span>.<span class=\"pl-en\">add</span>(<span class=\"pl-v\">Activation</span>(<span class=\"pl-s\">'tanh'</span>))\n<span class=\"pl-s1\">model</span>.<span class=\"pl-en\">add</span>(<span class=\"pl-v\">Dense</span>(<span class=\"pl-c1\">5</span>, <span class=\"pl-c1\">10</span>))\n<span class=\"pl-s1\">model</span>.<span class=\"pl-en\">compile</span>(<span class=\"pl-s1\">loss</span><span class=\"pl-c1\">=</span><span class=\"pl-s\">'mean_squared_error'</span>, <span class=\"pl-s1\">optimizer</span><span class=\"pl-c1\">=</span><span class=\"pl-s\">'rmsprop'</span>)\n\n<span class=\"pl-s1\">model</span>.<span class=\"pl-en\">fit</span>(<span class=\"pl-v\">X_train</span>, <span class=\"pl-v\">X_train</span>, <span class=\"pl-s1\">nb_epoch</span><span class=\"pl-c1\">=</span><span class=\"pl-c1\">20</span>, <span class=\"pl-s1\">batch_size</span><span class=\"pl-c1\">=</span><span class=\"pl-c1\">16</span>)\n<span class=\"pl-s1\">score</span> <span class=\"pl-c1\">=</span> <span class=\"pl-s1\">model</span>.<span class=\"pl-en\">evaluate</span>(<span class=\"pl-v\">X_test</span>, <span class=\"pl-v\">X_test</span>, <span class=\"pl-s1\">batch_size</span><span class=\"pl-c1\">=</span><span class=\"pl-c1\">16</span>)</pre></div>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "Could Keras handle a large dataset, for instance more than 50GB?",
"output": "<p dir=\"auto\">Keras can work with datasets that don't fit in memory, through the use of batch training.</p>\n<p dir=\"auto\">There are two ways to make this work:</p>\n<div class=\"highlight highlight-source-python notranslate position-relative overflow-auto\" dir=\"auto\" data-snippet-clipboard-copy-content=\"\n# let's say you have a BatchGenerator that yields a large batch of samples at a time\n# (but still small enough for the GPU memory)\nfor e in range(nb_epoch):\n print(&quot;epoch %d&quot; % e)\n for X_train, Y_train in BatchGenerator(): \n model.fit(X_batch, Y_batch, batch_size=32, nb_epoch=1)\n\n\n# Alternatively, let's say you have a MiniBatchGenerator that yields 32-64 samples at a time:\nfor e in range(nb_epoch):\n print(&quot;epoch %d&quot; % e)\n for X_train, Y_train in MiniBatchGenerator(): # these are chunks of ~10k pictures\n model.train(X_batch, Y_batch)\"><pre class=\"notranslate\"><span class=\"pl-c\"># let's say you have a BatchGenerator that yields a large batch of samples at a time</span>\n<span class=\"pl-c\"># (but still small enough for the GPU memory)</span>\n<span class=\"pl-k\">for</span> <span class=\"pl-s1\">e</span> <span class=\"pl-c1\">in</span> <span class=\"pl-en\">range</span>(<span class=\"pl-s1\">nb_epoch</span>):\n <span class=\"pl-en\">print</span>(<span class=\"pl-s\">\"epoch %d\"</span> <span class=\"pl-c1\">%</span> <span class=\"pl-s1\">e</span>)\n <span class=\"pl-k\">for</span> <span class=\"pl-v\">X_train</span>, <span class=\"pl-v\">Y_train</span> <span class=\"pl-c1\">in</span> <span class=\"pl-v\">BatchGenerator</span>(): \n <span class=\"pl-s1\">model</span>.<span class=\"pl-en\">fit</span>(<span class=\"pl-v\">X_batch</span>, <span class=\"pl-v\">Y_batch</span>, <span class=\"pl-s1\">batch_size</span><span class=\"pl-c1\">=</span><span class=\"pl-c1\">32</span>, <span class=\"pl-s1\">nb_epoch</span><span class=\"pl-c1\">=</span><span class=\"pl-c1\">1</span>)\n\n\n<span class=\"pl-c\"># Alternatively, let's say you have a MiniBatchGenerator that yields 32-64 samples at a time:</span>\n<span class=\"pl-k\">for</span> <span class=\"pl-s1\">e</span> <span class=\"pl-c1\">in</span> <span class=\"pl-en\">range</span>(<span class=\"pl-s1\">nb_epoch</span>):\n <span class=\"pl-en\">print</span>(<span class=\"pl-s\">\"epoch %d\"</span> <span class=\"pl-c1\">%</span> <span class=\"pl-s1\">e</span>)\n <span class=\"pl-k\">for</span> <span class=\"pl-v\">X_train</span>, <span class=\"pl-v\">Y_train</span> <span class=\"pl-c1\">in</span> <span class=\"pl-v\">MiniBatchGenerator</span>(): <span class=\"pl-c\"># these are chunks of ~10k pictures</span>\n <span class=\"pl-s1\">model</span>.<span class=\"pl-en\">train</span>(<span class=\"pl-v\">X_batch</span>, <span class=\"pl-v\">Y_batch</span>)</pre></div>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "does it automatically use multiple gpu, if availabe?",
"output": "<p dir=\"auto\"><a class=\"user-mention notranslate\" data-hovercard-type=\"user\" data-hovercard-url=\"/users/fyears/hovercard\" data-octo-click=\"hovercard-link-click\" data-octo-dimensions=\"link_type:self\" href=\"https://github.com/fyears\">@fyears</a> <a class=\"user-mention notranslate\" data-hovercard-type=\"user\" data-hovercard-url=\"/users/fchollet/hovercard\" data-octo-click=\"hovercard-link-click\" data-octo-dimensions=\"link_type:self\" href=\"https://github.com/fchollet\">@fchollet</a> I have the following code for data parallelism for Keras. The idea is to manually synchronize the model (by averaging) from multiple GPUs after each training batch:</p>\n<div class=\"highlight highlight-source-python notranslate position-relative overflow-auto\" dir=\"auto\" data-snippet-clipboard-copy-content=\"import warnings\nimport multiprocessing\nimport numpy as np\nfrom datetime import datetime\n\n# constant\nNUM_GPU=4\nNUM_EPOCH=100\nMINI_BATCH=128\n\ndef train_model(gpu_id, data_queue, model_queue, num_epoch=NUM_EPOCH, num_batch=1):\n import theano.sandbox.cuda\n theano.sandbox.cuda.use(gpu_id)\n import theano\n import theano.tensor as T\n from keras.models import Sequential\n from keras.layers.convolutional import Convolution2D\n from demosaic_cnn import Crop2D, mean_absolute_error\n\n # define the model\n model=Sequential()\n # put your model definition here\n\n # compile the model\n model.compile(loss=mean_absolute_error, optimizer='RMSprop')\n\n # train the model\n best_loss=np.inf\n best_save='_'.join((gpu_id,datetime.now().strftime('%Y_%m_%d_%H_%M_%S.h5')))\n for epoch in range(num_epoch):\n print gpu_id,'@epoch',epoch\n for batch in range(num_batch):\n print gpu_id,'@batch',batch\n data=data_queue.get()\n loss=model.train_on_batch(data[0], data[1])\n # after a batch a data, synchronize the model\n model_weight=[layer.get_weights() for layer in model.layers]\n # we need to send NUM_GPU-1 copies out\n for i in range(1,NUM_GPU):\n model_queue[gpu_id].put(model_weight)\n for k in model_queue:\n if k==gpu_id:\n continue\n # obtain the model from other GPU\n weight=model_queue[k].get()\n # sum it\n for l,w in enumerate(weight):\n model_weight[l]=[w1+w2 for w1,w2 in zip(model_weight[l],w)]\n # average it\n for l,w in enumerate(model_weight):\n model.layers[l].set_weights([d/NUM_GPU for d in w])\n # after each epoch, try to save the current best model\n if best_loss&gt;loss:\n model.save_weights(best_save, overwrite=True)\n best_loss=loss\n model_queue[gpu_id].close()\n\nif __name__=='__main__':\n data=[]\n label=[]\n num_data=len(data)\n gpu_list=['gpu{}'.format(i) for i in range(NUM_GPU)]\n # for send the data\n data_queue=multiprocessing.Queue(20)\n # for synchronize the model, we create a queue for each model\n model_queue={gpu_id:multiprocessing.Queue(2) for gpu_id in gpu_list}\n threads=[multiprocessing.Process(target=train_model(gpu_id, data_queue, model_queue, num_batch=(num_data)/MINI_BATCH/NUM_GPU)) for gpu_id in gpu_list]\n for thread in threads:\n thread.start()\n for epoch in range(NUM_EPOCH):\n print 'data@epoch',epoch\n for start in range(0,num_data,MINI_BATCH):\n print 'data@batch',start/MINI_BATCH\n data_queue.put((data[start:(start+MINI_BATCH)],label[start:(start+MINI_BATCH)]))\n data_queue.close()\n for thread in threads:\n thread.join()\"><pre class=\"notranslate\"><span class=\"pl-k\">import</span> <span class=\"pl-s1\">warnings</span>\n<span class=\"pl-k\">import</span> <span class=\"pl-s1\">multiprocessing</span>\n<span class=\"pl-k\">import</span> <span class=\"pl-s1\">numpy</span> <span class=\"pl-k\">as</span> <span class=\"pl-s1\">np</span>\n<span class=\"pl-k\">from</span> <span class=\"pl-s1\">datetime</span> <span class=\"pl-k\">import</span> <span class=\"pl-s1\">datetime</span>\n\n<span class=\"pl-c\"># constant</span>\n<span class=\"pl-v\">NUM_GPU</span><span class=\"pl-c1\">=</span><span class=\"pl-c1\">4</span>\n<span class=\"pl-v\">NUM_EPOCH</span><span class=\"pl-c1\">=</span><span class=\"pl-c1\">100</span>\n<span class=\"pl-v\">MINI_BATCH</span><span class=\"pl-c1\">=</span><span class=\"pl-c1\">128</span>\n\n<span class=\"pl-k\">def</span> <span class=\"pl-en\">train_model</span>(<span class=\"pl-s1\">gpu_id</span>, <span class=\"pl-s1\">data_queue</span>, <span class=\"pl-s1\">model_queue</span>, <span class=\"pl-s1\">num_epoch</span><span class=\"pl-c1\">=</span><span class=\"pl-v\">NUM_EPOCH</span>, <span class=\"pl-s1\">num_batch</span><span class=\"pl-c1\">=</span><span class=\"pl-c1\">1</span>):\n <span class=\"pl-k\">import</span> <span class=\"pl-s1\">theano</span>.<span class=\"pl-s1\">sandbox</span>.<span class=\"pl-s1\">cuda</span>\n <span class=\"pl-s1\">theano</span>.<span class=\"pl-s1\">sandbox</span>.<span class=\"pl-s1\">cuda</span>.<span class=\"pl-en\">use</span>(<span class=\"pl-s1\">gpu_id</span>)\n <span class=\"pl-k\">import</span> <span class=\"pl-s1\">theano</span>\n <span class=\"pl-k\">import</span> <span class=\"pl-s1\">theano</span>.<span class=\"pl-s1\">tensor</span> <span class=\"pl-k\">as</span> <span class=\"pl-v\">T</span>\n <span class=\"pl-k\">from</span> <span class=\"pl-s1\">keras</span>.<span class=\"pl-s1\">models</span> <span class=\"pl-k\">import</span> <span class=\"pl-v\">Sequential</span>\n <span class=\"pl-k\">from</span> <span class=\"pl-s1\">keras</span>.<span class=\"pl-s1\">layers</span>.<span class=\"pl-s1\">convolutional</span> <span class=\"pl-k\">import</span> <span class=\"pl-v\">Convolution2D</span>\n <span class=\"pl-k\">from</span> <span class=\"pl-s1\">demosaic_cnn</span> <span class=\"pl-k\">import</span> <span class=\"pl-v\">Crop2D</span>, <span class=\"pl-s1\">mean_absolute_error</span>\n\n <span class=\"pl-c\"># define the model</span>\n <span class=\"pl-s1\">model</span><span class=\"pl-c1\">=</span><span class=\"pl-v\">Sequential</span>()\n <span class=\"pl-c\"># put your model definition here</span>\n\n <span class=\"pl-c\"># compile the model</span>\n <span class=\"pl-s1\">model</span>.<span class=\"pl-en\">compile</span>(<span class=\"pl-s1\">loss</span><span class=\"pl-c1\">=</span><span class=\"pl-s1\">mean_absolute_error</span>, <span class=\"pl-s1\">optimizer</span><span class=\"pl-c1\">=</span><span class=\"pl-s\">'RMSprop'</span>)\n\n <span class=\"pl-c\"># train the model</span>\n <span class=\"pl-s1\">best_loss</span><span class=\"pl-c1\">=</span><span class=\"pl-s1\">np</span>.<span class=\"pl-s1\">inf</span>\n <span class=\"pl-s1\">best_save</span><span class=\"pl-c1\">=</span><span class=\"pl-s\">'_'</span>.<span class=\"pl-en\">join</span>((<span class=\"pl-s1\">gpu_id</span>,<span class=\"pl-s1\">datetime</span>.<span class=\"pl-en\">now</span>().<span class=\"pl-en\">strftime</span>(<span class=\"pl-s\">'%Y_%m_%d_%H_%M_%S.h5'</span>)))\n <span class=\"pl-k\">for</span> <span class=\"pl-s1\">epoch</span> <span class=\"pl-c1\">in</span> <span class=\"pl-en\">range</span>(<span class=\"pl-s1\">num_epoch</span>):\n <span class=\"pl-k\">print</span> <span class=\"pl-s1\">gpu_id</span>,<span class=\"pl-s\">'@epoch'</span>,<span class=\"pl-s1\">epoch</span>\n <span class=\"pl-k\">for</span> <span class=\"pl-s1\">batch</span> <span class=\"pl-c1\">in</span> <span class=\"pl-en\">range</span>(<span class=\"pl-s1\">num_batch</span>):\n <span class=\"pl-k\">print</span> <span class=\"pl-s1\">gpu_id</span>,<span class=\"pl-s\">'@batch'</span>,<span class=\"pl-s1\">batch</span>\n <span class=\"pl-s1\">data</span><span class=\"pl-c1\">=</span><span class=\"pl-s1\">data_queue</span>.<span class=\"pl-en\">get</span>()\n <span class=\"pl-s1\">loss</span><span class=\"pl-c1\">=</span><span class=\"pl-s1\">model</span>.<span class=\"pl-en\">train_on_batch</span>(<span class=\"pl-s1\">data</span>[<span class=\"pl-c1\">0</span>], <span class=\"pl-s1\">data</span>[<span class=\"pl-c1\">1</span>])\n <span class=\"pl-c\"># after a batch a data, synchronize the model</span>\n <span class=\"pl-s1\">model_weight</span><span class=\"pl-c1\">=</span>[<span class=\"pl-s1\">layer</span>.<span class=\"pl-en\">get_weights</span>() <span class=\"pl-k\">for</span> <span class=\"pl-s1\">layer</span> <span class=\"pl-c1\">in</span> <span class=\"pl-s1\">model</span>.<span class=\"pl-s1\">layers</span>]\n <span class=\"pl-c\"># we need to send NUM_GPU-1 copies out</span>\n <span class=\"pl-k\">for</span> <span class=\"pl-s1\">i</span> <span class=\"pl-c1\">in</span> <span class=\"pl-en\">range</span>(<span class=\"pl-c1\">1</span>,<span class=\"pl-v\">NUM_GPU</span>):\n <span class=\"pl-s1\">model_queue</span>[<span class=\"pl-s1\">gpu_id</span>].<span class=\"pl-en\">put</span>(<span class=\"pl-s1\">model_weight</span>)\n <span class=\"pl-k\">for</span> <span class=\"pl-s1\">k</span> <span class=\"pl-c1\">in</span> <span class=\"pl-s1\">model_queue</span>:\n <span class=\"pl-k\">if</span> <span class=\"pl-s1\">k</span><span class=\"pl-c1\">==</span><span class=\"pl-s1\">gpu_id</span>:\n <span class=\"pl-k\">continue</span>\n <span class=\"pl-c\"># obtain the model from other GPU</span>\n <span class=\"pl-s1\">weight</span><span class=\"pl-c1\">=</span><span class=\"pl-s1\">model_queue</span>[<span class=\"pl-s1\">k</span>].<span class=\"pl-en\">get</span>()\n <span class=\"pl-c\"># sum it</span>\n <span class=\"pl-k\">for</span> <span class=\"pl-s1\">l</span>,<span class=\"pl-s1\">w</span> <span class=\"pl-c1\">in</span> <span class=\"pl-en\">enumerate</span>(<span class=\"pl-s1\">weight</span>):\n <span class=\"pl-s1\">model_weight</span>[<span class=\"pl-s1\">l</span>]<span class=\"pl-c1\">=</span>[<span class=\"pl-s1\">w1</span><span class=\"pl-c1\">+</span><span class=\"pl-s1\">w2</span> <span class=\"pl-k\">for</span> <span class=\"pl-s1\">w1</span>,<span class=\"pl-s1\">w2</span> <span class=\"pl-c1\">in</span> <span class=\"pl-en\">zip</span>(<span class=\"pl-s1\">model_weight</span>[<span class=\"pl-s1\">l</span>],<span class=\"pl-s1\">w</span>)]\n <span class=\"pl-c\"># average it</span>\n <span class=\"pl-k\">for</span> <span class=\"pl-s1\">l</span>,<span class=\"pl-s1\">w</span> <span class=\"pl-c1\">in</span> <span class=\"pl-en\">enumerate</span>(<span class=\"pl-s1\">model_weight</span>):\n <span class=\"pl-s1\">model</span>.<span class=\"pl-s1\">layers</span>[<span class=\"pl-s1\">l</span>].<span class=\"pl-en\">set_weights</span>([<span class=\"pl-s1\">d</span><span class=\"pl-c1\">/</span><span class=\"pl-v\">NUM_GPU</span> <span class=\"pl-k\">for</span> <span class=\"pl-s1\">d</span> <span class=\"pl-c1\">in</span> <span class=\"pl-s1\">w</span>])\n <span class=\"pl-c\"># after each epoch, try to save the current best model</span>\n <span class=\"pl-k\">if</span> <span class=\"pl-s1\">best_loss</span><span class=\"pl-c1\">&gt;</span><span class=\"pl-s1\">loss</span>:\n <span class=\"pl-s1\">model</span>.<span class=\"pl-en\">save_weights</span>(<span class=\"pl-s1\">best_save</span>, <span class=\"pl-s1\">overwrite</span><span class=\"pl-c1\">=</span><span class=\"pl-c1\">True</span>)\n <span class=\"pl-s1\">best_loss</span><span class=\"pl-c1\">=</span><span class=\"pl-s1\">loss</span>\n <span class=\"pl-s1\">model_queue</span>[<span class=\"pl-s1\">gpu_id</span>].<span class=\"pl-en\">close</span>()\n\n<span class=\"pl-k\">if</span> <span class=\"pl-s1\">__name__</span><span class=\"pl-c1\">==</span><span class=\"pl-s\">'__main__'</span>:\n <span class=\"pl-s1\">data</span><span class=\"pl-c1\">=</span>[]\n <span class=\"pl-s1\">label</span><span class=\"pl-c1\">=</span>[]\n <span class=\"pl-s1\">num_data</span><span class=\"pl-c1\">=</span><span class=\"pl-en\">len</span>(<span class=\"pl-s1\">data</span>)\n <span class=\"pl-s1\">gpu_list</span><span class=\"pl-c1\">=</span>[<span class=\"pl-s\">'gpu{}'</span>.<span class=\"pl-en\">format</span>(<span class=\"pl-s1\">i</span>) <span class=\"pl-k\">for</span> <span class=\"pl-s1\">i</span> <span class=\"pl-c1\">in</span> <span class=\"pl-en\">range</span>(<span class=\"pl-v\">NUM_GPU</span>)]\n <span class=\"pl-c\"># for send the data</span>\n <span class=\"pl-s1\">data_queue</span><span class=\"pl-c1\">=</span><span class=\"pl-s1\">multiprocessing</span>.<span class=\"pl-v\">Queue</span>(<span class=\"pl-c1\">20</span>)\n <span class=\"pl-c\"># for synchronize the model, we create a queue for each model</span>\n <span class=\"pl-s1\">model_queue</span><span class=\"pl-c1\">=</span>{<span class=\"pl-s1\">gpu_id</span>:<span class=\"pl-s1\">multiprocessing</span>.<span class=\"pl-v\">Queue</span>(<span class=\"pl-c1\">2</span>) <span class=\"pl-k\">for</span> <span class=\"pl-s1\">gpu_id</span> <span class=\"pl-c1\">in</span> <span class=\"pl-s1\">gpu_list</span>}\n <span class=\"pl-s1\">threads</span><span class=\"pl-c1\">=</span>[<span class=\"pl-s1\">multiprocessing</span>.<span class=\"pl-v\">Process</span>(<span class=\"pl-s1\">target</span><span class=\"pl-c1\">=</span><span class=\"pl-en\">train_model</span>(<span class=\"pl-s1\">gpu_id</span>, <span class=\"pl-s1\">data_queue</span>, <span class=\"pl-s1\">model_queue</span>, <span class=\"pl-s1\">num_batch</span><span class=\"pl-c1\">=</span>(<span class=\"pl-s1\">num_data</span>)<span class=\"pl-c1\">/</span><span class=\"pl-v\">MINI_BATCH</span><span class=\"pl-c1\">/</span><span class=\"pl-v\">NUM_GPU</span>)) <span class=\"pl-k\">for</span> <span class=\"pl-s1\">gpu_id</span> <span class=\"pl-c1\">in</span> <span class=\"pl-s1\">gpu_list</span>]\n <span class=\"pl-k\">for</span> <span class=\"pl-s1\">thread</span> <span class=\"pl-c1\">in</span> <span class=\"pl-s1\">threads</span>:\n <span class=\"pl-s1\">thread</span>.<span class=\"pl-en\">start</span>()\n <span class=\"pl-k\">for</span> <span class=\"pl-s1\">epoch</span> <span class=\"pl-c1\">in</span> <span class=\"pl-en\">range</span>(<span class=\"pl-v\">NUM_EPOCH</span>):\n <span class=\"pl-k\">print</span> <span class=\"pl-s\">'data@epoch'</span>,<span class=\"pl-s1\">epoch</span>\n <span class=\"pl-k\">for</span> <span class=\"pl-s1\">start</span> <span class=\"pl-c1\">in</span> <span class=\"pl-en\">range</span>(<span class=\"pl-c1\">0</span>,<span class=\"pl-s1\">num_data</span>,<span class=\"pl-v\">MINI_BATCH</span>):\n <span class=\"pl-k\">print</span> <span class=\"pl-s\">'data@batch'</span>,<span class=\"pl-s1\">start</span><span class=\"pl-c1\">/</span><span class=\"pl-v\">MINI_BATCH</span>\n <span class=\"pl-s1\">data_queue</span>.<span class=\"pl-en\">put</span>((<span class=\"pl-s1\">data</span>[<span class=\"pl-s1\">start</span>:(<span class=\"pl-s1\">start</span><span class=\"pl-c1\">+</span><span class=\"pl-v\">MINI_BATCH</span>)],<span class=\"pl-s1\">label</span>[<span class=\"pl-s1\">start</span>:(<span class=\"pl-s1\">start</span><span class=\"pl-c1\">+</span><span class=\"pl-v\">MINI_BATCH</span>)]))\n <span class=\"pl-s1\">data_queue</span>.<span class=\"pl-en\">close</span>()\n <span class=\"pl-k\">for</span> <span class=\"pl-s1\">thread</span> <span class=\"pl-c1\">in</span> <span class=\"pl-s1\">threads</span>:\n <span class=\"pl-s1\">thread</span>.<span class=\"pl-en\">join</span>()</pre></div>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "API thoughts on supporting non-sequential models",
"output": "<p dir=\"auto\">It is very convenient, I want to concatenate 2 branches convolutional layers' feature maps, the code like this:<br>\nmodel1 = Sequential()<br>\nmodel1.add(Convolution2D(2, 1, 5, 5, border_mode='valid'))<br>\nmodel1.add(Activation('tanh'))</p>\n<p dir=\"auto\">model1.add(Convolution2D(4, 2, 3, 3, border_mode='valid'))<br>\nmodel1.add(Activation('tanh'))<br>\nmodel1.add(MaxPooling2D(poolsize=(2, 2))) # get feature maps(num = 4,size = 11*11)</p>\n<p dir=\"auto\">model2 = Sequential()<br>\nmodel2.add(Convolution2D(4, 1, 7, 7, border_mode='valid'))<br>\nmodel2.add(Activation('tanh'))<br>\nmodel2.add(MaxPooling2D(poolsize=(2, 2)))# get feature maps(num = 4,size = 11*11)</p>\n<p dir=\"auto\">model = Sequential()<br>\nmodel.add(Merge([model1, model2], mode = 'concat')) # concatenate feature maps(num = 8,size = 11*11)</p>\n<p dir=\"auto\">model.add(Convolution2D(16, 8, 3, 3, border_mode='valid'))<br>\nmodel.add(Activation('tanh'))<br>\nmodel.add(MaxPooling2D(poolsize=(2, 2)))</p>\n<p dir=\"auto\">model.add(Flatten())<br>\nmodel.add(Dense(16_4_4, 128, init='normal'))<br>\nmodel.add(Activation('tanh'))</p>\n<p dir=\"auto\">model.add(Dense(128, 10, init='normal'))<br>\nmodel.add(Activation('softmax'))</p>\n<p dir=\"auto\">sgd = SGD(l2=0.0,lr=0.05, decay=1e-6, momentum=0.9, nesterov=True)<br>\nmodel.compile(loss='categorical_crossentropy', optimizer='sgd',class_mode=\"categorical\")<br>\nmodel.fit([data,data], label, batch_size=100, nb_epoch=10,shuffle=True,verbose=1,show_accuracy=True,validation_split=0.2)</p>\n<p dir=\"auto\">It can't work well. If I use the 'sum', no problem, like this:<br>\n....<br>\nmodel = Sequential()<br>\nmodel.add(Merge([model1, model2], mode = 'sum'))</p>\n<p dir=\"auto\">model.add(Convolution2D(16, 4, 3, 3, border_mode='valid'))<br>\nmodel.add(Activation('tanh'))<br>\nmodel.add(MaxPooling2D(poolsize=(2, 2)))<br>\n....</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "suggestion: save the training history in model.train_history_",
"output": "<p dir=\"auto\">Hi,<br>\nI implemented this wrapper class for tracking the training history.</p>\n<div class=\"highlight highlight-source-python notranslate position-relative overflow-auto\" dir=\"auto\" data-snippet-clipboard-copy-content=\"from keras.models import Sequential\nfrom keras.models import load_model\n\nfrom collections import defaultdict\n\nimport pickle\n\n\ndef _merge_dict(dict_list):\n dd = defaultdict(list) \n for d in dict_list:\n for key, value in d.items():\n if not hasattr(value, '__iter__'):\n value = (value,)\n [dd[key].append(v) for v in value]\n return dict(dd)\n\ndef save(obj, name):\n try:\n filename = open(name + &quot;.pickle&quot;,&quot;wb&quot;)\n pickle.dump(obj, filename)\n filename.close()\n return(True)\n except:\n return(False)\n\ndef load(name):\n filename = open(name + &quot;.pickle&quot;,&quot;rb&quot;)\n obj = pickle.load(filename)\n filename.close() \n return(obj)\n\ndef load_model_w(name):\n model_k = load_model(name)\n history = load(name)\n model = Sequential_wrapper(model_k)\n model.history = history\n return(model)\n\nclass Sequential_wrapper():\n &quot;&quot;&quot;\n %s\n &quot;&quot;&quot;%Sequential.__doc__\n \n def __init__(self, model=Sequential()):\n self.history = {}\n self.model = model\n \n # method shortcuts\n methods = dir(self.model)\n for method in methods:\n if method.startswith('_'): continue\n if method in ['model','fit','save']: continue\n try:\n exec('self.%s = self.model.%s' % (method,method))\n except:\n pass\n \n def _update_history(self,history):\n if len(self.history)==0:\n self.history = history\n else:\n self.history = _merge_dict([self.history,history])\n \n def fit(self, x, y, batch_size=32, epochs=10, verbose=1, callbacks=None,\n validation_split=0.0, validation_data=None, shuffle=True,\n class_weight=None, sample_weight=None,\n initial_epoch=0, **kwargs):\n &quot;&quot;&quot;\n %s\n &quot;&quot;&quot;%self.model.fit.__doc__\n h = self.model.fit(x, y, batch_size, epochs, verbose, callbacks,\n validation_split, validation_data, shuffle,\n class_weight, sample_weight,\n initial_epoch, **kwargs)\n self._update_history(h.history)\n return h\n \n def save(self, filepath, overwrite=True):\n &quot;&quot;&quot;\n %s\n &quot;&quot;&quot;%self.model.save.__doc__\n save(self.history,filepath)\n self.model.save(filepath, overwrite)\"><pre class=\"notranslate\"><span class=\"pl-k\">from</span> <span class=\"pl-s1\">keras</span>.<span class=\"pl-s1\">models</span> <span class=\"pl-k\">import</span> <span class=\"pl-v\">Sequential</span>\n<span class=\"pl-k\">from</span> <span class=\"pl-s1\">keras</span>.<span class=\"pl-s1\">models</span> <span class=\"pl-k\">import</span> <span class=\"pl-s1\">load_model</span>\n\n<span class=\"pl-k\">from</span> <span class=\"pl-s1\">collections</span> <span class=\"pl-k\">import</span> <span class=\"pl-s1\">defaultdict</span>\n\n<span class=\"pl-k\">import</span> <span class=\"pl-s1\">pickle</span>\n\n\n<span class=\"pl-k\">def</span> <span class=\"pl-en\">_merge_dict</span>(<span class=\"pl-s1\">dict_list</span>):\n <span class=\"pl-s1\">dd</span> <span class=\"pl-c1\">=</span> <span class=\"pl-en\">defaultdict</span>(<span class=\"pl-s1\">list</span>) \n <span class=\"pl-k\">for</span> <span class=\"pl-s1\">d</span> <span class=\"pl-c1\">in</span> <span class=\"pl-s1\">dict_list</span>:\n <span class=\"pl-k\">for</span> <span class=\"pl-s1\">key</span>, <span class=\"pl-s1\">value</span> <span class=\"pl-c1\">in</span> <span class=\"pl-s1\">d</span>.<span class=\"pl-en\">items</span>():\n <span class=\"pl-k\">if</span> <span class=\"pl-c1\">not</span> <span class=\"pl-en\">hasattr</span>(<span class=\"pl-s1\">value</span>, <span class=\"pl-s\">'__iter__'</span>):\n <span class=\"pl-s1\">value</span> <span class=\"pl-c1\">=</span> (<span class=\"pl-s1\">value</span>,)\n [<span class=\"pl-s1\">dd</span>[<span class=\"pl-s1\">key</span>].<span class=\"pl-en\">append</span>(<span class=\"pl-s1\">v</span>) <span class=\"pl-k\">for</span> <span class=\"pl-s1\">v</span> <span class=\"pl-c1\">in</span> <span class=\"pl-s1\">value</span>]\n <span class=\"pl-k\">return</span> <span class=\"pl-en\">dict</span>(<span class=\"pl-s1\">dd</span>)\n\n<span class=\"pl-k\">def</span> <span class=\"pl-en\">save</span>(<span class=\"pl-s1\">obj</span>, <span class=\"pl-s1\">name</span>):\n <span class=\"pl-k\">try</span>:\n <span class=\"pl-s1\">filename</span> <span class=\"pl-c1\">=</span> <span class=\"pl-en\">open</span>(<span class=\"pl-s1\">name</span> <span class=\"pl-c1\">+</span> <span class=\"pl-s\">\".pickle\"</span>,<span class=\"pl-s\">\"wb\"</span>)\n <span class=\"pl-s1\">pickle</span>.<span class=\"pl-en\">dump</span>(<span class=\"pl-s1\">obj</span>, <span class=\"pl-s1\">filename</span>)\n <span class=\"pl-s1\">filename</span>.<span class=\"pl-en\">close</span>()\n <span class=\"pl-k\">return</span>(<span class=\"pl-c1\">True</span>)\n <span class=\"pl-k\">except</span>:\n <span class=\"pl-k\">return</span>(<span class=\"pl-c1\">False</span>)\n\n<span class=\"pl-k\">def</span> <span class=\"pl-en\">load</span>(<span class=\"pl-s1\">name</span>):\n <span class=\"pl-s1\">filename</span> <span class=\"pl-c1\">=</span> <span class=\"pl-en\">open</span>(<span class=\"pl-s1\">name</span> <span class=\"pl-c1\">+</span> <span class=\"pl-s\">\".pickle\"</span>,<span class=\"pl-s\">\"rb\"</span>)\n <span class=\"pl-s1\">obj</span> <span class=\"pl-c1\">=</span> <span class=\"pl-s1\">pickle</span>.<span class=\"pl-en\">load</span>(<span class=\"pl-s1\">filename</span>)\n <span class=\"pl-s1\">filename</span>.<span class=\"pl-en\">close</span>() \n <span class=\"pl-k\">return</span>(<span class=\"pl-s1\">obj</span>)\n\n<span class=\"pl-k\">def</span> <span class=\"pl-en\">load_model_w</span>(<span class=\"pl-s1\">name</span>):\n <span class=\"pl-s1\">model_k</span> <span class=\"pl-c1\">=</span> <span class=\"pl-en\">load_model</span>(<span class=\"pl-s1\">name</span>)\n <span class=\"pl-s1\">history</span> <span class=\"pl-c1\">=</span> <span class=\"pl-en\">load</span>(<span class=\"pl-s1\">name</span>)\n <span class=\"pl-s1\">model</span> <span class=\"pl-c1\">=</span> <span class=\"pl-v\">Sequential_wrapper</span>(<span class=\"pl-s1\">model_k</span>)\n <span class=\"pl-s1\">model</span>.<span class=\"pl-s1\">history</span> <span class=\"pl-c1\">=</span> <span class=\"pl-s1\">history</span>\n <span class=\"pl-k\">return</span>(<span class=\"pl-s1\">model</span>)\n\n<span class=\"pl-k\">class</span> <span class=\"pl-v\">Sequential_wrapper</span>():\n <span class=\"pl-s\">\"\"\"</span>\n<span class=\"pl-s\"> %s</span>\n<span class=\"pl-s\"> \"\"\"</span><span class=\"pl-c1\">%</span><span class=\"pl-v\">Sequential</span>.<span class=\"pl-s1\">__doc__</span>\n \n <span class=\"pl-k\">def</span> <span class=\"pl-en\">__init__</span>(<span class=\"pl-s1\">self</span>, <span class=\"pl-s1\">model</span><span class=\"pl-c1\">=</span><span class=\"pl-v\">Sequential</span>()):\n <span class=\"pl-s1\">self</span>.<span class=\"pl-s1\">history</span> <span class=\"pl-c1\">=</span> {}\n <span class=\"pl-s1\">self</span>.<span class=\"pl-s1\">model</span> <span class=\"pl-c1\">=</span> <span class=\"pl-s1\">model</span>\n \n <span class=\"pl-c\"># method shortcuts</span>\n <span class=\"pl-s1\">methods</span> <span class=\"pl-c1\">=</span> <span class=\"pl-en\">dir</span>(<span class=\"pl-s1\">self</span>.<span class=\"pl-s1\">model</span>)\n <span class=\"pl-k\">for</span> <span class=\"pl-s1\">method</span> <span class=\"pl-c1\">in</span> <span class=\"pl-s1\">methods</span>:\n <span class=\"pl-k\">if</span> <span class=\"pl-s1\">method</span>.<span class=\"pl-en\">startswith</span>(<span class=\"pl-s\">'_'</span>): <span class=\"pl-k\">continue</span>\n <span class=\"pl-k\">if</span> <span class=\"pl-s1\">method</span> <span class=\"pl-c1\">in</span> [<span class=\"pl-s\">'model'</span>,<span class=\"pl-s\">'fit'</span>,<span class=\"pl-s\">'save'</span>]: <span class=\"pl-k\">continue</span>\n <span class=\"pl-k\">try</span>:\n <span class=\"pl-en\">exec</span>(<span class=\"pl-s\">'self.%s = self.model.%s'</span> <span class=\"pl-c1\">%</span> (<span class=\"pl-s1\">method</span>,<span class=\"pl-s1\">method</span>))\n <span class=\"pl-k\">except</span>:\n <span class=\"pl-k\">pass</span>\n \n <span class=\"pl-k\">def</span> <span class=\"pl-en\">_update_history</span>(<span class=\"pl-s1\">self</span>,<span class=\"pl-s1\">history</span>):\n <span class=\"pl-k\">if</span> <span class=\"pl-en\">len</span>(<span class=\"pl-s1\">self</span>.<span class=\"pl-s1\">history</span>)<span class=\"pl-c1\">==</span><span class=\"pl-c1\">0</span>:\n <span class=\"pl-s1\">self</span>.<span class=\"pl-s1\">history</span> <span class=\"pl-c1\">=</span> <span class=\"pl-s1\">history</span>\n <span class=\"pl-k\">else</span>:\n <span class=\"pl-s1\">self</span>.<span class=\"pl-s1\">history</span> <span class=\"pl-c1\">=</span> <span class=\"pl-en\">_merge_dict</span>([<span class=\"pl-s1\">self</span>.<span class=\"pl-s1\">history</span>,<span class=\"pl-s1\">history</span>])\n \n <span class=\"pl-k\">def</span> <span class=\"pl-en\">fit</span>(<span class=\"pl-s1\">self</span>, <span class=\"pl-s1\">x</span>, <span class=\"pl-s1\">y</span>, <span class=\"pl-s1\">batch_size</span><span class=\"pl-c1\">=</span><span class=\"pl-c1\">32</span>, <span class=\"pl-s1\">epochs</span><span class=\"pl-c1\">=</span><span class=\"pl-c1\">10</span>, <span class=\"pl-s1\">verbose</span><span class=\"pl-c1\">=</span><span class=\"pl-c1\">1</span>, <span class=\"pl-s1\">callbacks</span><span class=\"pl-c1\">=</span><span class=\"pl-c1\">None</span>,\n <span class=\"pl-s1\">validation_split</span><span class=\"pl-c1\">=</span><span class=\"pl-c1\">0.0</span>, <span class=\"pl-s1\">validation_data</span><span class=\"pl-c1\">=</span><span class=\"pl-c1\">None</span>, <span class=\"pl-s1\">shuffle</span><span class=\"pl-c1\">=</span><span class=\"pl-c1\">True</span>,\n <span class=\"pl-s1\">class_weight</span><span class=\"pl-c1\">=</span><span class=\"pl-c1\">None</span>, <span class=\"pl-s1\">sample_weight</span><span class=\"pl-c1\">=</span><span class=\"pl-c1\">None</span>,\n <span class=\"pl-s1\">initial_epoch</span><span class=\"pl-c1\">=</span><span class=\"pl-c1\">0</span>, <span class=\"pl-c1\">**</span><span class=\"pl-s1\">kwargs</span>):\n <span class=\"pl-s\">\"\"\"</span>\n<span class=\"pl-s\"> %s</span>\n<span class=\"pl-s\"> \"\"\"</span><span class=\"pl-c1\">%</span><span class=\"pl-s1\">self</span>.<span class=\"pl-s1\">model</span>.<span class=\"pl-s1\">fit</span>.<span class=\"pl-s1\">__doc__</span>\n <span class=\"pl-s1\">h</span> <span class=\"pl-c1\">=</span> <span class=\"pl-s1\">self</span>.<span class=\"pl-s1\">model</span>.<span class=\"pl-en\">fit</span>(<span class=\"pl-s1\">x</span>, <span class=\"pl-s1\">y</span>, <span class=\"pl-s1\">batch_size</span>, <span class=\"pl-s1\">epochs</span>, <span class=\"pl-s1\">verbose</span>, <span class=\"pl-s1\">callbacks</span>,\n <span class=\"pl-s1\">validation_split</span>, <span class=\"pl-s1\">validation_data</span>, <span class=\"pl-s1\">shuffle</span>,\n <span class=\"pl-s1\">class_weight</span>, <span class=\"pl-s1\">sample_weight</span>,\n <span class=\"pl-s1\">initial_epoch</span>, <span class=\"pl-c1\">**</span><span class=\"pl-s1\">kwargs</span>)\n <span class=\"pl-s1\">self</span>.<span class=\"pl-en\">_update_history</span>(<span class=\"pl-s1\">h</span>.<span class=\"pl-s1\">history</span>)\n <span class=\"pl-k\">return</span> <span class=\"pl-s1\">h</span>\n \n <span class=\"pl-k\">def</span> <span class=\"pl-en\">save</span>(<span class=\"pl-s1\">self</span>, <span class=\"pl-s1\">filepath</span>, <span class=\"pl-s1\">overwrite</span><span class=\"pl-c1\">=</span><span class=\"pl-c1\">True</span>):\n <span class=\"pl-s\">\"\"\"</span>\n<span class=\"pl-s\"> %s</span>\n<span class=\"pl-s\"> \"\"\"</span><span class=\"pl-c1\">%</span><span class=\"pl-s1\">self</span>.<span class=\"pl-s1\">model</span>.<span class=\"pl-s1\">save</span>.<span class=\"pl-s1\">__doc__</span>\n <span class=\"pl-en\">save</span>(<span class=\"pl-s1\">self</span>.<span class=\"pl-s1\">history</span>,<span class=\"pl-s1\">filepath</span>)\n <span class=\"pl-s1\">self</span>.<span class=\"pl-s1\">model</span>.<span class=\"pl-en\">save</span>(<span class=\"pl-s1\">filepath</span>, <span class=\"pl-s1\">overwrite</span>)</pre></div>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "verbose=1 significantly slow down the training process.",
"output": "<p dir=\"auto\">What's your batch size?</p>\n<p dir=\"auto\">All that <code class=\"notranslate\">verbose=1</code> does is printing a log line after every batch. Printing does take a few milliseconds, so you don't want it to happen hundreds of times per second. In most cases the training time on each batch will be large enough that you won't see a difference. If the training time per batch is very small, you will see the difference, though.</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "Unable to install Keras to Anaconda3",
"output": "<p dir=\"auto\">Agreed that Keras should work with Anaconda. But in this case it sounds<br>\nlike an Anaconda issue, not a Keras issue. Keras <em>does</em> require h5py, so<br>\nthat line belongs there.</p>\n<p dir=\"auto\">On 3 December 2015 at 06:12, i5han <a href=\"mailto:notifications@github.com\">notifications@github.com</a> wrote:</p>\n<blockquote>\n<p dir=\"auto\">I belive that keras should work for anaconda as well. Although i cant<br>\nprovide any hard numbers i think thats also a significant amount of users.<br>\nAsking those users to switch to winpython just use Keras is not the most<br>\nuser friendly solution and it also acts as a hinderence to spreading Keras<br>\n:)</p>\n<p dir=\"auto\">\u2014<br>\nReply to this email directly or view it on GitHub<br>\n<a class=\"issue-link js-issue-link\" data-error-text=\"Failed to load title\" data-id=\"73842958\" data-permission-text=\"Title is private\" data-url=\"https://github.com/keras-team/keras/issues/101\" data-hovercard-type=\"issue\" data-hovercard-url=\"/keras-team/keras/issues/101/hovercard?comment_id=161651440&amp;comment_type=issue_comment\" href=\"https://github.com/keras-team/keras/issues/101#issuecomment-161651440\">#101 (comment)</a>.</p>\n</blockquote>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "Roadmap",
"output": "<p dir=\"auto\">Are there any plans for MultiNEAT/HyperNEAT type support? I've loved Keras, and the ability to utilize NEAT functionality inline would be tremendously useful.</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "\"real time\" recurrent nets",
"output": "<p dir=\"auto\">I take it it can't really be done then. Safest bet is to just export the weights and reimplement the computation in python?</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "Can i add a dropout after the input layer?",
"output": "<p dir=\"auto\">A note to future readers. Because we need to explicitly specify the size of the input we must use the \"input_shape\" parameter, ie. if before we used</p>\n<p dir=\"auto\">model = Sequential()<br>\nmodel.add(Dense(64, input_dim=20, init='uniform'))</p>\n<p dir=\"auto\">Now we would use</p>\n<p dir=\"auto\">model = Sequential()<br>\nmodel.add(Dropout(0.5, input_shape=(20,)))<br>\nmodel.add(Dense(64, init='uniform'))</p>\n<p dir=\"auto\">(This took me a while to figure out, since I haven't tried anything yet like convolutions which use the \"input_shape\" parameter, and since I hadn't read about the Abstract base layer class in the documentation, which Dropout inherits from.)</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "Is it possible to merge two different input layers into one?",
"output": "<p dir=\"auto\">I am looking for a recommendation to merge vector information with CNN output into a regression model.</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "l1, l2 regularization",
"output": "<p dir=\"auto\">The keyword syntax was broken, but it will be fixed/changed in PR <a class=\"issue-link js-issue-link\" data-error-text=\"Failed to load title\" data-id=\"70485674\" data-permission-text=\"Title is private\" data-url=\"https://github.com/keras-team/keras/issues/77\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"/keras-team/keras/pull/77/hovercard\" href=\"https://github.com/keras-team/keras/pull/77\">#77</a></p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "early stopping",
"output": "<p dir=\"auto\">What you can do is display training and testing loss and accuracy at every epoch (using the arguments validation_split, verbose, show_accuracy of the .fit method). Then you can decide for yourself what is an appropriate number of epochs.</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "Extract weight matrix",
"output": "<p dir=\"auto\">Sure. The method model.save_weights() will do it for you and store the weights to hdf5.</p>\n<p dir=\"auto\">If you want to do it manually, you'd do something like:</p>\n<div class=\"highlight highlight-source-python notranslate position-relative overflow-auto\" dir=\"auto\" data-snippet-clipboard-copy-content=\"for layer in model.layers:\n weights = layer.get_weights() # list of numpy arrays\"><pre class=\"notranslate\"><span class=\"pl-k\">for</span> <span class=\"pl-s1\">layer</span> <span class=\"pl-c1\">in</span> <span class=\"pl-s1\">model</span>.<span class=\"pl-s1\">layers</span>:\n <span class=\"pl-s1\">weights</span> <span class=\"pl-c1\">=</span> <span class=\"pl-s1\">layer</span>.<span class=\"pl-en\">get_weights</span>() <span class=\"pl-c\"># list of numpy arrays</span></pre></div>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "Problem with return_sequences=True",
"output": "<p dir=\"auto\">So, while this now works in the current version (at least it runs, I haven't checked the results):</p>\n<div class=\"snippet-clipboard-content notranslate position-relative overflow-auto\" data-snippet-clipboard-copy-content=\"model = Sequential()\nmodel.add(GRU(256, 256, return_sequences=True))\nmodel.compile(loss='mean_squared_error', optimizer='sgd')\n\ninz = np.random.randn(100,100,256)\nrez = model.predict_proba(inz) # 3d array\nmodel.fit(inz,rez)\"><pre class=\"notranslate\"><code class=\"notranslate\">model = Sequential()\nmodel.add(GRU(256, 256, return_sequences=True))\nmodel.compile(loss='mean_squared_error', optimizer='sgd')\n\ninz = np.random.randn(100,100,256)\nrez = model.predict_proba(inz) # 3d array\nmodel.fit(inz,rez)\n</code></pre></div>\n<p dir=\"auto\">as soon as the number of samples and number of time steps are different, it does not work any more:</p>\n<div class=\"snippet-clipboard-content notranslate position-relative overflow-auto\" data-snippet-clipboard-copy-content=\"model = Sequential()\nmodel.add(GRU(256, 256, return_sequences=True))\nmodel.compile(loss='mean_squared_error', optimizer='sgd')\n\ninz = np.random.randn(100,68,256)\nrez = model.predict_proba(inz) # 3d array\nmodel.fit(inz,rez)\"><pre class=\"notranslate\"><code class=\"notranslate\">model = Sequential()\nmodel.add(GRU(256, 256, return_sequences=True))\nmodel.compile(loss='mean_squared_error', optimizer='sgd')\n\ninz = np.random.randn(100,68,256)\nrez = model.predict_proba(inz) # 3d array\nmodel.fit(inz,rez)\n</code></pre></div>\n<div class=\"snippet-clipboard-content notranslate position-relative overflow-auto\" data-snippet-clipboard-copy-content=\"AssertionError Traceback (most recent call last)\n&lt;ipython-input-89-5c6ce0d8a880&gt; in &lt;module&gt;()\n----&gt; 1 model.fit(inz,rez)\n\n/usr/local/lib/python2.7/site-packages/Keras-0.1.1-py2.7.egg/keras/models.pyc in fit(self, X, y, batch_size, nb_epoch, verbose, callbacks, validation_split, validation_data, shuffle, show_accuracy, class_weight)\n 205 batch_logs['accuracy'] = acc\n 206 else:\n--&gt; 207 loss = self._train(*ins)\n 208 batch_logs['loss'] = loss\n 209 \n...\nAssertionError: Theano Assert failed!\nApply node that caused the error: Assert(InplaceDimShuffle{1,0,2}.0, Elemwise{Composite{EQ(i0, ((i1 + i2) - Switch(LT(i3, (i1 + i2)), i4, (i1 + i2))))}}.0)\nInputs types: [TensorType(float64, 3D), TensorType(int8, scalar)]\nInputs shapes: [(68, 100, 256), ()]\nInputs strides: [(2048, 139264, 8), ()]\nInputs values: ['not shown', array(0, dtype=int8)]\"><pre class=\"notranslate\"><code class=\"notranslate\">AssertionError Traceback (most recent call last)\n&lt;ipython-input-89-5c6ce0d8a880&gt; in &lt;module&gt;()\n----&gt; 1 model.fit(inz,rez)\n\n/usr/local/lib/python2.7/site-packages/Keras-0.1.1-py2.7.egg/keras/models.pyc in fit(self, X, y, batch_size, nb_epoch, verbose, callbacks, validation_split, validation_data, shuffle, show_accuracy, class_weight)\n 205 batch_logs['accuracy'] = acc\n 206 else:\n--&gt; 207 loss = self._train(*ins)\n 208 batch_logs['loss'] = loss\n 209 \n...\nAssertionError: Theano Assert failed!\nApply node that caused the error: Assert(InplaceDimShuffle{1,0,2}.0, Elemwise{Composite{EQ(i0, ((i1 + i2) - Switch(LT(i3, (i1 + i2)), i4, (i1 + i2))))}}.0)\nInputs types: [TensorType(float64, 3D), TensorType(int8, scalar)]\nInputs shapes: [(68, 100, 256), ()]\nInputs strides: [(2048, 139264, 8), ()]\nInputs values: ['not shown', array(0, dtype=int8)]\n</code></pre></div>\n<p dir=\"auto\">Is this a bug, or am I missing something?</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "How to get the output of Conv layer and FC layer?",
"output": "<p dir=\"auto\">Hey, I think I can answer your questions.</p>\n<ol dir=\"auto\">\n<li>In order to visualize the feature maps, just use the get_weights method of each layer and plot it using matplotlib. I have a working implementation of the same, and hope I'll be able to generalize it soon and create a PR.</li>\n<li>For this, the best method that works for me is to create 2 models. Train the 1st model with your labels. The 2nd model is identical to the 1st except, it does not contain the last (or all fully connected) layer (don't forget to flatten). Using get_weights method above, get the weights of the 1st model and using set_weights assign it to the 2nd model. Then use predict_proba to get the feature vectors.</li>\n</ol>\n<p dir=\"auto\">Hope it helps.</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "importing package fails",
"output": "<p dir=\"auto\">I think I have figured this out. NVM</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "Activation penalties",
"output": "<p dir=\"auto\">This is definitely something that would be nice to have, but to add it we will have to find a truly modular, transparent way to make it fit in. I am pretty sure this is possible, but it might take a few days.</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "Connecting one layer with two other layers",
"output": "<p dir=\"auto\">Now possible through the use of the Merge layer: <a href=\"http://keras.io/layers/core/#merge\" rel=\"nofollow\">http://keras.io/layers/core/#merge</a></p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "LSTM - Sequences with different num of time steps",
"output": "<p dir=\"auto\">There are two simple and most often implemented ways of handling this:</p>\n<ol dir=\"auto\">\n<li>Bucketing and Padding\n<ol dir=\"auto\">\n<li>Separate input sample into buckets that have similar length, ideally such that each bucket has a number of samples that is a multiple of the mini-batch size</li>\n<li>For each bucket, pad the samples to the length of the longest sample in that bucket with a <em>neutral</em> number. 0's are frequent, but for something like speech data, a representation of silence is used which is often not zeros (e.g. the FFT of a silent portion of audio is used as a neutral padding).</li>\n</ol>\n</li>\n<li>Bucketing\n<ol dir=\"auto\">\n<li>Separate input samples into buckets of exactly the same length\n<ul dir=\"auto\">\n<li>removes the need for determining what a neutral padding is</li>\n<li>however, the size of the buckets in this case will frequently not be a multiple of the mini-batch size, so in each epoch, multiple times the updates will not be based on a full mini-batch.</li>\n</ul>\n</li>\n</ol>\n</li>\n</ol>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "Fix batch normalization during test time",
"output": "<p dir=\"auto\"><a class=\"user-mention notranslate\" data-hovercard-type=\"user\" data-hovercard-url=\"/users/fchollet/hovercard\" data-octo-click=\"hovercard-link-click\" data-octo-dimensions=\"link_type:self\" href=\"https://github.com/fchollet\">@fchollet</a> This looks like a really old bug that has been fixed. Is this true?</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "any plan to add the support of maxout",
"output": "<p dir=\"auto\">I think this can be closed as per <a class=\"issue-link js-issue-link\" data-error-text=\"Failed to load title\" data-id=\"77081409\" data-permission-text=\"Title is private\" data-url=\"https://github.com/keras-team/keras/issues/134\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"/keras-team/keras/pull/134/hovercard\" href=\"https://github.com/keras-team/keras/pull/134\">#134</a> ?</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "Reconfiguring a model after training",
"output": "<p dir=\"auto\">At first I want to say I'm sorry that I post this here, but I could not find a solution to my problem on the net and this seems to be the best place for it.</p>\n<p dir=\"auto\">Let's say I want to train a GRU and because I need stateful=true the batch-size has to be known beforehand and the timesteps also have to be known in advance</p>\n<p dir=\"auto\">Using the functional API I would have an Input as follows:</p>\n<div class=\"highlight highlight-source-python notranslate position-relative overflow-auto\" dir=\"auto\" data-snippet-clipboard-copy-content=\"input_1 = Input(batch_shape=(batch_size, time_steps, features))\"><pre class=\"notranslate\"><span class=\"pl-s1\">input_1</span> <span class=\"pl-c1\">=</span> <span class=\"pl-v\">Input</span>(<span class=\"pl-s1\">batch_shape</span><span class=\"pl-c1\">=</span>(<span class=\"pl-s1\">batch_size</span>, <span class=\"pl-s1\">time_steps</span>, <span class=\"pl-s1\">features</span>))</pre></div>\n<p dir=\"auto\">But when I evaluate the model I don't want to pass my test data in batches with fixed timesteps. My<br>\nsolution at the moment is to load the saved model and rebuild it with:</p>\n<div class=\"highlight highlight-source-python notranslate position-relative overflow-auto\" dir=\"auto\" data-snippet-clipboard-copy-content=\"input_1 = Input(shape=(None, num_input_dim))\"><pre class=\"notranslate\"><span class=\"pl-s1\">input_1</span> <span class=\"pl-c1\">=</span> <span class=\"pl-v\">Input</span>(<span class=\"pl-s1\">shape</span><span class=\"pl-c1\">=</span>(<span class=\"pl-c1\">None</span>, <span class=\"pl-s1\">num_input_dim</span>))</pre></div>\n<p dir=\"auto\">To do that though I need a method that goes through every layer of the model and then<br>\nset the weights afterwards.</p>\n<div class=\"highlight highlight-source-python notranslate position-relative overflow-auto\" dir=\"auto\" data-snippet-clipboard-copy-content=\" input_1 = Input(shape=(None, num_input_dim))\n x1 = input_1\n weights = []\n for l in range(0, len(layers)):\n if isinstance(layers[l], keras.layers.GRU):\n x1 = GRU(layers[l].output_shape[-1], return_sequences=True)(x1)\n weights.append(layers[l].get_weights())\n elif isinstance(layers[l], keras.layers.Dense):\n x1 = Dense(layers[l].output_shape[-1], activation='tanh')(x1)\n weights.append(layers[l].get_weights())\n else:\n continue\"><pre class=\"notranslate\"> <span class=\"pl-s1\">input_1</span> <span class=\"pl-c1\">=</span> <span class=\"pl-v\">Input</span>(<span class=\"pl-s1\">shape</span><span class=\"pl-c1\">=</span>(<span class=\"pl-c1\">None</span>, <span class=\"pl-s1\">num_input_dim</span>))\n <span class=\"pl-s1\">x1</span> <span class=\"pl-c1\">=</span> <span class=\"pl-s1\">input_1</span>\n <span class=\"pl-s1\">weights</span> <span class=\"pl-c1\">=</span> []\n <span class=\"pl-k\">for</span> <span class=\"pl-s1\">l</span> <span class=\"pl-c1\">in</span> <span class=\"pl-en\">range</span>(<span class=\"pl-c1\">0</span>, <span class=\"pl-en\">len</span>(<span class=\"pl-s1\">layers</span>)):\n <span class=\"pl-k\">if</span> <span class=\"pl-en\">isinstance</span>(<span class=\"pl-s1\">layers</span>[<span class=\"pl-s1\">l</span>], <span class=\"pl-s1\">keras</span>.<span class=\"pl-s1\">layers</span>.<span class=\"pl-v\">GRU</span>):\n <span class=\"pl-s1\">x1</span> <span class=\"pl-c1\">=</span> <span class=\"pl-v\">GRU</span>(<span class=\"pl-s1\">layers</span>[<span class=\"pl-s1\">l</span>].<span class=\"pl-s1\">output_shape</span>[<span class=\"pl-c1\">-</span><span class=\"pl-c1\">1</span>], <span class=\"pl-s1\">return_sequences</span><span class=\"pl-c1\">=</span><span class=\"pl-c1\">True</span>)(<span class=\"pl-s1\">x1</span>)\n <span class=\"pl-s1\">weights</span>.<span class=\"pl-en\">append</span>(<span class=\"pl-s1\">layers</span>[<span class=\"pl-s1\">l</span>].<span class=\"pl-en\">get_weights</span>())\n <span class=\"pl-k\">elif</span> <span class=\"pl-en\">isinstance</span>(<span class=\"pl-s1\">layers</span>[<span class=\"pl-s1\">l</span>], <span class=\"pl-s1\">keras</span>.<span class=\"pl-s1\">layers</span>.<span class=\"pl-v\">Dense</span>):\n <span class=\"pl-s1\">x1</span> <span class=\"pl-c1\">=</span> <span class=\"pl-v\">Dense</span>(<span class=\"pl-s1\">layers</span>[<span class=\"pl-s1\">l</span>].<span class=\"pl-s1\">output_shape</span>[<span class=\"pl-c1\">-</span><span class=\"pl-c1\">1</span>], <span class=\"pl-s1\">activation</span><span class=\"pl-c1\">=</span><span class=\"pl-s\">'tanh'</span>)(<span class=\"pl-s1\">x1</span>)\n <span class=\"pl-s1\">weights</span>.<span class=\"pl-en\">append</span>(<span class=\"pl-s1\">layers</span>[<span class=\"pl-s1\">l</span>].<span class=\"pl-en\">get_weights</span>())\n <span class=\"pl-k\">else</span>:\n <span class=\"pl-k\">continue</span></pre></div>\n<p dir=\"auto\">(This is just an example and I find this solution very unelegant.)</p>\n<p dir=\"auto\">There must be a better way to redefine the input shape. Can somebody help me out here<br>\nplease.</p>\n<hr>\n<p dir=\"auto\">From the discussion before I take it that I do not have to redefine the layers to</p>\n<div class=\"highlight highlight-source-python notranslate position-relative overflow-auto\" dir=\"auto\" data-snippet-clipboard-copy-content=\"stateful = False\"><pre class=\"notranslate\"><span class=\"pl-s1\">stateful</span> <span class=\"pl-c1\">=</span> <span class=\"pl-c1\">False</span></pre></div>\n<p dir=\"auto\">for testing purposes.</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "no pip yet?",
"output": "<p dir=\"auto\">You can install it with pip using the following command:</p>\n<p dir=\"auto\"><code class=\"notranslate\">pip install git+git://github.com/fchollet/keras.git</code></p>\n<p dir=\"auto\">The library is not available at PyPI yet as it is still alpha. We should probably think about releasing it on PyPI after we reach a stable version.</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "Move regularizers to layer definitions?",
"output": "<p dir=\"auto\">I created a pull request with a working implementation. Let me know what you think.</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "Batch Embedding",
"output": "<p dir=\"auto\">Nevermind, I figured it out eventually: moving from embedding/feedforward/recurrent requires a lot of reshape and flatten inbetween.</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "Rename Time Distributed dense/softmax",
"output": "<p dir=\"auto\">I personally have no opinion. Maybe \"time distributed\" is less ambiguous, as \"distributed\" implied that you are doing the same thing over different timesteps, rather than doing a temporal operation (like a temporal convolution, for instance).<br>\nBut maybe that's also clear with \"temporal\".</p>\n<p dir=\"auto\">It would be interesting to gather a few opinions from users, about which name sounds more natural and explicit.</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "__init__ methods in layers/embeddings.py make reference to argument that is not in the argument list",
"output": "<p dir=\"auto\">Just fixed this.</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "Initiate a ToDo List ",
"output": "<p dir=\"auto\">+1 for tagging issues with a few milestones. Knowing what's mission critical before a v1 / PyPI release will help make sure development effort is focused in the right direction.</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "How can I get hidden layer representation of the given data?",
"output": "<p dir=\"auto\">One simple way to do it is to use the weights of your model to build a new model that's truncated at the layer you want to read. Then you can run the <code class=\"notranslate\">._predict(X_batch)</code> method to get the activations for a batch of inputs.</p>\n<p dir=\"auto\">Example:</p>\n<div class=\"highlight highlight-source-python notranslate position-relative overflow-auto\" dir=\"auto\" data-snippet-clipboard-copy-content=\"# this is your initial model\nmodel = Sequential()\nmodel.add(Dense(20, 64, init='uniform'))\nmodel.add(Activation('tanh'))\nmodel.add(Dense(64, 1, init='uniform'))\nmodel.add(Activation('softmax'))\n\n# we train it\nmodel.compile(loss='mse', optimizer='sgd')\nmodel.fit(X_train, y_train, nb_epoch=20, batch_size=16)\n\n# we build a new model with the activations of the old model\n# this model is truncated after the first layer\nmodel2 = Sequential()\nmodel2.add(Dense(20, 64, weights=model.layers[0].get_weights()))\nmodel2.add(Activation('tanh'))\n\nactivations = model2._predict(X_batch)\"><pre class=\"notranslate\"><span class=\"pl-c\"># this is your initial model</span>\n<span class=\"pl-s1\">model</span> <span class=\"pl-c1\">=</span> <span class=\"pl-v\">Sequential</span>()\n<span class=\"pl-s1\">model</span>.<span class=\"pl-en\">add</span>(<span class=\"pl-v\">Dense</span>(<span class=\"pl-c1\">20</span>, <span class=\"pl-c1\">64</span>, <span class=\"pl-s1\">init</span><span class=\"pl-c1\">=</span><span class=\"pl-s\">'uniform'</span>))\n<span class=\"pl-s1\">model</span>.<span class=\"pl-en\">add</span>(<span class=\"pl-v\">Activation</span>(<span class=\"pl-s\">'tanh'</span>))\n<span class=\"pl-s1\">model</span>.<span class=\"pl-en\">add</span>(<span class=\"pl-v\">Dense</span>(<span class=\"pl-c1\">64</span>, <span class=\"pl-c1\">1</span>, <span class=\"pl-s1\">init</span><span class=\"pl-c1\">=</span><span class=\"pl-s\">'uniform'</span>))\n<span class=\"pl-s1\">model</span>.<span class=\"pl-en\">add</span>(<span class=\"pl-v\">Activation</span>(<span class=\"pl-s\">'softmax'</span>))\n\n<span class=\"pl-c\"># we train it</span>\n<span class=\"pl-s1\">model</span>.<span class=\"pl-en\">compile</span>(<span class=\"pl-s1\">loss</span><span class=\"pl-c1\">=</span><span class=\"pl-s\">'mse'</span>, <span class=\"pl-s1\">optimizer</span><span class=\"pl-c1\">=</span><span class=\"pl-s\">'sgd'</span>)\n<span class=\"pl-s1\">model</span>.<span class=\"pl-en\">fit</span>(<span class=\"pl-v\">X_train</span>, <span class=\"pl-s1\">y_train</span>, <span class=\"pl-s1\">nb_epoch</span><span class=\"pl-c1\">=</span><span class=\"pl-c1\">20</span>, <span class=\"pl-s1\">batch_size</span><span class=\"pl-c1\">=</span><span class=\"pl-c1\">16</span>)\n\n<span class=\"pl-c\"># we build a new model with the activations of the old model</span>\n<span class=\"pl-c\"># this model is truncated after the first layer</span>\n<span class=\"pl-s1\">model2</span> <span class=\"pl-c1\">=</span> <span class=\"pl-v\">Sequential</span>()\n<span class=\"pl-s1\">model2</span>.<span class=\"pl-en\">add</span>(<span class=\"pl-v\">Dense</span>(<span class=\"pl-c1\">20</span>, <span class=\"pl-c1\">64</span>, <span class=\"pl-s1\">weights</span><span class=\"pl-c1\">=</span><span class=\"pl-s1\">model</span>.<span class=\"pl-s1\">layers</span>[<span class=\"pl-c1\">0</span>].<span class=\"pl-en\">get_weights</span>()))\n<span class=\"pl-s1\">model2</span>.<span class=\"pl-en\">add</span>(<span class=\"pl-v\">Activation</span>(<span class=\"pl-s\">'tanh'</span>))\n\n<span class=\"pl-s1\">activations</span> <span class=\"pl-c1\">=</span> <span class=\"pl-s1\">model2</span>.<span class=\"pl-en\">_predict</span>(<span class=\"pl-v\">X_batch</span>)</pre></div>\n<p dir=\"auto\">Note: I haven't tested it.</p>\n<p dir=\"auto\">Another way to do it would be to define a Theano function to get the layer's output:</p>\n<div class=\"highlight highlight-source-python notranslate position-relative overflow-auto\" dir=\"auto\" data-snippet-clipboard-copy-content=\"import theano\nget_activations = theano.function([model.layers[0].input], model.layers[1].output(train=False), allow_input_downcast=True)\nactivations = get_activations(X_batch) # same result as above\"><pre class=\"notranslate\"><span class=\"pl-k\">import</span> <span class=\"pl-s1\">theano</span>\n<span class=\"pl-s1\">get_activations</span> <span class=\"pl-c1\">=</span> <span class=\"pl-s1\">theano</span>.<span class=\"pl-en\">function</span>([<span class=\"pl-s1\">model</span>.<span class=\"pl-s1\">layers</span>[<span class=\"pl-c1\">0</span>].<span class=\"pl-s1\">input</span>], <span class=\"pl-s1\">model</span>.<span class=\"pl-s1\">layers</span>[<span class=\"pl-c1\">1</span>].<span class=\"pl-en\">output</span>(<span class=\"pl-s1\">train</span><span class=\"pl-c1\">=</span><span class=\"pl-c1\">False</span>), <span class=\"pl-s1\">allow_input_downcast</span><span class=\"pl-c1\">=</span><span class=\"pl-c1\">True</span>)\n<span class=\"pl-s1\">activations</span> <span class=\"pl-c1\">=</span> <span class=\"pl-en\">get_activations</span>(<span class=\"pl-v\">X_batch</span>) <span class=\"pl-c\"># same result as above</span></pre></div>\n<p dir=\"auto\">Note: also untested.</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "Recurrent Models with sequences of mixed length",
"output": "<p dir=\"auto\">In addition, here are a few quick examples of solutions to your problem:</p>\n<h3 dir=\"auto\">Zero-padding</h3>\n<div class=\"highlight highlight-source-python notranslate position-relative overflow-auto\" dir=\"auto\" data-snippet-clipboard-copy-content=\"X = keras.preprocessing.sequence.pad_sequences(sequences, maxlen=100)\nmodel.fit(X, y, batch_size=32, nb_epoch=10)\"><pre class=\"notranslate\"><span class=\"pl-v\">X</span> <span class=\"pl-c1\">=</span> <span class=\"pl-s1\">keras</span>.<span class=\"pl-s1\">preprocessing</span>.<span class=\"pl-s1\">sequence</span>.<span class=\"pl-en\">pad_sequences</span>(<span class=\"pl-s1\">sequences</span>, <span class=\"pl-s1\">maxlen</span><span class=\"pl-c1\">=</span><span class=\"pl-c1\">100</span>)\n<span class=\"pl-s1\">model</span>.<span class=\"pl-en\">fit</span>(<span class=\"pl-v\">X</span>, <span class=\"pl-s1\">y</span>, <span class=\"pl-s1\">batch_size</span><span class=\"pl-c1\">=</span><span class=\"pl-c1\">32</span>, <span class=\"pl-s1\">nb_epoch</span><span class=\"pl-c1\">=</span><span class=\"pl-c1\">10</span>)</pre></div>\n<h3 dir=\"auto\">Batches of size 1</h3>\n<div class=\"highlight highlight-source-python notranslate position-relative overflow-auto\" dir=\"auto\" data-snippet-clipboard-copy-content=\"for seq, label in zip(sequences, y):\n model.train(np.array([seq]), [label])\"><pre class=\"notranslate\"><span class=\"pl-k\">for</span> <span class=\"pl-s1\">seq</span>, <span class=\"pl-s1\">label</span> <span class=\"pl-c1\">in</span> <span class=\"pl-en\">zip</span>(<span class=\"pl-s1\">sequences</span>, <span class=\"pl-s1\">y</span>):\n <span class=\"pl-s1\">model</span>.<span class=\"pl-en\">train</span>(<span class=\"pl-s1\">np</span>.<span class=\"pl-en\">array</span>([<span class=\"pl-s1\">seq</span>]), [<span class=\"pl-s1\">label</span>])</pre></div>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "New datasets and application examples",
"output": "<p dir=\"auto\">As <a class=\"user-mention notranslate\" data-hovercard-type=\"user\" data-hovercard-url=\"/users/morenoh149/hovercard\" data-octo-click=\"hovercard-link-click\" data-octo-dimensions=\"link_type:self\" href=\"https://github.com/morenoh149\">@morenoh149</a> says, this issue is resolved by the <code class=\"notranslate\">CONTRIBUTING.md</code>.</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "How to use the pretrained model such as imagenet-vgg-f?",
"output": "<p dir=\"auto\">Any progress on this?</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "How can I compute a meaningful clip_norm threshold for my particular Network?",
"output": "<p dir=\"auto\">Plot some gradient histograms of the concatenated values all variables if you want to set this threshold for all gradients at once; or for each variable separately. That will give you a good idea how to set the clipping parameter.</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "How to save and load model?",
"output": "<p dir=\"auto\">So far I've been using cPickle to save/load models, but I'm looking into writing a fast &amp; compact save/load function based on HDF5. Not very high on the priority list though...</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "Model training diverges after some level ?",
"output": "<p dir=\"auto\">yeah perfectly rational then it should be corrected.</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "SimpleRNN Error",
"output": "<p dir=\"auto\">Awesome, thanks for the update.</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "preprocessing utils would greatly benefit from sklearn",
"output": "<div class=\"snippet-clipboard-content notranslate position-relative overflow-auto\" data-snippet-clipboard-copy-content=\"from sklearn.feature_extraction.text import CountVectorizer\nvectorizer = CountVectorizer(max_features=max_words,binary=True)\nvectorizer.fit(phrases)\nsparse = vectorizer.transform(phrases)\nX=[row.indices for row in sparse] \n# ^^ sparse matrix to sequences\"><pre class=\"notranslate\"><code class=\"notranslate\">from sklearn.feature_extraction.text import CountVectorizer\nvectorizer = CountVectorizer(max_features=max_words,binary=True)\nvectorizer.fit(phrases)\nsparse = vectorizer.transform(phrases)\nX=[row.indices for row in sparse] \n# ^^ sparse matrix to sequences\n</code></pre></div>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "Does this support validation set while training to see generalization bound of the model?",
"output": "<p dir=\"auto\">Sure! To monitor train loss and test loss after each batch, you could use:</p>\n<div class=\"highlight highlight-source-python notranslate position-relative overflow-auto\" dir=\"auto\" data-snippet-clipboard-copy-content=\"for e in range(nb_epoch):\n for batch_index in range(0, len(X_train)/batch_size+1):\n batch = range(batch_index*batch_size, min(len(X_train), (batch_index+1)*batch_size))\n train_loss = model.train(X_train[batch], y_train[batch])\n test_loss = model.evaluate(X_test, y_test, batch_size=batch_size)\"><pre class=\"notranslate\"><span class=\"pl-k\">for</span> <span class=\"pl-s1\">e</span> <span class=\"pl-c1\">in</span> <span class=\"pl-en\">range</span>(<span class=\"pl-s1\">nb_epoch</span>):\n <span class=\"pl-k\">for</span> <span class=\"pl-s1\">batch_index</span> <span class=\"pl-c1\">in</span> <span class=\"pl-en\">range</span>(<span class=\"pl-c1\">0</span>, <span class=\"pl-en\">len</span>(<span class=\"pl-v\">X_train</span>)<span class=\"pl-c1\">/</span><span class=\"pl-s1\">batch_size</span><span class=\"pl-c1\">+</span><span class=\"pl-c1\">1</span>):\n <span class=\"pl-s1\">batch</span> <span class=\"pl-c1\">=</span> <span class=\"pl-en\">range</span>(<span class=\"pl-s1\">batch_index</span><span class=\"pl-c1\">*</span><span class=\"pl-s1\">batch_size</span>, <span class=\"pl-en\">min</span>(<span class=\"pl-en\">len</span>(<span class=\"pl-v\">X_train</span>), (<span class=\"pl-s1\">batch_index</span><span class=\"pl-c1\">+</span><span class=\"pl-c1\">1</span>)<span class=\"pl-c1\">*</span><span class=\"pl-s1\">batch_size</span>))\n <span class=\"pl-s1\">train_loss</span> <span class=\"pl-c1\">=</span> <span class=\"pl-s1\">model</span>.<span class=\"pl-en\">train</span>(<span class=\"pl-v\">X_train</span>[<span class=\"pl-s1\">batch</span>], <span class=\"pl-s1\">y_train</span>[<span class=\"pl-s1\">batch</span>])\n <span class=\"pl-s1\">test_loss</span> <span class=\"pl-c1\">=</span> <span class=\"pl-s1\">model</span>.<span class=\"pl-en\">evaluate</span>(<span class=\"pl-v\">X_test</span>, <span class=\"pl-s1\">y_test</span>, <span class=\"pl-s1\">batch_size</span><span class=\"pl-c1\">=</span><span class=\"pl-s1\">batch_size</span>)</pre></div>\n<p dir=\"auto\">Or alternatively, monitor train loss and test loss at each epoch (more practical):</p>\n<div class=\"highlight highlight-source-python notranslate position-relative overflow-auto\" dir=\"auto\" data-snippet-clipboard-copy-content=\"for e in range(nb_epoch):\n train_loss = model.fit(X_train, y_train, batch_size=batch_size, nb_epoch=1)\n test_loss = model.evaluate(X_test, y_test, batch_size=batch_size)\"><pre class=\"notranslate\"><span class=\"pl-k\">for</span> <span class=\"pl-s1\">e</span> <span class=\"pl-c1\">in</span> <span class=\"pl-en\">range</span>(<span class=\"pl-s1\">nb_epoch</span>):\n <span class=\"pl-s1\">train_loss</span> <span class=\"pl-c1\">=</span> <span class=\"pl-s1\">model</span>.<span class=\"pl-en\">fit</span>(<span class=\"pl-v\">X_train</span>, <span class=\"pl-s1\">y_train</span>, <span class=\"pl-s1\">batch_size</span><span class=\"pl-c1\">=</span><span class=\"pl-s1\">batch_size</span>, <span class=\"pl-s1\">nb_epoch</span><span class=\"pl-c1\">=</span><span class=\"pl-c1\">1</span>)\n <span class=\"pl-s1\">test_loss</span> <span class=\"pl-c1\">=</span> <span class=\"pl-s1\">model</span>.<span class=\"pl-en\">evaluate</span>(<span class=\"pl-v\">X_test</span>, <span class=\"pl-s1\">y_test</span>, <span class=\"pl-s1\">batch_size</span><span class=\"pl-c1\">=</span><span class=\"pl-s1\">batch_size</span>)</pre></div>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "Which version of Python is used?",
"output": "<p dir=\"auto\">Keras was written with Python 2.7 in mind. But I would expect 2to3 to be enough for use with Python 3.</p>\n<p dir=\"auto\">I'll update the README.md with the Python version.</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "Is there an example to show feature extraction?",
"output": "<p dir=\"auto\">how can I save the features of the last fully connected layer (before the softmak layer) of the trained model?</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "Preserving order - Shuffling in flow of ImageDataGenerator",
"output": "<p dir=\"auto\">Awesome thanks!</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "Place documentation for objects in docstrings in source code",
"output": "<p dir=\"auto\">Since documentation seems to be coming up frequently in the issues lately, maybe this would be worth thinking about, <a class=\"user-mention notranslate\" data-hovercard-type=\"user\" data-hovercard-url=\"/users/fchollet/hovercard\" data-octo-click=\"hovercard-link-click\" data-octo-dimensions=\"link_type:self\" href=\"https://github.com/fchollet\">@fchollet</a> ? If you think it is a path you want to take, let me know and I will start a branch for the transistion.</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "About the dropout",
"output": "<p dir=\"auto\">Yes, for all intents and purposes you can consider that Dropout is per-sample. The Dropout mask is a tensor that is random over every dimension.</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "example code error(keras/examples/reuters_mlp.py)",
"output": "<p dir=\"auto\">Thanks, this is fixed.</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "bidirectional rnn",
"output": "<p dir=\"auto\">Thanks, I have a created a bidirectional LSTM with mlp layers for anyone who might be interested to use<br>\n<a href=\"https://github.com/hycis/bidirectional_RNN.git\">https://github.com/hycis/bidirectional_RNN.git</a><br>\nI will close the issue.</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "Fix model.fit() multiple output",
"output": "<p dir=\"auto\">Hi <a class=\"user-mention notranslate\" data-hovercard-type=\"user\" data-hovercard-url=\"/users/fchollet/hovercard\" data-octo-click=\"hovercard-link-click\" data-octo-dimensions=\"link_type:self\" href=\"https://github.com/fchollet\">@fchollet</a> and <a class=\"user-mention notranslate\" data-hovercard-type=\"user\" data-hovercard-url=\"/users/tristandeleu/hovercard\" data-octo-click=\"hovercard-link-click\" data-octo-dimensions=\"link_type:self\" href=\"https://github.com/tristandeleu\">@tristandeleu</a>,<br>\nTo copy a list, you can just do <code class=\"notranslate\">self.callbacks = list(callbacks)</code>. No need for the list comprehension here.<br>\nHowever, it would be better to use <code class=\"notranslate\">None</code> instead. See here: <a href=\"http://docs.python-guide.org/en/latest/writing/gotchas/#mutable-default-arguments\" rel=\"nofollow\">http://docs.python-guide.org/en/latest/writing/gotchas/#mutable-default-arguments</a></p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "Feedback from callbacks",
"output": "<p dir=\"auto\">Closing this, discussion in <a class=\"issue-link js-issue-link\" data-error-text=\"Failed to load title\" data-id=\"87731759\" data-permission-text=\"Title is private\" data-url=\"https://github.com/keras-team/keras/issues/219\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"/keras-team/keras/pull/219/hovercard\" href=\"https://github.com/keras-team/keras/pull/219\">#219</a></p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "Add a data loading example for RNNs",
"output": "<p dir=\"auto\">Here's what you would do as input to a LSTM:</p>\n<div class=\"highlight highlight-source-python notranslate position-relative overflow-auto\" dir=\"auto\" data-snippet-clipboard-copy-content=\"import numpy as np\n# 2 samples with 3 timesteps and 1 dimension\nsamples = [[[1], [0], [1]], [[1], [1], [0]]]\nX = np.array(samples)\nprint X.shape\n\n# alternatively\nsamples = [[1, 0, 1], [1, 1, 0]]\nX = np.array(samples)\nX = np.reshape(X, (len(samples), 3, 1))\nprint X.shape\"><pre class=\"notranslate\"><span class=\"pl-k\">import</span> <span class=\"pl-s1\">numpy</span> <span class=\"pl-k\">as</span> <span class=\"pl-s1\">np</span>\n<span class=\"pl-c\"># 2 samples with 3 timesteps and 1 dimension</span>\n<span class=\"pl-s1\">samples</span> <span class=\"pl-c1\">=</span> [[[<span class=\"pl-c1\">1</span>], [<span class=\"pl-c1\">0</span>], [<span class=\"pl-c1\">1</span>]], [[<span class=\"pl-c1\">1</span>], [<span class=\"pl-c1\">1</span>], [<span class=\"pl-c1\">0</span>]]]\n<span class=\"pl-v\">X</span> <span class=\"pl-c1\">=</span> <span class=\"pl-s1\">np</span>.<span class=\"pl-en\">array</span>(<span class=\"pl-s1\">samples</span>)\n<span class=\"pl-k\">print</span> <span class=\"pl-v\">X</span>.<span class=\"pl-s1\">shape</span>\n\n<span class=\"pl-c\"># alternatively</span>\n<span class=\"pl-s1\">samples</span> <span class=\"pl-c1\">=</span> [[<span class=\"pl-c1\">1</span>, <span class=\"pl-c1\">0</span>, <span class=\"pl-c1\">1</span>], [<span class=\"pl-c1\">1</span>, <span class=\"pl-c1\">1</span>, <span class=\"pl-c1\">0</span>]]\n<span class=\"pl-v\">X</span> <span class=\"pl-c1\">=</span> <span class=\"pl-s1\">np</span>.<span class=\"pl-en\">array</span>(<span class=\"pl-s1\">samples</span>)\n<span class=\"pl-v\">X</span> <span class=\"pl-c1\">=</span> <span class=\"pl-s1\">np</span>.<span class=\"pl-en\">reshape</span>(<span class=\"pl-v\">X</span>, (<span class=\"pl-en\">len</span>(<span class=\"pl-s1\">samples</span>), <span class=\"pl-c1\">3</span>, <span class=\"pl-c1\">1</span>))\n<span class=\"pl-k\">print</span> <span class=\"pl-v\">X</span>.<span class=\"pl-s1\">shape</span></pre></div>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "Compile error while using Embedding and Merge together",
"output": "<p dir=\"auto\">Having the same issue, I'm using <code class=\"notranslate\">Graph</code> to address this but it isn't very clean.</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "Implement stack-based RNN layers",
"output": "<p dir=\"auto\">It would quite similar to the existing layers in <code class=\"notranslate\">layers/recurrent.py</code>. Instead of a single <code class=\"notranslate\">step</code> function you'd have 3 helper functions, each for one of the 3 equations of paragraph 3.1 in the paper (separation is cleaner).</p>\n<p dir=\"auto\">You maintain a stack matrix (for performance reasons you might want to have it be of fixed-size, with a separate integer parameter indicating the position of the \"latest\" vector in the matrix). Iterating over the matrix would be done with Theano <code class=\"notranslate\">scan</code>.</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "error while using save_weights()",
"output": "<p dir=\"auto\">Thank you, tdhd.</p>\n<p dir=\"auto\">I will try to upgrade my package and specify <code class=\"notranslate\">overwrite=True</code>. :)</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "How to specify the number of CPU cores used?",
"output": "<p dir=\"auto\">In a cursory search, I could find nothing in Theano about setting the core count, and I was not expecting to.</p>\n<p dir=\"auto\">Theano, and deep learning, is all about parallelism: do everything you can do in parallel. Limiting yourself to a subset of the things you can do in parallel, intentionally, goes against everything we hold dear. Personally, I'd give a finger and three toes to be able to run everything I wanted to in parallel; unlimited money can buy replacement toes but not additional time.</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "NaN when accuracy reaches 1 with logistic loss",
"output": "<p dir=\"auto\">I'm getting this behaviour just by running the mnist_cnn.py example.</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "Add a character-based RNN example.",
"output": "<p dir=\"auto\">Here is a code sample. This code divide a long character string to chunks of 200 characters, and it learns a model for the next character given the previous ones. At the end it inefficiently generates 128 sentences, each of 200 chars.</p>\n<div class=\"snippet-clipboard-content notranslate position-relative overflow-auto\" data-snippet-clipboard-copy-content=\"import numpy\nimport sys\nsys.path.append('/home/USER/python/keras/')\n\n# Obtain the corpus of character sequence to train from.\n# Here it is just the sequence 123456789 repeated 100000 times.\nx = &quot;123456789&quot;*100000\n\n# Construct a dictionary, and the reverse dictionary for the participating chars.\n# '*&quot; is a 'start-sequence' character.\ndct = ['*'] + list(set(x))\nmax_features = len(dct)\nrev_dct = [(j, i) for i, j in enumerate(dct)]\nrev_dct = dict(rev_dct)\n\n# Convert the characters to their dct indexes. \nx = [rev_dct[ch] for ch in x]\n\n# Divide the corpuse to substrings of length 200.\nn_timestamps = 200\nx = x[:len(x)- len(x) % n_timestamps]\nx = numpy.array(x, dtype='int32').reshape((-1, n_timestamps))\n\n# Generate input and ouput per substring, as an indicator matrix.\ny = numpy.zeros((x.shape[0], x.shape[1], max_features), dtype='int32')\nfor i in numpy.arange(x.shape[0]):\n for j in numpy.arange(x.shape[1]):\n y[i, j, x[i, j]] = 1 \n\n# Shift-1 the input sequences to the right, and make them start with '*'.\nx = numpy.roll(y, 1, axis=1)\nx[:, 0, :] = 0\nx[:, 0, 0] = 1\n\n# Build the model.\nfrom keras.models import Sequential\nfrom keras.layers.core import TimeDistributedDense, Dropout, Activation\nfrom keras.layers.recurrent import LSTM\n\nmodel = Sequential()\nmodel.add(LSTM(max_features, 256, return_sequences=True))\nmodel.add(LSTM(256, 256, return_sequences=True))\nmodel.add(LSTM(256, 256, return_sequences=True))\nmodel.add(TimeDistributedDense(256, max_features))\nmodel.add(Activation('time_distributed_softmax'))\n\nmodel.compile(loss='categorical_crossentropy', optimizer='rmsprop')\n\nmodel.fit(x, y, batch_size=64, nb_epoch=50)\n\n# Sample 128 sentences (200 characters each) from model.\n\ndef mnrnd(probs):\n rnd = numpy.random.random()\n for i in xrange(len(probs)):\n rnd -= probs[i]\n if rnd &lt;= 0:\n return i\n return i\n\nsentences = numpy.zeros((128, n_timestamps+1, max_features))\nsentences[:, 0, 0] = 1\n\n# Start sampling char-sequences. At each iteration i the probability over\n# the i-th character of each sequences is computed. \nfor i in numpy.arange(n_timestamps):\n probs = model.predict_proba(sentences)[:,i,:]\n # Go over each sequence and sample the i-th character.\n for j in numpy.arange(len(sentences)):\n sentences[j, i+1, mnrnd(probs[j, :])] = 1\nsentences = [sentence[1:].nonzero()[1] for sentence in sentences]\n\n# Convert to readable text.\ntext = []\nfor sentence in sentences:\n text.append(''.join([dct[word] for word in sentence]))\"><pre class=\"notranslate\"><code class=\"notranslate\">import numpy\nimport sys\nsys.path.append('/home/USER/python/keras/')\n\n# Obtain the corpus of character sequence to train from.\n# Here it is just the sequence 123456789 repeated 100000 times.\nx = \"123456789\"*100000\n\n# Construct a dictionary, and the reverse dictionary for the participating chars.\n# '*\" is a 'start-sequence' character.\ndct = ['*'] + list(set(x))\nmax_features = len(dct)\nrev_dct = [(j, i) for i, j in enumerate(dct)]\nrev_dct = dict(rev_dct)\n\n# Convert the characters to their dct indexes. \nx = [rev_dct[ch] for ch in x]\n\n# Divide the corpuse to substrings of length 200.\nn_timestamps = 200\nx = x[:len(x)- len(x) % n_timestamps]\nx = numpy.array(x, dtype='int32').reshape((-1, n_timestamps))\n\n# Generate input and ouput per substring, as an indicator matrix.\ny = numpy.zeros((x.shape[0], x.shape[1], max_features), dtype='int32')\nfor i in numpy.arange(x.shape[0]):\n for j in numpy.arange(x.shape[1]):\n y[i, j, x[i, j]] = 1 \n\n# Shift-1 the input sequences to the right, and make them start with '*'.\nx = numpy.roll(y, 1, axis=1)\nx[:, 0, :] = 0\nx[:, 0, 0] = 1\n\n# Build the model.\nfrom keras.models import Sequential\nfrom keras.layers.core import TimeDistributedDense, Dropout, Activation\nfrom keras.layers.recurrent import LSTM\n\nmodel = Sequential()\nmodel.add(LSTM(max_features, 256, return_sequences=True))\nmodel.add(LSTM(256, 256, return_sequences=True))\nmodel.add(LSTM(256, 256, return_sequences=True))\nmodel.add(TimeDistributedDense(256, max_features))\nmodel.add(Activation('time_distributed_softmax'))\n\nmodel.compile(loss='categorical_crossentropy', optimizer='rmsprop')\n\nmodel.fit(x, y, batch_size=64, nb_epoch=50)\n\n# Sample 128 sentences (200 characters each) from model.\n\ndef mnrnd(probs):\n rnd = numpy.random.random()\n for i in xrange(len(probs)):\n rnd -= probs[i]\n if rnd &lt;= 0:\n return i\n return i\n\nsentences = numpy.zeros((128, n_timestamps+1, max_features))\nsentences[:, 0, 0] = 1\n\n# Start sampling char-sequences. At each iteration i the probability over\n# the i-th character of each sequences is computed. \nfor i in numpy.arange(n_timestamps):\n probs = model.predict_proba(sentences)[:,i,:]\n # Go over each sequence and sample the i-th character.\n for j in numpy.arange(len(sentences)):\n sentences[j, i+1, mnrnd(probs[j, :])] = 1\nsentences = [sentence[1:].nonzero()[1] for sentence in sentences]\n\n# Convert to readable text.\ntext = []\nfor sentence in sentences:\n text.append(''.join([dct[word] for word in sentence]))\n</code></pre></div>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "Sequence of floats as an input to LSTM",
"output": "<p dir=\"auto\">So where's time in your problem? You're talking about 'sequences', which implies a time axis.</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "Cannot pass index array to categorical_crossentropy objective",
"output": "<p dir=\"auto\">I think this is because Sequential and Graph obtain use get_output() of the last layer to get the shape/type of the variable. If I understand correctly, Theano picks which variation of categorical_crossentropy you're using at compile time based on the predeclared shapes.</p>\n<p dir=\"auto\">This could be alleviated by having an option which lets you explicitly declare the shape/type of Sequential.y, but it would be messier for Graph.</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "test_models.py fails",
"output": "<p dir=\"auto\">Thank you for pointing this out! It's fixed now. A screw-up on my part.</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "Host documentation on readthedocs.org",
"output": "<p dir=\"auto\">Hi,<br>\nIt works with this docker image : <a href=\"https://hub.docker.com/r/fiware/md2pdf/\" rel=\"nofollow\">https://hub.docker.com/r/fiware/md2pdf/</a>.<br>\nI generated the documentation, with Keras==2.0.8.<br>\nPDF : <a href=\"https://drive.google.com/open?id=0BxaH5r4iW8cWeEdEMG1lQ0lIMFk\" rel=\"nofollow\">https://drive.google.com/open?id=0BxaH5r4iW8cWeEdEMG1lQ0lIMFk</a> !</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "Can I do joint layer training in Keras?",
"output": "<p dir=\"auto\">Sure, you'd need to use a Merge layer. Here's the corresponding doc, including a code snippet similar to what you want to do: <a href=\"http://keras.io/layers/core/#merge\" rel=\"nofollow\">http://keras.io/layers/core/#merge</a></p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "Using text.tokenizer for a CSV File",
"output": "<p dir=\"auto\">Note that the Keras tokenizer only generates 1-grams, not n-grams. This might not be what you want.</p>\n<p dir=\"auto\">It is possible to pass a CSV file to a method such as <code class=\"notranslate\">tokenizer.fit_on_texts(csv)</code> or <code class=\"notranslate\">tokenizer.texts_to_matrix(csv)</code> (the tokenizer can handle any iterator over strings). The exact configuration would depend on your CSV file...</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "what if my data can note load into memory?",
"output": "<p dir=\"auto\"><a class=\"user-mention notranslate\" data-hovercard-type=\"user\" data-hovercard-url=\"/users/tleeuwenburg/hovercard\" data-octo-click=\"hovercard-link-click\" data-octo-dimensions=\"link_type:self\" href=\"https://github.com/tleeuwenburg\">@tleeuwenburg</a> I found that caffe use lmdb. i think i can read a batch from lmdb can train the model batch by batch. i will try it out. thank you</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "Add h5py to the requires list",
"output": "<p dir=\"auto\">I want to load vgg16 weights in keras but I got \"<code class=\"notranslate\">load_weights</code> requires h5py\"!! I installed h5py and imported that. What should I do?</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "Imbalanced classes",
"output": "<p dir=\"auto\">Closing this, discussion in <a class=\"issue-link js-issue-link\" data-error-text=\"Failed to load title\" data-id=\"84635557\" data-permission-text=\"Title is private\" data-url=\"https://github.com/keras-team/keras/issues/188\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"/keras-team/keras/pull/188/hovercard\" href=\"https://github.com/keras-team/keras/pull/188\">#188</a></p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "Masks for RNNs",
"output": "<p dir=\"auto\"><a class=\"user-mention notranslate\" data-hovercard-type=\"user\" data-hovercard-url=\"/users/mbchang/hovercard\" data-octo-click=\"hovercard-link-click\" data-octo-dimensions=\"link_type:self\" href=\"https://github.com/mbchang\">@mbchang</a> in general after this discussion Keras ended up moving to a separate explicitly sent mask after all, rather than a special masking value.</p>\n<p dir=\"auto\">Embeddings take a <code class=\"notranslate\">mask_zero</code> boolean parameter which can generate that mask automatically anywhere there's a 0 on the input.</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "Convolution2D does not allow for weight regularization",
"output": "<p dir=\"auto\">Yes, adding it should not be an issue. I'll put it on the backlog.</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "Merge two CNNs has the Error:\"TypeError: 'int' object is not iterable\"",
"output": "<p dir=\"auto\">This issue has been automatically marked as stale because it has not had recent activity. It will be closed after 30 days if no further activity occurs, but feel free to re-open a closed issue if needed.</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "[help] Constructing a synced sequence input and output RNN ",
"output": "<p dir=\"auto\">No, TimeDistrubutedDense is exactly as it sounds, simply a Dense layer that feed all of its inputs forward in time; this distinction between Dense and TimeDistributedDense is simply that a Dense layer expects 2D input (batch_size, sample_size) whereas TimeDistributedDense expects 3D input (Batch_size, time_steps, sample_size). This should be used in conjunction with TimeDistributedSoftmax for the same reason (2D vs. 3D expected input).</p>\n<p dir=\"auto\">There is a GRU layer, however: <a href=\"https://github.com/fchollet/keras/blob/master/keras/layers/recurrent.py#L156-253\">https://github.com/fchollet/keras/blob/master/keras/layers/recurrent.py#L156-253</a></p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "Reinforcement Learning",
"output": "<p dir=\"auto\">This issue has been automatically marked as stale because it has not had recent activity. It will be closed after 30 days if no further activity occurs, but feel free to re-open a closed issue if needed.</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "Loading in data",
"output": "<p dir=\"auto\">this worked. thanks for the info!</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "Unable to Install on Mac OS",
"output": "<p dir=\"auto\">I installed hdf5 from source, installed h5py (pip install h5py).<br>\nreinstalled python, numpy, scipy...etc. and then it finally worked.</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "Predicting sequence with recurrent output layer does not work",
"output": "<p dir=\"auto\">That was quick, thanks!</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "A bit of a problem with convolutional layers",
"output": "<p dir=\"auto\">got the point so have to change that layer to following (as 256*0.25=64)</p>\n<div class=\"snippet-clipboard-content notranslate position-relative overflow-auto\" data-snippet-clipboard-copy-content=\"model.add(Flatten())\nmodel.add(Dense(64*64*64, 512))\nmodel.add(Activation('relu'))\nmodel.add(Dropout(0.5))\"><pre class=\"notranslate\"><code class=\"notranslate\">model.add(Flatten())\nmodel.add(Dense(64*64*64, 512))\nmodel.add(Activation('relu'))\nmodel.add(Dropout(0.5))\n</code></pre></div>\n<p dir=\"auto\">thanks pranv</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "The validation loss and accuracy",
"output": "<p dir=\"auto\"><a href=\"https://en.wikipedia.org/wiki/Overfitting\" rel=\"nofollow\">https://en.wikipedia.org/wiki/Overfitting</a></p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "Stacking multiple LSTM layers yields an error",
"output": "<p dir=\"auto\">A LSTM layer, as per the docs, will return the last vector by default rather than the entire sequence. In order to return the entire sequence (which is necessary to be able to stack LSTM), use the constructor argument <code class=\"notranslate\">return_sequences=True</code>.</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "Error when using TimeDistributedDense",
"output": "<p dir=\"auto\">Thanks -- I'll make sure to change that.</p>\n<p dir=\"auto\">Along that line, though, there's a snippet in the docs that made me shy away from categorical crossentropy:</p>\n<blockquote>\n<p dir=\"auto\">Note: using this objective requires that your labels are binary arrays of shape (nb_samples, nb_classes).</p>\n</blockquote>\n<p dir=\"auto\">I interpreted that as meaning it wouldn't work with the (nb_samples, nb_timeseps, nb_classes) output from TimeDistributedDense -- it seems from messing around with it, and from other code snippets I've seen, that it actually works just fine.</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "dimension mismatch?",
"output": "<ul dir=\"auto\">\n<li>Explicitly specify the output dimension in LSTM (here you want to set it to input_dim / 2)</li>\n<li>The last Reshape is trying to reshape a vector input of size output_dim into a matrix of shape (output_dim, output_dim). Not going to work... I don't know what you are trying to do, but I would assume that Reshape layer is probably useless.</li>\n</ul>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "Merge fails on GPU but passes on CPU",
"output": "<p dir=\"auto\">Correct, it was a Theano bug with T.Concatenate(). It has been fixed in the dev version of Theano a few weeks ago, and it will be on Pypi soon. Not worth fixing on the Keras side...</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "Max margin objectives",
"output": "<p dir=\"auto\">I have a dataset with positive and negative samples. how can I train and test with max margin objective in keras? the dataset is as following:</p>\n<blockquote>\n<p dir=\"auto\">sentA, CategoryA ,+<br>\nsentA, CategoryB,+<br>\nsentA, CategoryC,-</p>\n</blockquote>\n<p dir=\"auto\">and so on<br>\nDo i need to update per sample?<br>\nThanks</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "Thoughts on adding a scikit-learn compatible API?",
"output": "<p dir=\"auto\">There is a PR for this functionality here:<br>\n<a href=\"https://github.com/fchollet/keras/pull/1225/files\">https://github.com/fchollet/keras/pull/1225/files</a></p>\n<p dir=\"auto\">It wasn't merged because it looks to me like a big hack. If you think of a<br>\ncleaner, simpler way to do it, please submit a new PR.</p>\n<p dir=\"auto\">On 11 February 2016 at 11:52, Gustavo De Mari Pereira &lt;<br>\n<a href=\"mailto:notifications@github.com\">notifications@github.com</a>&gt; wrote:</p>\n<blockquote>\n<p dir=\"auto\">+1 on <a class=\"user-mention notranslate\" data-hovercard-type=\"user\" data-hovercard-url=\"/users/mbatchkarov/hovercard\" data-octo-click=\"hovercard-link-click\" data-octo-dimensions=\"link_type:self\" href=\"https://github.com/mbatchkarov\">@mbatchkarov</a> <a href=\"https://github.com/mbatchkarov\">https://github.com/mbatchkarov</a> comment</p>\n<p dir=\"auto\">\u2014<br>\nReply to this email directly or view it on GitHub<br>\n<a class=\"issue-link js-issue-link\" data-error-text=\"Failed to load title\" data-id=\"79999123\" data-permission-text=\"Title is private\" data-url=\"https://github.com/keras-team/keras/issues/149\" data-hovercard-type=\"issue\" data-hovercard-url=\"/keras-team/keras/issues/149/hovercard?comment_id=183035209&amp;comment_type=issue_comment\" href=\"https://github.com/keras-team/keras/issues/149#issuecomment-183035209\">#149 (comment)</a>.</p>\n</blockquote>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "Multiple Inputs",
"output": "<p dir=\"auto\">The code snippet example in the doc page I linked provides all the info you need.</p>\n<p dir=\"auto\">You will train your model with list of inputs:</p>\n<div class=\"highlight highlight-source-python notranslate position-relative overflow-auto\" dir=\"auto\" data-snippet-clipboard-copy-content=\"model.fit([X_CNN1, X_CNN2], y) \"><pre class=\"notranslate\"><span class=\"pl-s1\">model</span>.<span class=\"pl-en\">fit</span>([<span class=\"pl-v\">X_CNN1</span>, <span class=\"pl-v\">X_CNN2</span>], <span class=\"pl-s1\">y</span>) </pre></div>\n<p dir=\"auto\">Which will feed at time t <code class=\"notranslate\">X_CNN1[t]</code> to CNN1 and <code class=\"notranslate\">X_CNN2[t]</code> to CNN2.</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "Variable size sequence RNN input",
"output": "<p dir=\"auto\"><a class=\"user-mention notranslate\" data-hovercard-type=\"user\" data-hovercard-url=\"/users/ssamot/hovercard\" data-octo-click=\"hovercard-link-click\" data-octo-dimensions=\"link_type:self\" href=\"https://github.com/ssamot\">@ssamot</a> Will you reference the other comment for everyone else's convenience? Thanks!</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "Feeding training data in batches",
"output": "<p dir=\"auto\"><a class=\"user-mention notranslate\" data-hovercard-type=\"user\" data-hovercard-url=\"/users/fchollet/hovercard\" data-octo-click=\"hovercard-link-click\" data-octo-dimensions=\"link_type:self\" href=\"https://github.com/fchollet\">@fchollet</a> You just mentioned that passing <code class=\"notranslate\">nb_epoch=1</code> to the function <code class=\"notranslate\">model.fit(X, y)</code> will resolve the memory issue, which makes me confused. Why? The training data (X, y) still needs to be loaded into memory previously at once, right ?</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "Python keras package returns an error about data dimensions",
"output": "<p dir=\"auto\">X and Y are indexable arrays. There are examples in /keras/keras/datasets/imdb.py in load_data().</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "Regularization is throwing exception",
"output": "<p dir=\"auto\">incorrectly used regularizer for constraint.</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "Out of memory error",
"output": "<p dir=\"auto\"><code class=\"notranslate\">Error allocating 3242496000 bytes of device memory (out of memory). Driver report 876146688 bytes free and 4294246400 bytes total</code></p>\n<p dir=\"auto\">This tells you everything you need to know. Your GPU does not have enough memory for this task.</p>\n<p dir=\"auto\">Things you can try:</p>\n<ul dir=\"auto\">\n<li>close applications that might be using your GPU (your GPU has 4.3GB of memory, you're trying to allocate 3.2GB which should fit in theory)</li>\n<li>reduce the batch size (how many samples are loaded on the GPU at a time)</li>\n<li>if the batch size is already small, it means your network simply doesn't fit in your GPU memory. In that case, reduce the network size.</li>\n</ul>\n<p dir=\"auto\">Alternative solutions...</p>\n<ul dir=\"auto\">\n<li>Run your code on CPU</li>\n<li>Get a bigger GPU</li>\n</ul>\n<blockquote>\n<p dir=\"auto\">Obviously a larger vocab means a larger embed layer.</p>\n</blockquote>\n<p dir=\"auto\">Yes.</p>\n<blockquote>\n<p dir=\"auto\">Do longer sentences mean a bigger network?</p>\n</blockquote>\n<p dir=\"auto\">No, the network size will be the same, but each sample will be larger therefore you will be using more memory to load each batch.</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "Parallel Model",
"output": "<p dir=\"auto\">Resolved by the Graph model.</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "60 secondes by epoch (cifar10_cnn.py example)",
"output": "<p dir=\"auto\">Yes, that's pretty much standard.</p>\n<p dir=\"auto\">If you want faster learning, you can try to augment the batch size, reduce the size of the network, or experiment with different optimizers (SGD should be fairly fast though, at least in time/epoch, not necessarily in convergence speed).</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "LSTM model compile time is very high",
"output": "<p dir=\"auto\">Can you delete your theano temp dir? The best would be if you could try to reproduce on another machine, e.g. Google Cloud or AWS.</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "[Question] Loading images on disk for training",
"output": "<p dir=\"auto\"><a class=\"user-mention notranslate\" data-hovercard-type=\"user\" data-hovercard-url=\"/users/davidlis/hovercard\" data-octo-click=\"hovercard-link-click\" data-octo-dimensions=\"link_type:self\" href=\"https://github.com/davidlis\">@davidlis</a> did you ever find examples of how to do this?</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "model.fit(shuffle=False) gives TypeError",
"output": "<p dir=\"auto\">Good catch : ) This is fixed now.</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "General questions",
"output": "<p dir=\"auto\">I am very interested in word/sentence/text embedding, so there will definitely be more to come in that area. In the mean time, I suggest you check out Gensim.</p>\n<p dir=\"auto\">Anything else planned: yes, I'm also interested in memory networks and their application to text understanding and question answering. I will most likely add Keras support for memory networks in the near future.</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "Issues loading sub-modules",
"output": "<p dir=\"auto\">Yes thanks!</p>",
"input": "",
"system": "you are the github assistant"
},
{
"instruction": "pooling size > stride",
"output": "<p dir=\"auto\">No idea about that. I think they were working on that also, but I wouldn't know since I always run my code on the GPU anyway :)</p>",
"input": "",
"system": "you are the github assistant"
}
]