model = ...  # Get model (Sequential, Functional Model, or Model subclass)
model.save('path/to/location')

Loading the model back:

from tensorflow import keras
model = keras.models.load_model('path/to/location')

Now, let's look at the details.
Setup

import numpy as np
import tensorflow as tf
from tensorflow import keras
Whole-model saving & loading

You can save an entire model to a single artifact. It will include:

The model's architecture/config
The model's weight values (which were learned during training)
The model's compilation information (if compile() was called)
The optimizer and its state, if any (this enables you to restart training where you left off)
APIs

model.save() or tf.keras.models.save_model()
tf.keras.models.load_model()
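The two save entry points write the same artifact; model.save() is simply the method form of the module-level function. A minimal sketch, assuming model is an already-built Keras model and the path is writable:

# Saving via the method or the module-level function produces the same result:
model.save("my_model")
tf.keras.models.save_model(model, "my_model")

# Loading restores the model from disk:
restored_model = tf.keras.models.load_model("my_model")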
There are two formats you can use to save an entire model to disk: the TensorFlow SavedModel format, and the older Keras H5 format. The recommended format is SavedModel. It is the default when you use model.save().

You can switch to the H5 format by:

Passing save_format='h5' to save().
Passing a filename that ends in .h5 or .keras to save(), as shown in the sketch below.
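A quick sketch of both options (hypothetical filenames; either call should produce a single H5 file rather than a SavedModel folder):

# Option 1: pass the format explicitly.
model.save("my_h5_model", save_format="h5")

# Option 2: let the .h5 extension select the format.
model.save("my_h5_model.h5")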
SavedModel format

SavedModel is the more comprehensive save format that saves the model architecture, weights, and the traced TensorFlow subgraphs of the call functions. This enables Keras to restore both built-in layers as well as custom objects.

Example:
def get_model():
    # Create a simple model.
    inputs = keras.Input(shape=(32,))
    outputs = keras.layers.Dense(1)(inputs)
    model = keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mean_squared_error")
    return model

model = get_model()

# Train the model.
test_input = np.random.random((128, 32))
test_target = np.random.random((128, 1))
model.fit(test_input, test_target)

# Calling `save('my_model')` creates a SavedModel folder `my_model`.
model.save("my_model")

# It can be used to reconstruct the model identically.
reconstructed_model = keras.models.load_model("my_model")

# Let's check:
np.testing.assert_allclose(
    model.predict(test_input), reconstructed_model.predict(test_input)
)

# The reconstructed model is already compiled and has retained the optimizer
# state, so training can resume:
reconstructed_model.fit(test_input, test_target)
4/4 [==============================] - 0s 833us/step - loss: 0.2464
<tensorflow.python.keras.callbacks.History at 0x1511b87d0>
What the SavedModel contains

Calling model.save('my_model') creates a folder named my_model, containing the following:

!ls my_model
assets  saved_model.pb  variables

The model architecture and training configuration (including the optimizer, losses, and metrics) are stored in saved_model.pb. The weights are saved in the variables/ directory.

For detailed information on the SavedModel format, see the SavedModel guide (The SavedModel format on disk).
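If you prefer to inspect the folder from Python rather than a shell, a small sketch (assuming the my_model folder saved above is in the current working directory):

import os

# Top-level contents of the SavedModel folder.
print(sorted(os.listdir("my_model")))  # e.g. ['assets', 'saved_model.pb', 'variables']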
How SavedModel handles custom objects

When saving the model and its layers, the SavedModel format stores the class name, call function, losses, and weights (and the config, if implemented). The call function defines the computation graph of the model/layer.

In the absence of the model/layer config, the call function is used to create a model that behaves like the original model and can be trained, evaluated, and used for inference.

Nevertheless, it is always good practice to define the get_config and from_config methods when writing a custom model or layer class. This allows you to easily update the computation later if needed. See the section about Custom objects for more information.
Example:

class CustomModel(keras.Model):
    def __init__(self, hidden_units):
        super(CustomModel, self).__init__()
        self.hidden_units = hidden_units
        self.dense_layers = [keras.layers.Dense(u) for u in hidden_units]

    def call(self, inputs):
        x = inputs
        for layer in self.dense_layers:
            x = layer(x)
        return x

    def get_config(self):
        return {"hidden_units": self.hidden_units}

    @classmethod