poisson function
tf.keras.losses.poisson(y_true, y_pred)
Computes the Poisson loss between y_true and y_pred.
loss = y_pred - y_true * log(y_pred)
Standalone usage:
>>> y_true = np.random.randint(0, 2, size=(2, 3))
>>> y_pred = np.random.random(size=(2, 3))
>>> loss = tf.keras.losses.poisson(y_true, y_pred)
>>> assert loss.shape == (2,)
>>> y_pred = y_pred + 1e-7
>>> assert np.allclose(
... loss.numpy(), np.mean(y_pred - y_true * np.log(y_pred), axis=-1),
... atol=1e-5)
Arguments
y_true: Ground truth values. shape = [batch_size, d0, .. dN].
y_pred: The predicted values. shape = [batch_size, d0, .. dN].
Returns
Poisson loss value. shape = [batch_size, d0, .. dN-1].
Raises
InvalidArgumentError: If y_true and y_pred have incompatible shapes.
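To make the elementwise formula concrete, here is a small hand-checkable sketch with fixed inputs (the values are illustrative, not from the original docs):

>>> y_true = np.array([[1., 0.], [2., 3.]])
>>> y_pred = np.array([[1., 1.], [2., 3.]])
>>> loss = tf.keras.losses.poisson(y_true, y_pred)
>>> # The loss is the mean of y_pred - y_true * log(y_pred) over the last axis.
>>> expected = np.mean(y_pred - y_true * np.log(y_pred), axis=-1)
>>> assert np.allclose(loss.numpy(), expected, atol=1e-5)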
KLDivergence class
tf.keras.losses.KLDivergence(reduction="auto", name="kl_divergence")
Computes Kullback-Leibler divergence loss between y_true and y_pred.
loss = y_true * log(y_true / y_pred)
See: https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence
Standalone usage:
>>> y_true = [[0, 1], [0, 0]]
>>> y_pred = [[0.6, 0.4], [0.4, 0.6]]
>>> # Using 'auto'/'sum_over_batch_size' reduction type.
>>> kl = tf.keras.losses.KLDivergence()
>>> kl(y_true, y_pred).numpy()
0.458
>>> # Calling with 'sample_weight'.
>>> kl(y_true, y_pred, sample_weight=[0.8, 0.2]).numpy()
0.366
>>> # Using 'sum' reduction type.
>>> kl = tf.keras.losses.KLDivergence(
... reduction=tf.keras.losses.Reduction.SUM)
>>> kl(y_true, y_pred).numpy()
0.916
>>> # Using 'none' reduction type.
>>> kl = tf.keras.losses.KLDivergence(
... reduction=tf.keras.losses.Reduction.NONE)
>>> kl(y_true, y_pred).numpy()
array([0.916, -3.08e-06], dtype=float32)
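The three reduction types relate as expected: 'sum' adds up the per-sample values returned by 'none', and the default averages them. A quick sanity-check sketch (illustrative, reusing y_true and y_pred from above):

>>> per_sample = tf.keras.losses.KLDivergence(
...     reduction=tf.keras.losses.Reduction.NONE)(y_true, y_pred)
>>> summed = tf.keras.losses.KLDivergence(
...     reduction=tf.keras.losses.Reduction.SUM)(y_true, y_pred)
>>> assert np.isclose(summed.numpy(), per_sample.numpy().sum())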
Usage with the compile() API:
model.compile(optimizer='sgd', loss=tf.keras.losses.KLDivergence())
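For context, a minimal end-to-end sketch of training with this loss (the model and data below are illustrative placeholders, not from the original docs):

import numpy as np
import tensorflow as tf

# KL divergence compares probability distributions, so targets should be
# valid distributions (e.g. one-hot or soft labels) and the model output
# should use a softmax.
x = np.random.random((8, 4)).astype("float32")
y = tf.keras.utils.to_categorical(
    np.random.randint(0, 3, size=(8,)), num_classes=3)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(3, activation="softmax", input_shape=(4,)),
])
model.compile(optimizer="sgd", loss=tf.keras.losses.KLDivergence())
model.fit(x, y, epochs=1, verbose=0)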
kl_divergence function
tf.keras.losses.kl_divergence(y_true, y_pred)
Computes Kullback-Leibler divergence loss between y_true and y_pred.
loss = y_true * log(y_true / y_pred)
See: https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence
Standalone usage:
>>> y_true = np.random.randint(0, 2, size=(2, 3)).astype(np.float64)
>>> y_pred = np.random.random(size=(2, 3))
>>> loss = tf.keras.losses.kullback_leibler_divergence(y_true, y_pred)
>>> assert loss.shape == (2,)
>>> y_true = tf.keras.backend.clip(y_true, 1e-7, 1)
>>> y_pred = tf.keras.backend.clip(y_pred, 1e-7, 1)
>>> assert np.array_equal(
... loss.numpy(), np.sum(y_true * np.log(y_true / y_pred), axis=-1))
Arguments
y_true: Tensor of true targets.
y_pred: Tensor of predicted targets.
Returns
A Tensor with loss.
Raises
TypeError: If y_true cannot be cast to the dtype of y_pred.
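A hand-checkable sketch with fixed probability vectors (values chosen for illustration; the internal clipping is a no-op here since every entry already lies in [1e-7, 1]):

>>> y_true = np.array([[0.5, 0.5], [0.25, 0.75]])
>>> y_pred = np.array([[0.4, 0.6], [0.5, 0.5]])
>>> loss = tf.keras.losses.kl_divergence(y_true, y_pred)
>>> # Per-sample KL divergence, summed over the last axis.
>>> expected = np.sum(y_true * np.log(y_true / y_pred), axis=-1)
>>> assert np.allclose(loss.numpy(), expected, atol=1e-5)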
Backend utilities
clear_session function
tf.keras.backend.clear_session()
Resets all state generated by Keras.
Keras manages a global state, which it uses to implement the Functional model-building API and to uniquify autogenerated layer names.
If you are creating many models in a loop, this global state will consume an increasing amount of memory over time, and you may want to clear it. Calling clear_session() releases the global state: this helps avoid clutter from old models and layers, especially when memory is limited.
Example 1: calling clear_session() when creating models in a loop
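The code below sketches the pattern this heading describes: the first loop grows the global state with each iteration, while the second resets it up front so memory use stays flat.

for _ in range(100):
  # Without `clear_session()`, each iteration of this loop will
  # slightly increase the size of the global state managed by Keras.
  model = tf.keras.Sequential([
      tf.keras.layers.Dense(10) for _ in range(10)])

for _ in range(100):
  # With `clear_session()` called at the beginning, Keras starts
  # with a blank state at each iteration, and memory consumption
  # is constant over time.
  tf.keras.backend.clear_session()
  model = tf.keras.Sequential([
      tf.keras.layers.Dense(10) for _ in range(10)])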