>>> y_true = [[0., 1.], [0., 0.]]
>>> y_pred = [[1., 1.], [0., 0.]]
>>> # Using 'none' reduction type.
>>> p = tf.keras.losses.Poisson(
...     reduction=tf.keras.losses.Reduction.NONE)
>>> p(y_true, y_pred).numpy()
array([0.999, 0.], dtype=float32)
Usage with the compile() API:
model.compile(optimizer='sgd', loss=tf.keras.losses.Poisson())
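The per-sample values under 'none' reduction can be reproduced by hand. A minimal NumPy sketch, not from the docs; the epsilon is an assumption mirroring the clipping Keras applies before taking the log:

import numpy as np

y_true = np.array([[0., 1.], [0., 0.]])
y_pred = np.array([[1., 1.], [0., 0.]])

# Poisson loss per element is y_pred - y_true * log(y_pred); with NONE
# reduction it is averaged over the last axis only, one value per sample.
eps = 1e-7  # guards log(0)
per_sample = np.mean(y_pred - y_true * np.log(y_pred + eps), axis=-1)
print(per_sample)  # approximately [0.999, 0.], as in the doctest above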
binary_crossentropy function
tf.keras.losses.binary_crossentropy(
    y_true, y_pred, from_logits=False, label_smoothing=0
)
Computes the binary crossentropy loss.
Standalone usage:
>>> y_true = [[0, 1], [0, 0]]
>>> y_pred = [[0.6, 0.4], [0.4, 0.6]]
>>> loss = tf.keras.losses.binary_crossentropy(y_true, y_pred)
>>> assert loss.shape == (2,)
>>> loss.numpy()
array([0.916 , 0.714], dtype=float32)
Arguments
y_true: Ground truth values. shape = [batch_size, d0, .. dN].
y_pred: The predicted values. shape = [batch_size, d0, .. dN].
from_logits: Whether y_pred is expected to be a logits tensor. By default, we assume that y_pred encodes a probability distribution.
label_smoothing: Float in [0, 1]. If > 0, smooth the labels by squeezing them towards 0.5. That is, use 1. - 0.5 * label_smoothing for the target class and 0.5 * label_smoothing for the non-target class.
Returns
Binary crossentropy loss value. shape = [batch_size, d0, .. dN-1].
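The numbers in the doctest can be recomputed directly, with the label_smoothing rule applied explicitly. A minimal NumPy sketch, not the library implementation; the epsilon clipping is an assumption mirroring Keras's internal guard against log(0):

import numpy as np

def bce(y_true, y_pred, label_smoothing=0.0, eps=1e-7):
    y_true = np.asarray(y_true, dtype=np.float64)
    y_pred = np.clip(np.asarray(y_pred, dtype=np.float64), eps, 1 - eps)
    # Squeeze labels towards 0.5: targets become 1 - 0.5 * ls,
    # non-targets become 0.5 * ls, exactly as described above.
    y_true = y_true * (1.0 - label_smoothing) + 0.5 * label_smoothing
    # Elementwise crossentropy, then mean over the last axis (dN).
    ce = -(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))
    return ce.mean(axis=-1)

print(bce([[0, 1], [0, 0]], [[0.6, 0.4], [0.4, 0.6]]))
# approximately [0.916, 0.714], matching the doctest output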
categorical_crossentropy function
tf.keras.losses.categorical_crossentropy(
    y_true, y_pred, from_logits=False, label_smoothing=0
)
Computes the categorical crossentropy loss.
Standalone usage:
>>> y_true = [[0, 1, 0], [0, 0, 1]]
>>> y_pred = [[0.05, 0.95, 0], [0.1, 0.8, 0.1]]
>>> loss = tf.keras.losses.categorical_crossentropy(y_true, y_pred)
>>> assert loss.shape == (2,)
>>> loss.numpy()
array([0.0513, 2.303], dtype=float32)
Arguments
y_true: Tensor of one-hot true targets.
y_pred: Tensor of predicted targets.
from_logits: Whether y_pred is expected to be a logits tensor. By default, we assume that y_pred encodes a probability distribution.
label_smoothing: Float in [0, 1]. If > 0 then smooth the labels. For example, if 0.1, use 0.1 / num_classes for non-target labels and 0.9 + 0.1 / num_classes for target labels.
Returns
Categorical crossentropy loss value.
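To make the label_smoothing rule concrete, the sketch below (illustrative only, not the library code) applies it to the one-hot targets from the doctest and also recomputes the unsmoothed loss as -sum(y_true * log(y_pred)):

import numpy as np

y_true = np.array([[0., 1., 0.], [0., 0., 1.]])
y_pred = np.array([[0.05, 0.95, 0.], [0.1, 0.8, 0.1]])

# Smoothing with ls = 0.1 over 3 classes: targets become
# 0.9 + 0.1/3 ~= 0.933, non-targets become 0.1/3 ~= 0.033.
ls = 0.1
num_classes = y_true.shape[-1]
smoothed = y_true * (1.0 - ls) + ls / num_classes
print(smoothed[0])  # [0.0333..., 0.9333..., 0.0333...]

# Without smoothing, the loss is -sum(y_true * log(y_pred)) per sample.
eps = 1e-7  # assumption: guards log(0), as Keras clips internally
loss = -np.sum(y_true * np.log(y_pred + eps), axis=-1)
print(loss)  # approximately [0.0513, 2.303], as in the doctest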
sparse_categorical_crossentropy function
tf.keras.losses.sparse_categorical_crossentropy(
    y_true, y_pred, from_logits=False, axis=-1
)
Computes the sparse categorical crossentropy loss.
Standalone usage:
>>> y_true = [1, 2]
>>> y_pred = [[0.05, 0.95, 0], [0.1, 0.8, 0.1]]
>>> loss = tf.keras.losses.sparse_categorical_crossentropy(y_true, y_pred)
>>> assert loss.shape == (2,)
>>> loss.numpy()
array([0.0513, 2.303], dtype=float32)
Arguments
y_true: Ground truth values.
y_pred: The predicted values.
from_logits: Whether y_pred is expected to be a logits tensor. By default, we assume that y_pred encodes a probability distribution.
axis: (Optional) Defaults to -1. The dimension along which the entropy is computed.
Returns
Sparse categorical crossentropy loss value.
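The sparse variant takes integer class indices instead of one-hot vectors; one-hot encoding the labels and calling categorical_crossentropy gives the same values. An illustrative check (the np.eye one-hot conversion is our own, not part of the API):

import numpy as np
import tensorflow as tf

y_true = [1, 2]
y_pred = [[0.05, 0.95, 0.], [0.1, 0.8, 0.1]]

sparse = tf.keras.losses.sparse_categorical_crossentropy(y_true, y_pred)

# One-hot encode the integer labels, then use the dense variant.
dense = tf.keras.losses.categorical_crossentropy(np.eye(3)[y_true], y_pred)

print(sparse.numpy(), dense.numpy())  # both approximately [0.0513, 2.303]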
poisson function
tf.keras.losses.poisson(y_true, y_pred)
Computes the Poisson loss between y_true and y_pred.
The Poisson loss is the mean of the elements of the Tensor y_pred - y_true * log(y_pred).
Standalone usage:
>>> y_true = np.random.randint(0, 2, size=(2, 3))
>>> y_pred = np.random.random(size=(2, 3))
>>> loss = tf.keras.losses.poisson(y_true, y_pred)
>>> assert loss.shape == (2,)
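Given that definition, the result can be cross-checked against a direct evaluation of y_pred - y_true * log(y_pred). A minimal sketch, assuming the same epsilon guard as in the earlier examples:

import numpy as np
import tensorflow as tf

y_true = np.random.randint(0, 2, size=(2, 3))
y_pred = np.random.random(size=(2, 3))
loss = tf.keras.losses.poisson(y_true, y_pred)

# Mean over the last axis of y_pred - y_true * log(y_pred); the epsilon
# guards against log(0) for predictions at exactly zero.
eps = 1e-7
manual = np.mean(y_pred - y_true * np.log(y_pred + eps), axis=-1)
assert np.allclose(loss.numpy(), manual, atol=1e-5)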