Computes the pinball loss between `y_true` and `y_pred`.
```r
loss_pinball(
  tau = 0.5,
  reduction = tf$keras$losses$Reduction$AUTO,
  name = "pinball_loss"
)
```
| Argument | Description |
|---|---|
| `tau` | (Optional) Float in `[0, 1]`, or a tensor taking values in `[0, 1]` with shape `[d0, ..., dn]`. It defines the slope of the pinball loss. In the context of quantile regression, the value of `tau` determines the conditional quantile level; when `tau = 0.5`, this amounts to L1 regression, an estimator of the conditional median (0.5 quantile). |
| `reduction` | (Optional) Type of `tf$keras$losses$Reduction` to apply to the loss. The default value is `AUTO`, meaning the reduction option is determined by the usage context; for almost all cases this defaults to `SUM_OVER_BATCH_SIZE`. When used with `tf$distribute$Strategy`, outside of built-in training loops such as `tf$keras` `compile` and `fit`, using `AUTO` or `SUM_OVER_BATCH_SIZE` will raise an error. See https://www.tensorflow.org/alpha/tutorials/distribute/training_loops for more details. |
| `name` | Optional name for the op. |
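The claim above that `tau = 0.5` amounts to L1 (median) regression can be checked numerically. The sketch below uses plain NumPy rather than the tfaddons API, and the helper name `pinball` is illustrative: at `tau = 0.5`, the pinball loss is exactly half the mean absolute error.

```python
import numpy as np

def pinball(y_true, y_pred, tau):
    # loss = maximum(tau * (y_true - y_pred), (tau - 1) * (y_true - y_pred)),
    # averaged over the batch.
    d = np.asarray(y_true) - np.asarray(y_pred)
    return np.mean(np.maximum(tau * d, (tau - 1.0) * d))

y_true = [0., 0., 1., 1.]
y_pred = [1., 1., 1., 0.]

# At tau = 0.5, both branches equal 0.5 * |y_true - y_pred|,
# so the pinball loss reduces to half the mean absolute error.
half_mae = 0.5 * np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred)))
assert pinball(y_true, y_pred, 0.5) == half_mae
```

Minimizing this loss at `tau = 0.5` therefore recovers the conditional median, matching the description of the `tau` argument.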
pinball_loss: 1-D float `Tensor` with shape [batch_size].
`loss = maximum(tau * (y_true - y_pred), (tau - 1) * (y_true - y_pred))`

In the context of regression, this loss yields an estimator of the `tau` conditional quantile. See: https://en.wikipedia.org/wiki/Quantile_regression

Usage:

```python
loss = pinball_loss([0., 0., 1., 1.], [1., 1., 1., 0.], tau=.1)

# loss = max(0.1 * (y_true - y_pred), (0.1 - 1) * (y_true - y_pred))
#      = (0.9 + 0.9 + 0 + 0.1) / 4
print('Loss: ', loss.numpy())  # Loss: 0.475
```
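As a cross-check of the worked example above, the formula can be evaluated directly in plain NumPy, independently of TensorFlow. The helper name `pinball_loss_np` is illustrative, not part of any API; the mean reduction mirrors the `SUM_OVER_BATCH_SIZE` behavior described for `reduction`.

```python
import numpy as np

def pinball_loss_np(y_true, y_pred, tau):
    # Elementwise pinball loss, then mean over the batch.
    d = np.asarray(y_true) - np.asarray(y_pred)
    return np.mean(np.maximum(tau * d, (tau - 1.0) * d))

loss = pinball_loss_np([0., 0., 1., 1.], [1., 1., 1., 0.], tau=0.1)
```

For these inputs the per-element losses are 0.9, 0.9, 0, and 0.1, so the result is (0.9 + 0.9 + 0 + 0.1) / 4 ≈ 0.475, agreeing with the printed value in the usage example.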
- https://en.wikipedia.org/wiki/Quantile_regression
- https://projecteuclid.org/download/pdfview_1/euclid.bj/1297173840
```r
if (FALSE) {
keras_model_sequential() %>%
  layer_dense(4, input_shape = c(784)) %>%
  compile(
    optimizer = 'sgd',
    loss = loss_pinball(),
    metrics = 'accuracy'
  )
}
```