Gaussian Error Linear Unit.

activation_gelu(x, approximate = TRUE)

Arguments

x

A `Tensor`. Must be one of the following types: `float16`, `float32`, `float64`.

approximate

bool, whether to enable the tanh-based approximation (the default). If `FALSE`, the exact erf-based form is used.

Value

A `Tensor`. Has the same type as `x`.

Details

Computes the Gaussian error linear unit (GELU): `0.5 * x * (1 + tanh(sqrt(2 / pi) * (x + 0.044715 * x^3)))` when `approximate = TRUE`, or the exact form `x * P(X <= x) = 0.5 * x * (1 + erf(x / sqrt(2)))`, where `X ~ N(0, 1)`, when `approximate = FALSE`. See [Gaussian Error Linear Units (GELUs)](https://arxiv.org/abs/1606.08415) and [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805).

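To make the two forms concrete, here is a minimal sketch in base R (not part of this package) comparing the tanh approximation against the exact form; `pnorm(x)` is the standard normal CDF, i.e. `P(X <= x)` for `X ~ N(0, 1)`:

# Exact GELU: x * P(X <= x) for X ~ N(0, 1)
gelu_exact <- function(x) x * pnorm(x)
# Tanh approximation from Hendrycks & Gimpel (2016)
gelu_approx <- function(x) 0.5 * x * (1 + tanh(sqrt(2 / pi) * (x + 0.044715 * x^3)))
x <- seq(-3, 3, by = 0.5)
max(abs(gelu_exact(x) - gelu_approx(x)))  # the two agree closely over this range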

Examples

if (FALSE) {
library(keras)
library(tfaddons)

# Use GELU as the activation of a convolutional layer
model <- keras_model_sequential() %>%
  layer_conv_2d(
    filters = 10,
    kernel_size = c(3, 3),
    input_shape = c(28, 28, 1),
    activation = activation_gelu
  )
}
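As a further sketch (assuming a working TensorFlow installation; `tensorflow::tf` is from the tensorflow R package, not part of this package's API), the activation can also be applied directly to a tensor to compare the two forms:

if (FALSE) {
library(tfaddons)
x <- tensorflow::tf$constant(c(-1, 0, 1), dtype = tensorflow::tf$float32)
activation_gelu(x)                       # tanh approximation (default)
activation_gelu(x, approximate = FALSE)  # exact erf-based form
}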