Filter response normalization layer.

layer_filter_response_normalization(
  object,
  epsilon = 1e-06,
  axis = c(1, 2),
  beta_initializer = "zeros",
  gamma_initializer = "ones",
  beta_regularizer = NULL,
  gamma_regularizer = NULL,
  beta_constraint = NULL,
  gamma_constraint = NULL,
  learned_epsilon = FALSE,
  learned_epsilon_constraint = NULL,
  name = NULL
)

Arguments

object

Model or layer object.

epsilon

Small positive float value added to variance to avoid dividing by zero.

axis

List of axes that should be normalized. This should represent the spatial dimensions.

beta_initializer

Initializer for the beta weight.

gamma_initializer

Initializer for the gamma weight.

beta_regularizer

Optional regularizer for the beta weight.

gamma_regularizer

Optional regularizer for the gamma weight.

beta_constraint

Optional constraint for the beta weight.

gamma_constraint

Optional constraint for the gamma weight.

learned_epsilon

(bool) Whether to add an additional learnable epsilon parameter.

learned_epsilon_constraint

Optional constraint for the learned epsilon parameter (applies when `learned_epsilon = TRUE`).

name

Optional name for the layer.
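For illustration, a hedged construction sketch using several of the arguments above (the specific values and the non-negativity constraint are assumptions for the example, not recommendations from this page):

library(keras)

# Create a standalone FRN layer with a learnable epsilon,
# constrained to stay non-negative.
frn_layer <- layer_filter_response_normalization(
  epsilon = 1e-6,
  axis = c(1, 2),
  beta_initializer = "zeros",
  gamma_initializer = "ones",
  learned_epsilon = TRUE,
  learned_epsilon_constraint = constraint_nonneg(),
  name = "frn_1"
)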

Value

A tensor.

Details

Filter Response Normalization (FRN) is a normalization method that enables models trained with per-channel normalization to achieve high accuracy. It outperforms other normalization techniques for small batch sizes and is on par with Batch Normalization for larger batch sizes.
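To make the computation concrete, here is a minimal plain-R sketch of FRN on a single `[H, W, C]` example, following the formulation in the referenced paper (the function name and toy input are illustrative; gamma and beta are scalars here for simplicity, whereas the layer learns them per channel):

frn <- function(x, gamma = 1, beta = 0, epsilon = 1e-6) {
  # nu^2: per-channel mean of squared activations over the spatial axes (H, W)
  nu2 <- apply(x^2, 3, mean)
  # Normalize each channel by sqrt(nu^2 + epsilon) ...
  x_hat <- sweep(x, 3, sqrt(nu2 + epsilon), "/")
  # ... then apply the learned affine transform
  gamma * x_hat + beta
}

x <- array(rnorm(8 * 8 * 4), dim = c(8, 8, 4))  # toy H x W x C input
y <- frn(x)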

Note

Input shape: Arbitrary. Use the keyword argument `input_shape` (list of integers, does not include the samples axis) when using this layer as the first layer in a model. As of now, this layer works only on 4-D tensors of shape `[N, H, W, C]`. TODO: Add support for the NCHW data format and FC layers.

Output shape: Same shape as input.

References

[Filter Response Normalization Layer: Eliminating Batch Dependence in the Training of Deep Neural Networks](https://arxiv.org/abs/1911.09737)
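Examples

A minimal usage sketch in a keras sequential model, assuming the package providing this layer is attached alongside keras (the surrounding architecture is illustrative):

library(keras)

model <- keras_model_sequential() %>%
  layer_conv_2d(filters = 16, kernel_size = c(3, 3), padding = "same",
                input_shape = c(32, 32, 3)) %>%   # 4-D input: [N, H, W, C]
  layer_filter_response_normalization() %>%       # normalizes over axis = c(1, 2)
  layer_activation("relu") %>%
  layer_flatten() %>%
  layer_dense(units = 10, activation = "softmax")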