Perform dynamic decoding with `decoder`.

decode_dynamic(
  decoder,
  output_time_major = FALSE,
  impute_finished = FALSE,
  maximum_iterations = NULL,
  parallel_iterations = 32L,
  swap_memory = FALSE,
  training = NULL,
  scope = NULL,
  ...
)

Arguments

decoder

A `Decoder` instance.

output_time_major

boolean. Default: `FALSE` (batch major). If `TRUE`, outputs are returned as time-major tensors, which is faster because the decoding loop runs in time-major order internally. If `FALSE`, outputs are transposed to batch-major tensors before being returned, which adds a small amount of extra computation.

impute_finished

boolean. If `TRUE`, then states for batch entries which are marked as finished get copied through and the corresponding outputs get zeroed out. This causes some slowdown at each time step, but ensures that the final state and outputs have the correct values and that backprop ignores time steps that were marked as finished.

maximum_iterations

`int32` scalar, maximum allowed number of decoding steps. Default is `NULL` (decode until the decoder is fully done).

parallel_iterations

Argument passed to `tf$while_loop`.

swap_memory

Argument passed to `tf$while_loop`.

training

boolean. Indicates whether the layer should behave in training mode or in inference mode. Only relevant when `dropout` or `recurrent_dropout` is used.

scope

Optional variable scope to use.

...

A list of other keyword arguments for `dynamic_decode`. It may contain the arguments a `BaseDecoder` needs for initialization, since a `BaseDecoder` takes all tensor inputs during `call()` (see the sketch below).
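
As an illustration of how tensor inputs reach the decoder through `...`, here is a minimal sketch. It assumes a basic decoder built with `decoder_basic()` and a training sampler from `sampler_training()` (constructor and argument names are assumptions, not taken from this page), and it forwards the initialization tensors with the `decoder_init_input`/`decoder_init_kwargs` convention of the underlying TensorFlow Addons `dynamic_decode`; adjust the names to match your installed versions.

library(tensorflow)
library(tfaddons)

batch_size <- 4L
max_time   <- 7L
units      <- 16L
vocab_size <- 20L

# Hypothetical building blocks (names are assumptions).
cell    <- tf$keras$layers$LSTMCell(units)
sampler <- sampler_training()
decoder <- decoder_basic(
  cell         = cell,
  sampler      = sampler,
  output_layer = tf$keras$layers$Dense(vocab_size)
)

decoder_inputs <- tf$random$normal(shape = list(batch_size, max_time, units))
lengths        <- tf$fill(list(batch_size), max_time)
init_state     <- cell$get_initial_state(batch_size = batch_size, dtype = tf$float32)

res <- decode_dynamic(
  decoder,
  impute_finished    = TRUE,
  maximum_iterations = max_time,
  # Tensor inputs for the decoder's initialization, forwarded through `...`.
  decoder_init_input  = decoder_inputs,
  decoder_init_kwargs = list(
    initial_state   = init_state,
    sequence_length = lengths
  )
)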

Value

`(final_outputs, final_state, final_sequence_lengths)`.
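
Through reticulate the returned tuple typically arrives as an R list, so the three components can be taken apart by position; `res` below stands for any result of `decode_dynamic()`, such as the one in the sketch above.

# Positional unpacking of the result (variable names here are arbitrary).
final_outputs          <- res[[1]]
final_state            <- res[[2]]
final_sequence_lengths <- res[[3]]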

Details

Calls `initialize()` once and `step()` repeatedly on the Decoder object.
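
The sketch below restates that contract in plain R. It is not the actual implementation, which runs inside `tf$while_loop` and accumulates outputs in TensorArrays, and the exact `step()` return signature may vary between versions; treat it purely as an illustration of the `initialize()`/`step()` protocol.

# Conceptual sketch of the loop that decode_dynamic() drives (illustration only).
decode_loop_sketch <- function(decoder, maximum_iterations = Inf) {
  init     <- decoder$initialize()   # (finished, first_inputs, initial_state)
  finished <- init[[1]]
  inputs   <- init[[2]]
  state    <- init[[3]]

  outputs <- list()
  time    <- 0L
  while (!all(as.array(finished)) && time < maximum_iterations) {
    step <- decoder$step(time, inputs, state)   # (outputs, state, next_inputs, finished)
    outputs[[time + 1L]] <- step[[1]]
    state    <- step[[2]]
    inputs   <- step[[3]]
    finished <- step[[4]]
    time     <- time + 1L
  }
  list(outputs = outputs, final_state = state, final_time = time)
}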

Raises

TypeError: if `decoder` is not an instance of `Decoder`.

ValueError: if `maximum_iterations` is provided but is not a scalar.