Handles everything needed to assemble a mini-batch of inputs and targets, as well as to decode the dictionary produced as a byproduct of the tokenization process in the `encodes` method.

HF_BeforeBatchTransform(
  hf_arch,
  hf_tokenizer,
  max_length = NULL,
  padding = TRUE,
  truncation = TRUE,
  is_split_into_words = FALSE,
  n_tok_inps = 1,
  ...
)

Arguments

hf_arch

The Hugging Face architecture name of the pre-trained model (e.g. "bert", "roberta").

hf_tokenizer

The Hugging Face tokenizer used to tokenize the raw inputs.

max_length

Maximum sequence length; sequences longer than this are truncated when `truncation = TRUE`. `NULL` falls back to the model's default.

padding

Whether to pad sequences so that all items in a mini-batch have the same length.

truncation

Whether to truncate sequences that exceed `max_length`.

is_split_into_words

Whether the inputs are already split into words (pre-tokenized), e.g. for token-classification tasks.

n_tok_inps

number tok inputs

...

Additional keyword arguments passed on to the tokenizer.

Value

A `HF_BeforeBatchTransform` object that can be applied as a before-batch transform when building dataloaders.
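
Examples

A minimal sketch of constructing the transform. It assumes the package providing `HF_BeforeBatchTransform` is attached (shown here as `fastai`) and that the Python `transformers` package is reachable from R via reticulate; the model name and settings below are illustrative only, not the package's prescribed workflow.

library(fastai)

# Hypothetical setup: load an architecture name and a matching tokenizer
# through reticulate. Any pre-trained checkpoint could be substituted.
transformers <- reticulate::import("transformers")
hf_arch      <- "bert"
hf_tokenizer <- transformers$AutoTokenizer$from_pretrained("bert-base-uncased")

# Build the before-batch transform: at batching time it tokenizes the raw
# inputs and pads/truncates them to a common length for the model.
before_batch_tfm <- HF_BeforeBatchTransform(
  hf_arch, hf_tokenizer,
  max_length = 128,
  padding    = TRUE,
  truncation = TRUE,
  n_tok_inps = 1
)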