Handles everything you need to assemble a mini-batch of inputs and targets, as well as decode the dictionary produced as a byproduct of the tokenization process in the `encodes` method.
HF_QABeforeBatchTransform(
  hf_arch,
  hf_tokenizer,
  max_length = NULL,
  padding = TRUE,
  truncation = TRUE,
  is_split_into_words = FALSE,
  n_tok_inps = 1,
  ...
)
hf_arch: the Hugging Face model architecture
hf_tokenizer: the Hugging Face tokenizer
max_length: maximum sequence length
padding: whether to pad sequences
truncation: whether to truncate sequences
is_split_into_words: whether the inputs are already split into words
n_tok_inps: number of tokenizer inputs
...: additional arguments
Value: None
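Below is a minimal usage sketch, not taken from the package documentation: it assumes `hf_arch` (e.g. the string "bert") and `hf_tokenizer` (the matching Hugging Face tokenizer object) have already been obtained from a pretrained checkpoint, and simply constructs the transform with explicit values for the optional arguments.

library(fastai)

# Assumes `hf_arch` and `hf_tokenizer` were created beforehand for the
# pretrained model you intend to fine-tune (these names are placeholders).
before_batch_tfm = HF_QABeforeBatchTransform(
  hf_arch = hf_arch,
  hf_tokenizer = hf_tokenizer,
  max_length = 128,    # cap sequences at 128 tokens
  padding = TRUE,      # pad shorter sequences in the mini-batch
  truncation = TRUE    # truncate sequences that exceed max_length
)

The resulting transform would then typically be handed to the data-loading pipeline that assembles question-answering mini-batches.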