es_fi_all / tokenizer_config.json
{
"clean_up_tokenization_spaces": true,
"eos_token": "</s>",
"model_max_length": 512,
"pad_token": "<pad>",
"separate_vocabs": false,
"source_lang": "es",
"sp_model_kwargs": {},
"target_lang": "fi",
"tokenizer_class": "MarianTokenizer",
"unk_token": "<unk>"
}
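
This config describes a MarianTokenizer for Spanish-to-Finnish translation with a shared source/target vocabulary (separate_vocabs is false) and inputs truncated at 512 tokens. A minimal sketch of loading it with the transformers library is below; the repo id "nouman-10/es_fi_all" is inferred from the page header and may differ from the actual hub path.

# Minimal sketch: load this tokenizer config via transformers.
# Assumption: the config lives in the hub repo "nouman-10/es_fi_all".
from transformers import MarianTokenizer

tokenizer = MarianTokenizer.from_pretrained("nouman-10/es_fi_all")

# Encode a Spanish source sentence; Marian appends the "</s>" eos token
# defined above and pads with "<pad>" when padding is requested.
inputs = tokenizer("Hola, ¿cómo estás?", return_tensors="pt")

# Tokenize Finnish reference text for seq2seq labels. With
# separate_vocabs=false the same vocabulary is used for both sides,
# but text_target is still the correct API for target text.
labels = tokenizer(text_target="Hei, mitä kuuluu?", return_tensors="pt")

print(inputs["input_ids"], labels["input_ids"])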