bert-base-japanese-char / tokenizer_config.json
Commit 53587f3: Updates incorrect tokenizer configuration file (#2)
{
  "do_lower_case": false,
  "word_tokenizer_type": "mecab",
  "subword_tokenizer_type": "character",
  "model_max_length": 512
}
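A minimal sketch of what each field controls, using only the standard library to parse the configuration above (the comments describe the usual meaning of these fields in character-level Japanese BERT tokenizers; exact behavior depends on the tokenizer implementation that consumes this file):

```python
import json

# The tokenizer configuration from this file, verbatim.
config_text = (
    '{"do_lower_case": false, "word_tokenizer_type": "mecab", '
    '"subword_tokenizer_type": "character", "model_max_length": 512}'
)
config = json.loads(config_text)

# Case is preserved: lowercasing is mostly irrelevant for Japanese script,
# but matters for embedded Latin text.
assert config["do_lower_case"] is False

# Text is first segmented into words with MeCab, then each word is split
# into individual characters (hence "-char" in the model name).
assert config["word_tokenizer_type"] == "mecab"
assert config["subword_tokenizer_type"] == "character"

# Sequences longer than this are truncated when truncation is requested.
print(config["model_max_length"])
```

In practice this file is read automatically when the tokenizer is loaded from the repository (for example via `AutoTokenizer.from_pretrained` in the `transformers` library), so these fields are not usually set by hand.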