Update README.md
README.md CHANGED

@@ -95,7 +95,7 @@ print(tokenizer.decode(output))
 ## Tokenizer
 The tokenizer of this model is based on the [huggingface/tokenizers](https://github.com/huggingface/tokenizers) Unigram byte-fallback model.
 The vocabulary entries were converted from [`llm-jp-tokenizer v2.1 (50k)`](https://github.com/llm-jp/llm-jp-tokenizer/releases/tag/v2.1).
-Please refer to
+Please refer to the [README.md](https://github.com/llm-jp/llm-jp-tokenizer) of `llm-jp-tokenizer` for details on the vocabulary construction procedure.
 - **Model:** Hugging Face Fast Tokenizer using the Unigram byte-fallback model, which requires `tokenizers>=0.14.0`
 - **Training algorithm:** SentencePiece Unigram byte-fallback
 - **Training data:** A subset of the datasets for model pre-training
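The "byte fallback" behavior mentioned above can be illustrated with a minimal, self-contained sketch (the toy vocabulary and helper function below are illustrative assumptions, not the model's actual 50k-entry vocabulary or API): any piece not found in the vocabulary is decomposed into its UTF-8 bytes, each emitted as a `<0xNN>` token, so no input ever maps to an unknown token.

```python
# Toy vocabulary standing in for the real 50k entries (an assumption for illustration).
VOCAB = {"Hello", ",", "▁", "world"}

def tokenize_with_byte_fallback(pieces):
    """Emit each piece as-is if it is in the vocabulary; otherwise
    fall back to one <0xNN> token per UTF-8 byte of the piece."""
    tokens = []
    for piece in pieces:
        if piece in VOCAB:
            tokens.append(piece)
        else:
            # Byte fallback: decompose the out-of-vocabulary piece into UTF-8 bytes.
            tokens.extend(f"<0x{b:02X}>" for b in piece.encode("utf-8"))
    return tokens

# '世' is not in the toy vocabulary, so it becomes three byte tokens.
print(tokenize_with_byte_fallback(["Hello", ",", "▁", "世"]))
# → ['Hello', ',', '▁', '<0xE4>', '<0xB8>', '<0x96>']
```

In the real tokenizer this fallback is handled inside the Unigram model itself; the sketch only shows why byte fallback guarantees full coverage of arbitrary text.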