Commit 80feb2b by ArthurConmy (parent: b389e23): Update README.md
This is https://huggingface.co/NeelNanda/gpt-neox-tokenizer-digits but with a fix that makes this work on `tokenizers == 0.14` after a breaking change involving `added_tokens`: https://github.com/huggingface/tokenizers/issues/1358

The two changes from NeelNanda/gpt-neox-tokenizer-digits are:

1) (Important) we remove the space tokens from the "added_tokens" key in `tokenizer.json` here: https://huggingface.co/ArthurConmy/alternative-neel-tokenizer/blob/main/tokenizer.json. These caused the breaking change, along with the tokenizers PR above.
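A minimal sketch of that edit (illustrative only; it assumes the standard `tokenizers` JSON layout, where each entry in `added_tokens` is an object with a `content` string):

```python
import json

def strip_space_added_tokens(path):
    """Drop whitespace-only entries from the "added_tokens" list of a
    tokenizer.json file, mirroring the fix described above."""
    with open(path) as f:
        data = json.load(f)
    # Keep only added tokens whose content is not pure whitespace.
    data["added_tokens"] = [
        t for t in data["added_tokens"] if t["content"].strip() != ""
    ]
    with open(path, "w") as f:
        json.dump(data, f, indent=2)
```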

2) (Not important) we use `GPTNeoXTokenizer` rather than `PreTrainedTokenizerFast` in `tokenizer_config.json`, as this seemed to match what GPT-NeoX did.
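The relevant entry in `tokenizer_config.json` then looks roughly like this (illustrative fragment; other keys omitted):

```json
{
  "tokenizer_class": "GPTNeoXTokenizer"
}
```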

# Neel's README

This is a fork of the GPT NeoX 20B tokenizer, edited to split every numerical digit into a separate token. The goal is to make it easier for the model to learn arithmetic capabilities and, hopefully, to be more interpretable; the idea is copied from the [PaLM tokenizer](https://ai.googleblog.com/2022/04/pathways-language-model-palm-scaling-to.html).
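The digit-splitting behaviour can be illustrated with a small sketch. This only mimics the pre-tokenization idea, not this tokenizer's actual mechanism:

```python
import re

def split_digits(text):
    """Sketch of PaLM-style digit splitting: break the text into runs of
    digits vs. everything else, then emit each digit as its own piece."""
    pieces = []
    for chunk in re.findall(r"\d+|\D+", text):
        if chunk.isdigit():
            pieces.extend(chunk)  # one piece per digit
        else:
            pieces.append(chunk)
    return pieces
```

For example, `split_digits("x=1234")` yields `["x=", "1", "2", "3", "4"]`, so a model sees `1234` as four digit tokens rather than one opaque token.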