What is the difference between raw and non-raw wikitext datasets?
#2 opened by tepsijash
The examples given look the same, but the dataset size is slightly different, so I'm wondering what the actual difference is.
Looks like the difference is that the raw version contains the raw text, while the non-raw version contains word-level tokenized text (e.g. with out-of-vocabulary words replaced by `<unk>`).
tepsijash changed discussion status to closed
Thanks for pointing out this missing information.
As described in their website: https://blog.salesforceairesearch.com/the-wikitext-long-term-dependency-language-modeling-dataset/
- Raw (for character-level work) datasets contain the raw tokens, before the addition of the `<unk>` (unknown) tokens.
- Non-raw (for word-level work) datasets contain only the tokens in their vocabulary (wiki.train.tokens, wiki.valid.tokens, and wiki.test.tokens). The out-of-vocabulary tokens have been replaced with the `<unk>` token.
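To make the distinction concrete, here is a toy sketch of the kind of preprocessing the non-raw variant applies. This is not the actual WikiText pipeline, just an illustration of replacing out-of-vocabulary words with `<unk>`; the vocabulary and helper name are made up:

```python
def to_word_level(text, vocab):
    # Replace out-of-vocabulary words with the <unk> token,
    # loosely mimicking the non-raw WikiText preprocessing.
    return " ".join(w if w in vocab else "<unk>" for w in text.split())

# Hypothetical tiny vocabulary for demonstration only.
vocab = {"the", "cat", "sat"}

raw_text = "the cat sat on the mat"
print(to_word_level(raw_text, vocab))
# the cat sat <unk> the <unk>
```

The raw datasets would keep `raw_text` as-is, which is why the two variants have slightly different sizes even though many examples look identical.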
Maybe we should include this clarification in the dataset card.
See PR: #3.