Update README.md
README.md CHANGED
@@ -48,4 +48,9 @@ size_categories:
 Preprocessed version of [WikiSplit](https://arxiv.org/abs/1808.09468).
 Since the [original WikiSplit dataset](https://huggingface.co/datasets/wiki_split) was tokenized and contained some noise, we used the [Moses detokenizer](https://github.com/moses-smt/mosesdecoder/blob/c41ff18111f58907f9259165e95e657605f4c457/scripts/tokenizer/detokenizer.perl) for detokenization and removed text fragments.
 For detailed information on the preprocessing steps, please see [here](https://github.com/nttcslab-nlp/wikisplit-pp/src/datasets/common.py).
-This preprocessed dataset serves as the basis for [WikiSplit++](https://huggingface.co/datasets/cl-nagoya/wikisplit-pp).
+This preprocessed dataset serves as the basis for [WikiSplit++](https://huggingface.co/datasets/cl-nagoya/wikisplit-pp).
+
+## Dataset Description
+
+- **Repository:** https://github.com/nttcslab-nlp/wikisplit-pp
+- **Used Paper:** https://arxiv.org/abs/2404.09002
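
As a side note on the detokenization step mentioned above: the Moses detokenizer linked in the README is a Perl script, but the same behavior is available from Python through the `sacremoses` port. The sketch below is only an illustration of that kind of detokenization; the dataset's actual preprocessing lives in `src/datasets/common.py` of the wikisplit-pp repository and may differ in details.

```python
# Minimal sketch: Moses-style detokenization via the sacremoses Python port
# (pip install sacremoses). This is an illustration only; the dataset's real
# preprocessing is defined in the wikisplit-pp repository.
from sacremoses import MosesDetokenizer

detok = MosesDetokenizer(lang="en")

# A tokenized sentence as found in the original WikiSplit release.
tokens = ["Hello", ",", "this", "sentence", "was", "tokenized", "."]

# Reattach punctuation and restore normal spacing.
text = detok.detokenize(tokens)
print(text)  # "Hello, this sentence was tokenized."
```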