---
language:
- en
license: cc-by-sa-4.0
dataset_info:
- config_name: max_len-1024
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 270688438
    num_examples: 449722
  download_size: 148512885
  dataset_size: 270688438
- config_name: max_len-448
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 272285366
    num_examples: 930858
  download_size: 151141635
  dataset_size: 272285366
configs:
- config_name: max_len-1024
  data_files:
  - split: train
    path: max_len-1024/train-*
- config_name: max_len-448
  data_files:
  - split: train
    path: max_len-448/train-*
---
# Wikipedia simple splitted
Simple English Wikipedia data, split into chunks using LangChain's RecursiveCharacterTextSplitter.
## Usage
- This dataset is intended as a high-quality text corpus.
- It is suitable for the annealing phase (the final, high-quality-data stage) of LLM pretraining.
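Each config can be loaded with the `datasets` library. A minimal sketch, assuming the dataset is hosted on the Hugging Face Hub; the repo id below is a placeholder, while the config names come from this card's YAML header:

```python
try:
    from datasets import load_dataset  # pip install datasets
except ImportError:  # keep this sketch importable even without the library
    load_dataset = None

CONFIGS = ("max_len-448", "max_len-1024")  # the two configs defined in this card

def load_config(config: str):
    """Load one config's train split (the repo id is a placeholder)."""
    assert config in CONFIGS, f"unknown config: {config}"
    assert load_dataset is not None, "pip install datasets"
    # "user/wikipedia-simple-splitted" is NOT the real repo id; substitute
    # the actual Hub path where this dataset is published.
    return load_dataset("user/wikipedia-simple-splitted", config, split="train")
```

Every example has a single `text` field, so the split can be tokenized and batched directly.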
## Why it's different
- The text is split into chunks with a maximum length of 448 (128 × 3.5) or 1024 (256 × 4) characters, depending on the config.
- Rather than cutting at a fixed length, the text is split with RecursiveCharacterTextSplitter, so chunks end at natural boundaries (paragraphs, lines, words) instead of being truncated mid-word.
- Because chunk length is bounded, very large batch sizes can be used.
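The splitting strategy can be illustrated with a small, self-contained sketch: try to break on the largest separator first, and only fall back to smaller ones when a piece is still too long. This mirrors the idea behind LangChain's RecursiveCharacterTextSplitter but is not its exact implementation; the separator list and the 448-character limit are chosen to match the `max_len-448` config.

```python
def recursive_split(text, max_len=448, separators=("\n\n", "\n", " ")):
    """Split text into chunks of at most max_len characters, preferring
    to break on large separators (paragraphs) before small ones (words)."""
    if len(text) <= max_len:
        return [text] if text else []
    if not separators:
        # No separator left: hard-cut at max_len as a last resort.
        return [text[i:i + max_len] for i in range(0, len(text), max_len)]
    sep, rest = separators[0], separators[1:]
    chunks, current = [], ""
    for piece in text.split(sep):
        candidate = current + sep + piece if current else piece
        if len(candidate) <= max_len:
            current = candidate  # piece still fits in the current chunk
            continue
        if current:
            chunks.append(current)
            current = ""
        if len(piece) > max_len:
            # A single piece can itself exceed the limit; recurse with
            # the next-smaller separator.
            chunks.extend(recursive_split(piece, max_len, rest))
        else:
            current = piece
    if current:
        chunks.append(current)
    return chunks

doc = ("First paragraph. " * 20) + "\n\n" + ("Second paragraph. " * 20)
chunks = recursive_split(doc, max_len=448)
print([len(c) for c in chunks])  # every chunk fits within the limit
```

Because the splitter recurses from paragraph breaks down to word breaks, each chunk ends at a natural boundary whenever one exists within the length limit.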
## License
[CC-BY-SA](https://creativecommons.org/licenses/by-sa/4.0/deed.en)