---
language:
  - en
license: cc-by-sa-4.0
dataset_info:
  - config_name: max_len-1024
    features:
      - name: text
        dtype: string
    splits:
      - name: train
        num_bytes: 270688438
        num_examples: 449722
    download_size: 148512885
    dataset_size: 270688438
  - config_name: max_len-448
    features:
      - name: text
        dtype: string
    splits:
      - name: train
        num_bytes: 272285366
        num_examples: 930858
    download_size: 151141635
    dataset_size: 272285366
configs:
  - config_name: max_len-1024
    data_files:
      - split: train
        path: max_len-1024/train-*
  - config_name: max_len-448
    data_files:
      - split: train
        path: max_len-448/train-*
---

# Wikipedia Simple, split

Simple English Wikipedia data, split into chunks using LangChain's RecursiveCharacterTextSplitter.

## Usage

- This dataset is meant to be an ultra-high-quality text corpus.
- It can be used for the annealing phase of LLM training; see the loading sketch below.
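A minimal loading sketch with the `datasets` library. The config names `max_len-448` and `max_len-1024` come from the metadata above, but the repo id below is a placeholder; substitute this dataset's actual path on the Hugging Face Hub.

```python
from datasets import load_dataset

# Placeholder repo id -- replace with this dataset's actual Hub path.
REPO_ID = "Aarushhh/wikipedia-simple-split"

# Pick one of the two configs defined in the metadata:
# "max_len-448" (930,858 chunks) or "max_len-1024" (449,722 chunks).
ds = load_dataset(REPO_ID, "max_len-448", split="train")

print(ds.num_rows)          # number of pre-split chunks in this config
print(ds[0]["text"][:200])  # each example has a single 'text' field
```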

## Why it's different

- The dataset is split with a maximum length of 448 characters (128 × 3.5) and 1024 characters (256 × 4), i.e. on the order of 128 and 256 tokens per chunk.
- Rather than cutting at fixed character offsets, the text is split with RecursiveCharacterTextSplitter, so chunks end at natural boundaries (paragraphs, sentences, words) instead of at arbitrary points; see the sketch after this list.
- The short, bounded chunk lengths make very large batch sizes practical during training.
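For reference, a minimal sketch of how such a split can be produced with LangChain's RecursiveCharacterTextSplitter. The `chunk_size=448` matches the `max_len-448` config; `chunk_overlap=0` and the default separators are assumptions, not a record of the exact parameters used to build this dataset.

```python
from langchain_text_splitters import RecursiveCharacterTextSplitter

# chunk_size is measured in characters (default length_function=len).
# The splitter tries "\n\n", then "\n", then " ", then "" in order, so
# chunks break on paragraph/sentence/word boundaries where possible.
splitter = RecursiveCharacterTextSplitter(chunk_size=448, chunk_overlap=0)

article = "Simple English Wikipedia article text goes here..."
chunks = splitter.split_text(article)

for chunk in chunks:
    # With the default separators every chunk respects the max length.
    assert len(chunk) <= 448
```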

## License

CC-BY-SA 4.0