---
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
dataset_info:
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 51355991596
    num_examples: 15930958
  download_size: 29126915011
  dataset_size: 51355991596
language:
- ja
- en
- code
size_categories:
- 10M<n<100M
license: apache-2.0
---
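As a quick sanity check on the metadata above, the byte and example counts imply an average example size of roughly 3.2 KB of text. A minimal sketch using only the numbers from the card:

```python
# Figures taken from the dataset_info metadata above.
num_bytes = 51_355_991_596
num_examples = 15_930_958
download_size = 29_126_915_011

avg_bytes_per_example = num_bytes / num_examples
compression_ratio = num_bytes / download_size

print(f"avg example size: {avg_bytes_per_example:.0f} bytes")
print(f"uncompressed vs. download size: {compression_ratio:.2f}x")
```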
## Description
This dataset was used to pre-train the Co-Encoder's Context Encoder for our participation in LOCAL AI HACKATHON #000.
Token counts (measured with the calm2-chat tokenizer):

| Language | Number of tokens |
|---|---|
| Japanese | 4.7B |
| English | 5B |
| Code | 0.9B |
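The per-language counts above sum to roughly 10.6B tokens in total; a quick check:

```python
# Per-language token counts from the table, in billions
# (as measured with the calm2-chat tokenizer).
tokens_b = {"Japanese": 4.7, "English": 5.0, "Code": 0.9}

total_b = sum(tokens_b.values())
print(f"total: {total_b:.1f}B tokens")  # → total: 10.6B tokens
```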
## NOTE
This dataset has not undergone sentence-end boundary detection or perplexity filtering, so there is room to improve its quality.
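For illustration, a very rough sentence-end boundary heuristic of the kind the note refers to might look like the following. This is a hypothetical sketch, not the pipeline actually used, and the set of terminator characters is an assumption:

```python
# Hypothetical heuristic: keep only documents whose text ends at a
# plausible sentence boundary (Japanese or English terminators).
# The terminator set below is an illustrative assumption.
SENTENCE_ENDINGS = ("。", "．", "！", "？", ".", "!", "?")

def ends_at_sentence_boundary(text: str) -> bool:
    """Return True if the document ends with a sentence terminator."""
    return text.rstrip().endswith(SENTENCE_ENDINGS)

docs = ["今日は良い天気です。", "This text was truncated mid-sent"]
kept = [d for d in docs if ends_at_sentence_boundary(d)]
print(kept)  # → ['今日は良い天気です。']
```

A production filter would also need language-aware handling of quotes, brackets, and code samples, which is part of why the `code` subset complicates simple boundary rules.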