---
license: cc-by-sa-4.0
dataset_info:
  features:
  - name: id
    dtype: int64
  - name: pageid
    dtype: int64
  - name: revid
    dtype: int64
  - name: title
    dtype: string
  - name: section
    struct:
    - name: dt
      dtype: string
    - name: h2
      dtype: string
    - name: h3
      dtype: string
    - name: h4
      dtype: string
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 7388520171
    num_examples: 10473325
  download_size: 3987399592
  dataset_size: 7388520171
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
language:
- ja
---
This dataset was built with a slightly modified version of the parsing and chunking method used in singletongue/wikipedia-utils. Pre-processing was performed with oshizo/wikipedia-utils, a fork of the original singletongue/wikipedia-utils repository.
The Wikipedia data was crawled between 2023/12/5 and 2023/12/8.
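The card above defines a single default config with one train split stored as Parquet shards under `data/train-*`, where each row carries the article metadata, a `section` struct with the heading hierarchy, and the chunked `text`. Below is a minimal loading sketch using the 🤗 `datasets` library; the repository id is a placeholder and should be replaced with the actual Hub id of this dataset.

```python
from datasets import load_dataset

# Placeholder repository id; substitute the actual Hub id of this dataset.
ds = load_dataset("user/japanese-wikipedia-paragraphs", split="train")

example = ds[0]
print(example["title"])

# "section" is a struct (returned as a dict) holding the heading hierarchy
# of the chunk: h2/h3/h4 headings and an optional definition-term (dt).
section = example["section"]
print(section["h2"], section["h3"], section["h4"], section["dt"])

print(example["text"])
```

Given the dataset size (roughly 7.4 GB, about 10.5 M examples), passing `streaming=True` to `load_dataset` may be preferable when a full local copy is not needed.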