---
dataset_info:
  features:
  - name: id
    dtype: string
  - name: ja
    dtype: string
  - name: en
    dtype: string
  splits:
  - name: train
    num_bytes: 17758809
    num_examples: 147876
  download_size: 10012915
  dataset_size: 17758809
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: cc-by-4.0
task_categories:
- translation
language:
- ja
- en
pretty_name: tanaka-corpus
size_categories:
- 100K<n<1M
---
An HF Datasets version of the Tanaka Corpus, a parallel corpus of Japanese-English sentence pairs.
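The dataset can be loaded directly from the Hub. A minimal sketch, with column names taken from the metadata above:

```python
from datasets import load_dataset

# Load the single "train" split pushed by the preprocessing script below.
dataset = load_dataset("hpprc/tanaka-corpus", split="train")

# Each example has "id", "ja", and "en" string fields.
print(dataset[0])
```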
## Preprocessing for HF Datasets

The original data was preprocessed as follows.
```bash
# Download and decompress the original Tanaka Corpus examples file from EDRDG.
wget ftp://ftp.edrdg.org/pub/Nihongo/examples.utf.gz
gunzip examples.utf.gz
```
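For reference, each example in `examples.utf` spans two lines: an `A:` line holding the Japanese sentence, a tab, the English translation, and an `#ID=` suffix, followed by a `B:` line with word-indexing annotations that the script below discards. The layout is roughly of this form (illustrative lines, not guaranteed verbatim):

```text
A: ムーリエルは２０歳になりました。	Muiriel is 20 now.#ID=1282_4707
B: は 二十歳{２０歳} になる[01]{になりました}
```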
```python
import re
from pathlib import Path

from more_itertools import chunked

import datasets as ds

data = []
with Path("examples.utf").open() as f:
    # Each example spans two lines: the "A:" line carries the sentence pair,
    # and the "B:" line (ignored here) carries word-indexing annotations.
    for row, _ in chunked(f, 2):
        # Extract the Japanese sentence, English translation, and example ID.
        ja, en, idx = re.findall(r"A: (.*?)\t(.*?)#ID=(.*$)", row)[0]
        data.append(
            {
                "id": idx,
                "ja": ja.strip(),
                "en": en.strip(),
            }
        )

dataset = ds.Dataset.from_list(data)
dataset.push_to_hub("hpprc/tanaka-corpus")
```
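Note that `push_to_hub` requires prior Hub authentication (for example via `huggingface-cli login`). As a quick sanity check before pushing, the parsed count can be compared against the split size recorded in the metadata above (a hedged sketch, reusing the `data` list from the script):

```python
# Expect 147876 pairs, matching num_examples in the metadata above.
assert len(data) == 147876, f"parsed {len(data)} pairs"
```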