---
dataset_info:
  features:
  - name: english
    dtype: string
  - name: kurdish
    dtype: string
  splits:
  - name: train
    num_bytes: 49594900
    num_examples: 148844
  download_size: 25408908
  dataset_size: 49594900
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
task_categories:
- translation
language:
- ku
- en
pretty_name: Kurdish - English Sentences
size_categories:
- 100K<n<1M
---
## Summary

The Kurdish-English subset extracted from Helsinki-NLP/opus-100 and reshaped into two columns (`english`, `kurdish`). Note: some pairs are low quality. Classifying the pairs and selecting only the high-quality ones would be a good follow-up project.
## Usage

```python
from datasets import load_dataset

ds = load_dataset("nazimali/kurdish-english-opus-100", split="train")
ds
```

```
Dataset({
    features: ['english', 'kurdish'],
    num_rows: 148844
})
```
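Since the summary notes some low-quality pairs, a simple length-ratio heuristic is one way to flag the most obvious mismatches (e.g. a long English sentence paired with a near-empty Kurdish string). This is a minimal sketch; the function name and threshold values are illustrative assumptions, not tuned for this dataset:

```python
# Sketch of a length-ratio filter for flagging obviously mismatched pairs.
# Thresholds below are illustrative assumptions, not tuned values.

def is_reasonable_pair(example, max_ratio=3.0, min_chars=2):
    """Keep pairs whose sides are non-trivial and roughly comparable in length."""
    en, ku = example["english"], example["kurdish"]
    if len(en) < min_chars or len(ku) < min_chars:
        return False
    ratio = max(len(en), len(ku)) / min(len(en), len(ku))
    return ratio <= max_ratio

# With the dataset loaded as above, this would drop flagged pairs:
# clean = ds.filter(is_reasonable_pair)

# Quick check on toy examples:
print(is_reasonable_pair({"english": "Hello, how are you?",
                          "kurdish": "Silav, tu çawa yî?"}))   # comparable lengths
print(is_reasonable_pair({"english": "A very long English sentence with many words.",
                          "kurdish": "Na"}))                   # extreme length mismatch
```

A character-length ratio is crude (Kurdish and English sentence lengths do not track each other exactly), but it is cheap to run over all 148,844 rows and is a common first pass before heavier filtering such as alignment scoring.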