---
dataset_info:
  features:
  - name: kurdish
    dtype: string
  - name: turkish
    dtype: string
  splits:
  - name: train
    num_bytes: 2029048
    num_examples: 7325
  download_size: 1166958
  dataset_size: 2029048
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: cc-by-sa-4.0
task_categories:
- translation
language:
- ku
- tr
size_categories:
- 1K<n<10K
---
## Summary
Extracted from Helsinki-NLP/bianet and reshaped into two columns: `kurdish` and `turkish`.
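
A minimal sketch of how such a reshaping could be done with `datasets`. The `ku_to_tr` config name and the nested `translation` dict keyed by language code are assumptions about the source layout, not a description of the exact script used:

```python
from datasets import load_dataset

# Load the Kurdish-Turkish pairs from the source corpus.
# Config name "ku_to_tr" and the nested "translation" dict are assumptions.
src = load_dataset("Helsinki-NLP/bianet", "ku_to_tr", split="train")

# Flatten each nested translation pair into two plain string columns.
def to_columns(example):
    return {
        "kurdish": example["translation"]["ku"],
        "turkish": example["translation"]["tr"],
    }

ds = src.map(to_columns, remove_columns=src.column_names)
print(ds)  # Dataset({features: ['kurdish', 'turkish'], ...})
```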
## Usage
```python
from datasets import load_dataset

ds = load_dataset("nazimali/kurdish-turkish-bianet-magazine", split="train")
ds
```

```
Dataset({
    features: ['kurdish', 'turkish'],
    num_rows: 7325
})
```
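
Each row is a plain dict with the two aligned sentences (the printed text depends on the data):

```python
# Inspect one aligned sentence pair.
pair = ds[0]
print(pair["kurdish"])
print(pair["turkish"])
```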