---
dataset_info:
  features:
    - name: correct
      dtype: string
    - name: incorrect
      dtype: string
  splits:
    - name: train
      num_bytes: 1211373359.8361242
      num_examples: 3161164
    - name: test
      num_bytes: 151421861.5819379
      num_examples: 395146
    - name: validation
      num_bytes: 151421861.5819379
      num_examples: 395146
  download_size: 752362217
  dataset_size: 1514217083
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: test
        path: data/test-*
      - split: validation
        path: data/validation-*
language:
  - dv
license: apache-2.0
pretty_name: DV Text Errors
---
# DV Text Errors

A Dhivehi text error correction dataset containing correct sentences paired with synthetically generated errors. The dataset is intended for testing Dhivehi language error correction models and tools.
## About Dataset
- Task: Text error correction
- Language: Dhivehi (dv)
## Dataset Structure

Input-output pairs of Dhivehi text:

- `correct`: original correct sentences
- `incorrect`: sentences with synthetic errors
## Statistics
- Train set: 3,161,164 examples (80%)
- Test set: 395,146 examples (10%)
- Validation set: 395,146 examples (10%)
Details:

- Unique words: 448,628
```json
{
  "total_examples": {
    "train": 3161164,
    "test": 395146,
    "validation": 395146
  },
  "avg_sentence_length": {
    "train": 11.968980097204701,
    "test": 11.961302910822836,
    "validation": 11.973824864733542
  },
  "error_distribution": {
    "min": 0,
    "max": 2411,
    "avg": 64.85144965588626
  }
}
```
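The card does not state how these figures were computed. As a rough check, the average sentence lengths above can be reproduced with a sketch like the one below, assuming length is counted in whitespace-separated words of the `correct` field. The exact metric behind `error_distribution` is not documented here, so the character-difference count shown is only one plausible reading:

```python
from datasets import load_dataset

ds = load_dataset("alakxender/dv-synthetic-errors")

for split in ("train", "test", "validation"):
    rows = ds[split]
    # Average sentence length in whitespace-separated words (assumption).
    avg_len = sum(len(r["correct"].split()) for r in rows) / len(rows)
    print(f"{split}: {len(rows)} examples, avg length {avg_len:.2f}")

def char_diffs(a: str, b: str) -> int:
    """Count mismatched positions plus the length difference (assumed metric)."""
    return sum(x != y for x, y in zip(a, b)) + abs(len(a) - len(b))

pair = ds["train"][0]
print(char_diffs(pair["incorrect"], pair["correct"]))
```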
## Usage
```python
from datasets import load_dataset

dataset = load_dataset("alakxender/dv-synthetic-errors")
```
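Once loaded, the splits can be inspected directly; a brief example, assuming the two-field schema described above:

```python
print(dataset)  # DatasetDict with train / test / validation splits

sample = dataset["train"][0]
print(sample["correct"])    # original sentence
print(sample["incorrect"])  # the same sentence with synthetic errors
```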
## Dataset Creation
Created using:

- Source: a collection of Dhivehi articles
- Error generation: character and diacritic substitutions
- Error rate: 30% corruption probability per word (a minimal sketch follows below)
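The generation script itself is not included with the card. The sketch below illustrates the described procedure under stated assumptions: the 30% rate applies independently to each whitespace-separated word, and substitutions draw from the Thaana vowel signs (fili, U+07A6 through U+07B0). The helper names are hypothetical:

```python
import random

# Thaana vowel signs (fili), U+07A6 .. U+07B0 (sukun). Treating these as
# the substitution pool is an assumption about the generation script.
FILI = [chr(c) for c in range(0x07A6, 0x07B1)]

def corrupt_word(word: str, rng: random.Random) -> str:
    """Substitute one character at a random position (hypothetical helper)."""
    if not word:
        return word
    i = rng.randrange(len(word))
    # Replace the chosen character with a different diacritic.
    pool = [f for f in FILI if f != word[i]]
    return word[:i] + rng.choice(pool) + word[i + 1:]

def corrupt_sentence(sentence: str, error_rate: float = 0.3, seed=None) -> str:
    """Corrupt each word independently with probability `error_rate`."""
    rng = random.Random(seed)
    return " ".join(
        corrupt_word(w, rng) if rng.random() < error_rate else w
        for w in sentence.split()
    )
```

Pairing each source sentence with `corrupt_sentence(sentence)` would yield rows in the same `correct`/`incorrect` shape as the dataset; a real generator would presumably also substitute consonants, which this sketch omits.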