---
pretty_name: Wikipedia Seed Machine Translation Data in Persian
size_categories:
- 1K<n<10K
license: cc-by-sa-4.0
language:
- fa
---
# Wikipedia Seed Machine Translation Data in Persian
## Description
Persian (Farsi) translation of more than 6,000 English sentences, originally drawn from Wikipedia articles and included in the [OLDI](https://oldi.org/) machine translation seed dataset.
## Workflow
<!-- What workflow was followed in creating this dataset? E.g., for a translated dataset, relevant information includes: what language the content was translated from, the number of translators, aggregate translator information (how many were native speakers in the target language, how many were highly proficient in the target languages, how many had professional translation experience), was any fraction of the data checked independently by third parties, etc. -->
I used the Gemini 1.5 Flash model over API to translate [the original seed dataset in English](https://github.com/openlanguagedata/seed), taking an iterative approach and experimenting with different prompt formats/templates, models, etc.
The next step was to feed the initial translated data back to the same model, presented as a "low-quality" translation, asking it to fix specific translation issues. I ran this refinement pass twice: the first pass did improve the quality, but the second did not. This was the system prompt for the refinement pass:
```
You are an IAPTI Professional Translator, expert in translating between English and Persian.
You will receive a sentence in English and a low-quality translation in Persian.
Your task is to ONLY respond with an improved translation provided in Persian.
Try to aim for a natural and localized translation the way an expert translator with knowledge of cultural contexts in Iran,
such as yourself does. Also where possible, always choose well-known words by the Persian speaking public, like native Persian speakers.
```
I have also browsed through **some parts** of the translated data myself. As a native speaker of the target language, highly proficient in English (C2 level on the CEFR scale), with a university degree in translation and more than 10 years of experience in translation and interpretation, I believe the translation is indeed of high quality. In fact, I am so pleasantly surprised by the quality that I have many ideas I want to implement with synthetic data to push the entire Persian NLP community forward!
## License
<!-- Contributions to existing datasets must be released under the same license as the parent dataset. For completely new contributions, we encourage the use of an open license. At a minimum, data should be made available for research use. Please specify the license using an SPDX license identifier. -->
CC-BY-SA-4.0
## Attribution
<!-- Who should be credited for creating this dataset? Feel free to include citation data in BibTeX format. -->
```bibtex
@misc{OLDI-Wikipedia-MTSeed-Persian,
  title={OLDI-Wikipedia-MTSeed-Persian},
  author={Reza Sayar},
  year={2024},
}
```
## Language codes
<!--
* If this language is assigned an ISO 639-3 individual language code (not a macrolanguage code), specify it here.
* Please specify the script this language is written in using an ISO 15924 code.
* If this language is assigned a Glottocode, please specify it here. -->
* ISO 639-3: pes
* ISO 15924: Arab
* Glottocode: west2369