---
task_categories:
  - translation
language:
  - it
  - lld
size_categories:
  - n<1K
---

# Dataset Card: Testset 1

## Overview

- **Dataset Name:** Testset 1
- **Source Paper:** "Rule-Based, Neural and LLM Back-Translation: Comparative Insights from a Variant of Ladin"
- **Description:** Testset 1 consists of parallel sentences in Ladin (Val Badia) and Italian.

## Dataset Structure

- **Files:**
  - `statut.parquet`: contains the Italian-Ladin (Val Badia) translations.

## Format

- **File Type:** Parquet
- **Encoding:** UTF-8
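
If you prefer to work with the Parquet file directly rather than through the `datasets` library, a minimal sketch is shown below. The column names are not documented here, so the code only inspects the schema rather than assuming specific fields.

```python
import pandas as pd
from huggingface_hub import hf_hub_download

# Download the Parquet file from the dataset repository, then read it
# directly (reading Parquet requires pyarrow or fastparquet).
path = hf_hub_download(
    repo_id="sfrontull/stiftungsparkasse-lld_valbadia-ita",
    filename="statut.parquet",
    repo_type="dataset",
)
df = pd.read_parquet(path)

# The column names are not documented on this card, so inspect the
# schema before relying on specific fields.
print(df.columns)
print(df.head())
```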

## Usage

```python
from datasets import load_dataset

data = load_dataset("sfrontull/stiftungsparkasse-lld_valbadia-ita")
```
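
The returned object is a `DatasetDict`; printing it shows the available splits and columns. The split name `"train"` in the sketch below is an assumption, so check the printed output first:

```python
# Show the available splits, column names, and row counts.
print(data)

# "train" is an assumed split name; substitute whichever split the
# printed DatasetDict actually contains.
print(data["train"].features)
print(data["train"][0])
```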

## Citation

If you use this dataset, please cite the following paper:

```bibtex
@inproceedings{frontull-moser-2024-rule,
    title = "Rule-Based, Neural and {LLM} Back-Translation: Comparative Insights from a Variant of {L}adin",
    author = "Frontull, Samuel  and
      Moser, Georg",
    editor = "Ojha, Atul Kr.  and
      Liu, Chao-hong  and
      Vylomova, Ekaterina  and
      Pirinen, Flammie  and
      Abbott, Jade  and
      Washington, Jonathan  and
      Oco, Nathaniel  and
      Malykh, Valentin  and
      Logacheva, Varvara  and
      Zhao, Xiaobing",
    booktitle = "Proceedings of the Seventh Workshop on Technologies for Machine Translation of Low-Resource Languages (LoResMT 2024)",
    month = aug,
    year = "2024",
    address = "Bangkok, Thailand",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.loresmt-1.13",
    pages = "128--138",
    abstract = "This paper explores the impact of different back-translation approaches on machine translation for Ladin, specifically the Val Badia variant. Given the limited amount of parallel data available for this language (only 18k Ladin-Italian sentence pairs), we investigate the performance of a multilingual neural machine translation model fine-tuned for Ladin-Italian. In addition to the available authentic data, we synthesise further translations by using three different models: a fine-tuned neural model, a rule-based system developed specifically for this language pair, and a large language model. Our experiments show that all approaches achieve comparable translation quality in this low-resource scenario, yet round-trip translations highlight differences in model performance.",
}
```