---
annotations_creators:
- expert-generated
language:
- en
- fr
language_creators:
- expert-generated
license:
- unknown
multilinguality:
- translation
pretty_name: scat
size_categories:
- 10K<n<100K
---

# Dataset Card for Supporting Context for Ambiguous Translations (SCAT)

## Dataset Structure

### Data Instances

A sample from the dataset looks like this:

```json
{
    "context_en": "The air, the water, the continents. So, what is your project about and what are its chances of winning? - Well, my project is awesome. - Oh, good. I took two plants, and I gave them water and sunlight.",
    "en": "But I gave one special attention to see if <p>it</p> would grow more.",
    "context_fr": "L'air, l'eau, les continents. Donc, quel est le sujet de ton projet et quelles sont ses chances de gagner ? - Bien, mon projet est impressionnant. - Oh, bien. J'ai pris deux <hon>plantes</hon>, et je leur ai donné de l'eau et du soleil.",
    "fr": "Mais j'ai donné une attention particulière à une pour voir si <p>elle</p> grandit plus.",
    "has_supporting_context": true
}
```
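This filtered version can be loaded directly with 🤗 Datasets. A minimal sketch, assuming the dataset is hosted on the Hub under the ID `gsarti/scat` (an assumption inferred from the maintainer contact below; adjust if the dataset lives elsewhere):

```python
from datasets import load_dataset

# Hub ID assumed from the maintainer's handle; adjust to the actual repository.
scat = load_dataset("gsarti/scat")

print(scat)              # split names and sizes
print(scat["train"][0])  # one example, with the fields described below
```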
grandit plus.", "has_supporting_context": true, } ``` In every example, the pronoun of interest and its translation are surrounded by `

...

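Since the tags are plain markers inside the text fields, both annotations can be recovered with simple pattern matching. Below is a minimal sketch; the `extract_annotations` helper is illustrative and not part of the dataset release:

```python
import re

# Patterns matching the tag format described above.
P_TAG = re.compile(r"<p>(.*?)</p>")
HON_TAG = re.compile(r"<hon>(.*?)</hon>")

def extract_annotations(example: dict) -> dict:
    """Pull the tagged pronouns and any supporting-context spans out of an example."""
    return {
        # The pronoun tags are guaranteed to occur in `en` and `fr`.
        "pronoun_en": P_TAG.search(example["en"]).group(1),
        "pronoun_fr": P_TAG.search(example["fr"]).group(1),
        # Supporting context may be absent, or may occur in any of the fields.
        "supporting_context": {
            field: HON_TAG.findall(example.get(field, ""))
            for field in ("context_en", "en", "context_fr", "fr")
        },
    }

sample = {
    "en": "But I gave one special attention to see if <p>it</p> would grow more.",
    "fr": "Mais j'ai donné une attention particulière à une pour voir si <p>elle</p> grandit plus.",
    "context_fr": "J'ai pris deux <hon>plantes</hon>, et je leur ai donné de l'eau et du soleil.",
}
print(extract_annotations(sample))
# {'pronoun_en': 'it', 'pronoun_fr': 'elle',
#  'supporting_context': {'context_en': [], 'en': [], 'context_fr': ['plantes'], 'fr': []}}
```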
### Data Splits

The dataset is split into `train`, `validation` and `test` sets. The following table reports the number of examples in the original dataset and in this filtered version, from which examples containing malformed tags were removed.

| Split        | # Examples (original) | # Examples (**this**) |
| :----------: | :-------------------: | :-------------------: |
| `train`      |         11471         |         11144         |
| `validation` |          145          |          144          |
| `test`       |         1000          |          973          |

## Dataset Creation

From the original paper:

> We recruited 20 freelance English-French translators on Upwork. We annotate examples from the contrastive test set by Lopes et al. (2020). This set includes 14K examples from the OpenSubtitles2018 dataset. Through our annotation effort, we obtain 14K examples of supporting context for pronoun anaphora resolution in ambiguous translations selected by professional human translators.

Please refer to the original article [Do Context-Aware Translation Models Pay the Right Attention?](https://aclanthology.org/2021.acl-long.65/) for additional information on dataset creation.

## Additional Information

### Dataset Curators

The original authors of SCAT are the curators of the original dataset release. For problems or updates on this 🤗 Datasets version, please contact [gabriele.sarti996@gmail.com](mailto:gabriele.sarti996@gmail.com).

### Licensing Information

The dataset license is unknown.

### Citation Information

Please cite the authors if you use this corpus in your work.

```bibtex
@inproceedings{yin-etal-2021-context,
    title = "Do Context-Aware Translation Models Pay the Right Attention?",
    author = "Yin, Kayo  and
      Fernandes, Patrick  and
      Pruthi, Danish  and
      Chaudhary, Aditi  and
      Martins, Andr{\'e} F. T.  and
      Neubig, Graham",
    booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
    month = aug,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.acl-long.65",
    doi = "10.18653/v1/2021.acl-long.65",
    pages = "788--801",
}
```