---
license: cc-by-4.0
task_categories:
- text-classification
task_ids:
- natural-language-inference
- multi-input-text-classification
language:
- fr
size_categories:
- n<1K
---

# Dataset Card for GQNLI-French

## Dataset Description

- **Homepage:** 
- **Repository:** https://github.com/mskandalis/gqnli-french
- **Paper:** 
- **Leaderboard:** 
- **Point of Contact:** 

### Dataset Summary

This repository contains a manually translated French version of the [GQNLI](https://github.com/ruixiangcui/GQNLI) challenge dataset, originally written in English. GQNLI is an evaluation corpus designed to test the generalized quantifier reasoning abilities of language models.

### Supported Tasks and Leaderboards

This dataset can be used for the task of Natural Language Inference (NLI), also known as Recognizing Textual Entailment (RTE), which is a sentence-pair classification task.

## Dataset Structure

### Data Fields

- `uid`: Index number.
- `premise`: The translated premise in the target language.
- `hypothesis`: The translated hypothesis in the target language.
- `label`: The classification label, with possible values 0 (`entailment`), 1 (`neutral`), 2 (`contradiction`).
- `label_text`: The classification label, with possible values `entailment` (0), `neutral` (1), `contradiction` (2).
- `premise_original`: The original premise from the English source dataset.
- `hypothesis_original`: The original hypothesis from the English source dataset.
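
The correspondence between `label` and `label_text` can be sketched as follows. The record shown is illustrative, not taken from the dataset; only the field names come from the schema above.

```python
# Mapping between the integer `label` and the string `label_text`,
# as described in the Data Fields section.
LABELS = {0: "entailment", 1: "neutral", 2: "contradiction"}

def label_to_text(label: int) -> str:
    """Return the string form of an integer NLI label."""
    return LABELS[label]

# Illustrative record following the schema (values are placeholders).
example = {
    "uid": 0,
    "premise": "...",            # French premise
    "hypothesis": "...",         # French hypothesis
    "label": 0,
    "label_text": label_to_text(0),
    "premise_original": "...",   # English source premise
    "hypothesis_original": "...",  # English source hypothesis
}
```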

### Data Splits

|     name    |entailment|neutral|contradiction|
|-------------|---------:|------:|------------:|
|     test    |    97    |  100  |     103     |

## Additional Information

### Citation Information

**BibTeX:**

````BibTeX
@inproceedings{cui-etal-2022-generalized-quantifiers,
    title = "Generalized Quantifiers as a Source of Error in Multilingual {NLU} Benchmarks",
    author = "Cui, Ruixiang  and
      Hershcovich, Daniel  and
      S{\o}gaard, Anders",
    booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
    month = jul,
    year = "2022",
    address = "Seattle, United States",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.naacl-main.359",
    doi = "10.18653/v1/2022.naacl-main.359",
    pages = "4875--4893",
    abstract = "Logical approaches to representing language have developed and evaluated computational models of quantifier words since the 19th century, but today{'}s NLU models still struggle to capture their semantics. We rely on Generalized Quantifier Theory for language-independent representations of the semantics of quantifier words, to quantify their contribution to the errors of NLU models. We find that quantifiers are pervasive in NLU benchmarks, and their occurrence at test time is associated with performance drops. Multilingual models also exhibit unsatisfying quantifier reasoning abilities, but not necessarily worse for non-English languages. To facilitate directly-targeted probing, we present an adversarial generalized quantifier NLI task (GQNLI) and show that pre-trained language models have a clear lack of robustness in generalized quantifier reasoning.",
}
````

**ACL:**

Maximos Skandalis, Richard Moot, Christian Retoré, and Simon Robillard. 2024. New datasets for automatic detection of textual entailment and of contradictions between sentences in French. In *Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)*, Turin, Italy. European Language Resources Association (ELRA) and International Committee on Computational Linguistics (ICCL).

And

Ruixiang Cui, Daniel Hershcovich, and Anders Søgaard. 2022. [Generalized Quantifiers as a Source of Error in Multilingual NLU Benchmarks](https://aclanthology.org/2022.naacl-main.359). In *Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies*, pages 4875–4893, Seattle, United States. Association for Computational Linguistics.

### Acknowledgements

This work was supported by the Defence Innovation Agency (AID) of the Directorate General of Armament (DGA) of the French Ministry of Armed Forces, and by the ICO, _Institut Cybersécurité Occitanie_, funded by Région Occitanie, France.