maximoss committed on
Commit 72d4167 · 1 Parent(s): a1eea0f

Update README.md

Files changed (1): README.md +2 -2
README.md CHANGED
@@ -108,7 +108,7 @@ This repository contains a French version of the [GQNLI](https://github.com/ruix
 
 ### Citation Information
 
-```
+````BibTeX
 @inproceedings{cui-etal-2022-generalized-quantifiers,
     title = "Generalized Quantifiers as a Source of Error in Multilingual {NLU} Benchmarks",
     author = "Cui, Ruixiang and
@@ -124,7 +124,7 @@ This repository contains a French version of the [GQNLI](https://github.com/ruix
     pages = "4875--4893",
     abstract = "Logical approaches to representing language have developed and evaluated computational models of quantifier words since the 19th century, but today{'}s NLU models still struggle to capture their semantics. We rely on Generalized Quantifier Theory for language-independent representations of the semantics of quantifier words, to quantify their contribution to the errors of NLU models. We find that quantifiers are pervasive in NLU benchmarks, and their occurrence at test time is associated with performance drops. Multilingual models also exhibit unsatisfying quantifier reasoning abilities, but not necessarily worse for non-English languages. To facilitate directly-targeted probing, we present an adversarial generalized quantifier NLI task (GQNLI) and show that pre-trained language models have a clear lack of robustness in generalized quantifier reasoning.",
 }
-```
+````
 
 ### Contributions