---
language:
- multilingual
- zh
- ja
- ar
- ko
- de
- fr
- es
- pt
- hi
- id
- it
- tr
- ru
- bn
- ur
- mr
- ta
- vi
- fa
- pl
- uk
- nl
- sv
- he
- sw
- ps
tags:
- zero-shot-classification
- text-classification
- nli
- pytorch
license: mit
metrics:
- accuracy
datasets:
- MoritzLaurer/multilingual-NLI-26lang-2mil7
- xnli
- multi_nli
- anli
- fever
- lingnli
- alisawuffles/WANLI
pipeline_tag: zero-shot-classification
#- text-classification
widget:
- text: "Angela Merkel ist eine Politikerin in Deutschland und Vorsitzende der CDU"
  candidate_labels: "politics, economy, entertainment, environment"

model-index: # info: https://github.com/huggingface/hub-docs/blame/main/modelcard.md
- name: mDeBERTa-v3-base-xnli-multilingual-nli-2mil7
  results:
  - task:
      type: text-classification
      name: Natural Language Inference
    dataset:
      type: multi_nli
      name: MultiNLI-matched
      split: validation_matched
    metrics:
    - type: accuracy
      value: 0.857
      verified: false
  - task:
      type: text-classification
      name: Natural Language Inference
    dataset:
      type: multi_nli
      name: MultiNLI-mismatched
      split: validation_mismatched
    metrics:
    - type: accuracy
      value: 0.856
      verified: false
  - task:
      type: text-classification
      name: Natural Language Inference
    dataset:
      type: anli
      name: ANLI-all
      split: test_r1+test_r2+test_r3
    metrics:
    - type: accuracy
      value: 0.537
      verified: false
  - task:
      type: text-classification
      name: Natural Language Inference
    dataset:
      type: anli
      name: ANLI-r3
      split: test_r3
    metrics:
    - type: accuracy
      value: 0.497
      verified: false
  - task:
      type: text-classification
      name: Natural Language Inference
    dataset:
      type: alisawuffles/WANLI
      name: WANLI
      split: test
    metrics:
    - type: accuracy
      value: 0.732
      verified: false
  - task:
      type: text-classification
      name: Natural Language Inference
    dataset:
      type: lingnli
      name: LingNLI
      split: test
    metrics:
    - type: accuracy
      value: 0.788
      verified: false
  - task:
      type: text-classification
      name: Natural Language Inference
    dataset:
      type: fever-nli
      name: fever-nli
      split: test
    metrics:
    - type: accuracy
      value: 0.761
      verified: false


---
# Model card for mDeBERTa-v3-base-xnli-multilingual-nli-2mil7

## Model description

This multilingual model can perform natural language inference (NLI) in 100 languages and is therefore also suitable for multilingual zero-shot classification. The underlying mDeBERTa-v3-base model was pre-trained by Microsoft on the [CC100 multilingual dataset](https://huggingface.co/datasets/cc100), which covers 100 languages. The model was then fine-tuned on the [XNLI dataset](https://huggingface.co/datasets/xnli) and on the [multilingual-NLI-26lang-2mil7 dataset](https://huggingface.co/datasets/MoritzLaurer/multilingual-NLI-26lang-2mil7). Together, the two datasets contain more than 2.7 million hypothesis-premise pairs in 27 languages spoken by more than 4 billion people.

As of December 2021, mDeBERTa-v3-base was the best-performing multilingual base-sized transformer model; it was introduced by Microsoft in [this paper](https://arxiv.org/pdf/2111.09543.pdf).


## Intended uses & limitations
#### How to use the model
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")

model_name = "MoritzLaurer/mDeBERTa-v3-base-xnli-multilingual-nli-2mil7"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device)  # move the model to the same device as the inputs

premise = "Angela Merkel ist eine Politikerin in Deutschland und Vorsitzende der CDU"
hypothesis = "Emmanuel Macron is the President of France"

inputs = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
output = model(inputs["input_ids"].to(device))
prediction = torch.softmax(output["logits"][0], -1).tolist()
label_names = ["entailment", "neutral", "contradiction"]
prediction = {name: round(float(pred) * 100, 1) for pred, name in zip(prediction, label_names)}
print(prediction)
```
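
Since the model is also intended for zero-shot classification (see the pipeline tag above), here is a minimal sketch using the `transformers` zero-shot pipeline; the German sequence and the candidate labels are taken from the widget example in the metadata:

```python
from transformers import pipeline

# The zero-shot pipeline turns each candidate label into an NLI hypothesis
# and ranks the labels by their entailment probability.
classifier = pipeline("zero-shot-classification", model="MoritzLaurer/mDeBERTa-v3-base-xnli-multilingual-nli-2mil7")

sequence = "Angela Merkel ist eine Politikerin in Deutschland und Vorsitzende der CDU"
candidate_labels = ["politics", "economy", "entertainment", "environment"]

print(classifier(sequence, candidate_labels))
```

Note that the sequence and the candidate labels do not need to be in the same language: thanks to the cross-lingual training pairs described below, English labels work for non-English texts.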

### Training data
This model was trained on the [multilingual-NLI-26lang-2mil7 dataset](https://huggingface.co/datasets/MoritzLaurer/multilingual-NLI-26lang-2mil7) and the [XNLI](https://huggingface.co/datasets/xnli) validation dataset.

The multilingual-NLI-26lang-2mil7 dataset contains 2 730 000 NLI hypothesis-premise pairs in 26 languages spoken by more than 4 billion people, with 105 000 text pairs per language. It is based on the English datasets [MultiNLI](https://huggingface.co/datasets/multi_nli), [Fever-NLI](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md), [ANLI](https://huggingface.co/datasets/anli), [LingNLI](https://arxiv.org/pdf/2104.07179.pdf) and [WANLI](https://huggingface.co/datasets/alisawuffles/WANLI), which were translated with the latest open-source machine translation models. The languages in the dataset are: ['ar', 'bn', 'de', 'es', 'fa', 'fr', 'he', 'hi', 'id', 'it', 'ja', 'ko', 'mr', 'nl', 'pl', 'ps', 'pt', 'ru', 'sv', 'sw', 'ta', 'tr', 'uk', 'ur', 'vi', 'zh'] (see [ISO language codes](https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes)). For more details, see the [datasheet](XXX). In addition, a sample of 105 000 text pairs was added for English, following the same sampling method as for the other languages, leading to 27 languages in total.

Moreover, for each language a random 10% of the hypothesis-premise pairs was added in which an English hypothesis is paired with a premise in the other language (and likewise for English premises and other-language hypotheses). This mix of languages within text pairs should enable users to formulate a hypothesis in English for a target text in another language.
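
A hypothetical sketch of that cross-lingual mixing step (illustrative only, not the author's actual preprocessing code; `pairs_en` and `pairs_tgt` are assumed to be row-aligned translated lists of (premise, hypothesis) tuples):

```python
import random

def add_cross_lingual_pairs(pairs_en, pairs_tgt, frac=0.10, seed=42):
    """Append mixed-language pairs for a random ~10% of the aligned rows."""
    rng = random.Random(seed)
    mixed = list(pairs_tgt)
    for (prem_en, hyp_en), (prem_tgt, hyp_tgt) in zip(pairs_en, pairs_tgt):
        if rng.random() < frac:
            mixed.append((prem_tgt, hyp_en))  # target-language premise, English hypothesis
            mixed.append((prem_en, hyp_tgt))  # English premise, target-language hypothesis
    return mixed
```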

The [XNLI](https://huggingface.co/datasets/xnli) validation set consists of 2490 professionally translated texts from English into 14 other languages (37 350 texts in total) (see [this paper](https://arxiv.org/pdf/1809.05053.pdf)). Note that XNLI also contains a training set of 14 machine-translated versions of the MultiNLI dataset for 14 languages, but this data was excluded due to quality issues with the machine translations from 2018.

Note that for evaluation purposes, three languages were excluded from the XNLI training data and only included in the test data: ["bg", "el", "th"]. This was done in order to test the performance of the model on languages it has not seen during NLI fine-tuning on 27 languages, but only during pre-training on 100 languages - see the evaluation metrics below.

The total training dataset had a size of 3 287 280 hypothesis-premise pairs.


### Training procedure
mDeBERTa-v3-base-xnli-multilingual-nli-2mil7 was trained using the Hugging Face trainer with the following hyperparameters.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./output",           # required by TrainingArguments; illustrative placeholder
    num_train_epochs=3,              # total number of training epochs
    learning_rate=2e-05,
    per_device_train_batch_size=32,  # batch size per device during training
    gradient_accumulation_steps=2,   # doubles the effective batch size per device to 64
    warmup_ratio=0.06,               # fraction of training steps used for learning rate warmup
    weight_decay=0.01,               # strength of weight decay
    fp16=False                       # mDeBERTa does not support FP16 (see "Debugging and issues" below)
)
```
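
For orientation, this is roughly how such arguments plug into the `Trainer` API; `model`, `tokenizer` and `train_dataset` are placeholders for the objects described above, not the author's exact training script:

```python
from transformers import Trainer

# Illustrative wiring only: train_dataset stands for the tokenized
# 3 287 280-pair NLI training set described in "Training data".
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```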

### Eval results
The model was evaluated on the XNLI test set in 15 languages (5010 texts per language, 75 150 in total) and on the English test sets of [MultiNLI](https://huggingface.co/datasets/multi_nli), [Fever-NLI](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md), [ANLI](https://huggingface.co/datasets/anli), [LingNLI](https://arxiv.org/pdf/2104.07179.pdf) and [WANLI](https://huggingface.co/datasets/alisawuffles/WANLI). Note that multilingual NLI models can classify NLI texts without receiving NLI training data in the specific language (cross-lingual transfer). This means that the model is also able to do NLI on the other 73 languages mDeBERTa was pre-trained on, but performance is most likely lower than for the languages seen during NLI fine-tuning. The performance on the languages ["bg", "el", "th"] in the table below is a good indicator of this cross-lingual transfer, as these languages were not included in the training data.

|XNLI subsets|ar|bg|de|el|en|es|fr|hi|ru|sw|th|tr|ur|vi|zh|
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
|Accuracy|0.794|0.822|0.824|0.809|0.871|0.832|0.823|0.769|0.803|0.746|0.786|0.792|0.744|0.793|0.803|
|Speed (text/sec, A100 GPU)|1344.0|1355.0|1472.0|1149.0|1697.0|1446.0|1278.0|1115.0|1380.0|1463.0|1713.0|1594.0|1189.0|877.0|1887.0|

|English datasets|mnli_test_m|mnli_test_mm|anli_test|anli_test_r3|fever_test|ling_test|wanli_test|
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
|Accuracy|0.857|0.856|0.537|0.497|0.761|0.788|0.732|
|Speed (text/sec, A100 GPU)|1000.0|1009.0|794.0|672.0|374.0|1177.0|1468.0|
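
As a rough sketch, a single-language XNLI accuracy number like the ones above could be reproduced along these lines (unbatched for clarity, so it is slow; assumes the `datasets` library and that the model's label order entailment/neutral/contradiction matches XNLI's):

```python
import torch
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "MoritzLaurer/mDeBERTa-v3-base-xnli-multilingual-nli-2mil7"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).eval()

xnli_de = load_dataset("xnli", "de", split="test")  # 5010 premise-hypothesis pairs

correct = 0
for example in xnli_de:
    inputs = tokenizer(example["premise"], example["hypothesis"], truncation=True, return_tensors="pt")
    with torch.no_grad():
        pred = model(**inputs).logits.argmax(-1).item()
    correct += int(pred == example["label"])  # 0=entailment, 1=neutral, 2=contradiction
print(f"XNLI-de accuracy: {correct / len(xnli_de):.3f}")
```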


Also note that if other multilingual models on the model hub claim performance of around 90% on languages other than English, the authors have most likely made a mistake during testing, since none of the latest papers shows a multilingual average performance of more than a few points above 80% on XNLI (see [here](https://arxiv.org/pdf/2111.09543.pdf) or [here](https://arxiv.org/pdf/1911.02116.pdf)).


## Limitations and bias
Please consult the original DeBERTa-V3 paper and the literature on the different NLI datasets for potential biases. Moreover, note that the multilingual-NLI-26lang-2mil7 dataset was created using machine translation, which reduces the quality of the data for a complex task like NLI. You can inspect the data via the Hugging Face [dataset viewer](https://huggingface.co/datasets/MoritzLaurer/multilingual-NLI-26lang-2mil7) for the languages you are interested in. Note that grammatical errors introduced by machine translation are less of an issue for zero-shot classification, for which grammar is less important.


## Citation
If this model or the dataset is useful for you, please cite the following article:
```bibtex
@article{laurer_less_2022,
  title = {Less {Annotating}, {More} {Classifying} – {Addressing} the {Data} {Scarcity} {Issue} of {Supervised} {Machine} {Learning} with {Deep} {Transfer} {Learning} and {BERT} - {NLI}},
  url = {https://osf.io/74b8k},
  language = {en-us},
  urldate = {2022-07-28},
  journal = {Preprint},
  author = {Laurer, Moritz and Atteveldt, Wouter van and Casas, Andreu Salleras and Welbers, Kasper},
  month = jun,
  year = {2022},
  note = {Publisher: Open Science Framework},
}
```


## Ideas for cooperation or questions?
For updates on new models and datasets, follow me on [Twitter](https://twitter.com/MoritzLaurer).
If you have questions or ideas for cooperation, contact me at m{dot}laurer{at}vu{dot}nl or on [LinkedIn](https://www.linkedin.com/in/moritz-laurer/).


## Debugging and issues
Note that DeBERTa-v3 was released in late 2021, and older versions of HF Transformers seem to have issues running the model (e.g. tokenizer errors). Using Transformers==4.13 or higher might solve some issues. Also note that mDeBERTa currently does not support FP16; see [this issue](https://github.com/microsoft/DeBERTa/issues/77).