MoritzLaurer HF staff committed on
Commit 4f1fdaf
1 Parent(s): 8b7d59e

Update README.md

Files changed (1)
  1. README.md +145 -4
README.md CHANGED
@@ -1,8 +1,149 @@
- |Datasets|mnli_m|mnli_mm|ar|bg|de|el|en|es|fr|hi|ru|sw|th|tr|ur|vi|zh|avg_xnli|
- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
- |Accuracy|0.782|0.8|0.687|0.742|0.719|0.723|0.789|0.748|0.741|0.691|0.714|0.642|0.699|0.696|0.664|0.723|0.721|0.713|
- |Speed (text/sec)|4430.0|4395.0|6210.0|6003.0|6053.0|5409.0|6531.0|6205.0|5615.0|5734.0|5970.0|6219.0|6289.0|6533.0|5851.0|5970.0|6798.0|6093.0|
+ ---
+ language:
+ - multilingual
+ - en
+ - ar
+ - bg
+ - de
+ - el
+ - es
+ - fr
+ - hi
+ - ru
+ - sw
+ - th
+ - tr
+ - ur
+ - vi
+ - zh
+ license: mit
+ tags:
+ - zero-shot-classification
+ - text-classification
+ - nli
+ - pytorch
+ metrics:
+ - accuracy
+ datasets:
+ - multi_nli
+ - xnli
+ pipeline_tag: zero-shot-classification
+ widget:
+ - text: "Angela Merkel ist eine Politikerin in Deutschland und Vorsitzende der CDU"
+   candidate_labels: "politics, economy, entertainment, environment"
+ ---
+ 
+ 
+ # Multilingual MiniLMv2-L6-mnli-xnli
+ ## Model description
+ This multilingual model can perform natural language inference (NLI) on 100+ languages and is therefore also
+ suitable for multilingual zero-shot classification. The underlying multilingual-MiniLM-L6 model was created
+ by Microsoft and was distilled from XLM-RoBERTa-large (see details [in the original paper](https://arxiv.org/pdf/2002.10957.pdf)
+ and newer information in [this repo](https://github.com/microsoft/unilm/tree/master/minilm)).
+ The model was then fine-tuned on the [XNLI dataset](https://huggingface.co/datasets/xnli), which contains hypothesis-premise pairs from 15 languages,
+ as well as the English [MNLI dataset](https://huggingface.co/datasets/multi_nli).
+ 
+ The main advantage of distilled models is that they are smaller (faster inference, lower memory requirements) than their teachers (XLM-RoBERTa-large).
+ The disadvantage is that they lose some of the performance of their larger teachers.
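+ 
+ As a rough illustration of the size difference (this snippet is not part of the original card), the sketch below compares the parameter counts of this model and its XLM-RoBERTa-large teacher; it assumes the repo id `MoritzLaurer/multilingual-MiniLMv2-L6-mnli-xnli` from the card title.
+ 
+ ```python
+ from transformers import AutoModel, AutoModelForSequenceClassification
+ 
+ # repo id assumed from the card title
+ student = AutoModelForSequenceClassification.from_pretrained("MoritzLaurer/multilingual-MiniLMv2-L6-mnli-xnli")
+ # the teacher model the MiniLM was distilled from
+ teacher = AutoModel.from_pretrained("xlm-roberta-large")
+ 
+ def n_params(model):
+     return sum(p.numel() for p in model.parameters())
+ 
+ print(f"student: {n_params(student) / 1e6:.0f}M parameters")
+ print(f"teacher: {n_params(teacher) / 1e6:.0f}M parameters")
+ ```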
+ 
+ 
+ ### How to use the model
+ #### Simple zero-shot classification pipeline
+ ```python
+ from transformers import pipeline
+ classifier = pipeline("zero-shot-classification", model="MoritzLaurer/multilingual-MiniLMv2-L6-mnli-xnli")
+ 
+ # example sequence in German: "Angela Merkel is a politician in Germany and leader of the CDU"
+ sequence_to_classify = "Angela Merkel ist eine Politikerin in Deutschland und Vorsitzende der CDU"
+ candidate_labels = ["politics", "economy", "entertainment", "environment"]
+ output = classifier(sequence_to_classify, candidate_labels, multi_label=False)
+ print(output)
+ ```
+ #### NLI use-case
+ ```python
+ from transformers import AutoTokenizer, AutoModelForSequenceClassification
+ import torch
+ device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
+ 
+ model_name = "MoritzLaurer/multilingual-MiniLMv2-L6-mnli-xnli"
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+ model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device)
+ 
+ # premise in German: "Angela Merkel is a politician in Germany and leader of the CDU"
+ premise = "Angela Merkel ist eine Politikerin in Deutschland und Vorsitzende der CDU"
+ hypothesis = "Emmanuel Macron is the President of France"
+ 
+ inputs = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt").to(device)
+ output = model(**inputs)
+ prediction = torch.softmax(output["logits"][0], -1).tolist()
+ label_names = ["entailment", "neutral", "contradiction"]
+ prediction = {name: round(float(pred) * 100, 1) for pred, name in zip(prediction, label_names)}
+ print(prediction)
+ ```
+ 
+ ### Training data
+ This model was trained on the XNLI development dataset and the MNLI train dataset.
+ The XNLI development set consists of 2490 professionally translated texts from English
+ to 14 other languages (37350 texts in total) (see [this paper](https://arxiv.org/pdf/1809.05053.pdf)).
+ Note that XNLI also contains a training set of 15 machine-translated versions of the MNLI dataset,
+ but due to quality issues with these machine translations, this model was only trained on the professional translations
+ from the XNLI development set and the original English MNLI training set (392 702 texts).
+ Not using machine-translated texts avoids overfitting the model to the 15 XNLI languages,
+ avoids catastrophic forgetting of the other languages the base model was pre-trained on,
+ and significantly reduces training costs.
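+ 
+ A minimal sketch (not part of the original card) of how such a training corpus could be assembled with the `datasets` library; the exact preprocessing used for this model is not documented here:
+ 
+ ```python
+ from datasets import load_dataset, concatenate_datasets
+ 
+ # original English MNLI training set (premise-hypothesis pairs with NLI labels)
+ mnli_train = load_dataset("multi_nli", split="train")
+ 
+ # professionally translated XNLI validation sets for the 15 XNLI languages
+ xnli_langs = ["ar", "bg", "de", "el", "en", "es", "fr", "hi", "ru", "sw", "th", "tr", "ur", "vi", "zh"]
+ xnli_dev = concatenate_datasets([load_dataset("xnli", lang, split="validation") for lang in xnli_langs])
+ 
+ print(len(mnli_train), len(xnli_dev))  # roughly 392,702 and 37,350 examples
+ ```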
+ 
+ ### Training procedure
+ The model was trained using the Hugging Face trainer with the following hyperparameters.
+ ```python
+ from transformers import TrainingArguments
+ 
+ training_args = TrainingArguments(
+     num_train_epochs=3,              # total number of training epochs
+     learning_rate=4e-05,
+     per_device_train_batch_size=64,  # batch size per device during training
+     per_device_eval_batch_size=120,  # batch size for evaluation
+     warmup_ratio=0.06,               # fraction of training steps used for learning rate warmup
+     weight_decay=0.01,               # strength of weight decay
+ )
+ ```
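+ 
+ The card does not show the full training loop; below is a hypothetical sketch of how these arguments could be passed to the Hugging Face `Trainer`, assuming `model` and `tokenizer` are loaded as above and `train_dataset` / `eval_dataset` are tokenized premise-hypothesis pairs with NLI labels (e.g. built from the data sketch above).
+ 
+ ```python
+ from transformers import Trainer
+ 
+ # hypothetical wiring of the hyperparameters above; `train_dataset` and
+ # `eval_dataset` are assumed to be tokenized NLI datasets
+ trainer = Trainer(
+     model=model,
+     args=training_args,
+     train_dataset=train_dataset,
+     eval_dataset=eval_dataset,
+     tokenizer=tokenizer,
+ )
+ trainer.train()
+ ```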
+ 
+ ### Eval results
+ The model was evaluated on the XNLI test set in 15 languages (5010 texts per language, 75150 in total).
+ Note that multilingual NLI models can classify NLI texts without having received NLI training data
+ in the specific language (cross-lingual transfer). This means that the model can also do NLI on
+ the other languages it was pre-trained on, but performance is most likely lower than for the languages available in XNLI.
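+ 
+ A minimal, unbatched sketch (not part of the original card) of how per-language accuracy on the XNLI test split can be computed; the repo id is again assumed from the card title, and XNLI uses the label mapping 0 = entailment, 1 = neutral, 2 = contradiction.
+ 
+ ```python
+ import torch
+ from datasets import load_dataset
+ from transformers import AutoTokenizer, AutoModelForSequenceClassification
+ 
+ model_name = "MoritzLaurer/multilingual-MiniLMv2-L6-mnli-xnli"  # assumed repo id
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+ model = AutoModelForSequenceClassification.from_pretrained(model_name).eval()
+ 
+ def xnli_accuracy(lang: str) -> float:
+     # 5010 premise-hypothesis pairs per language in the XNLI test split
+     data = load_dataset("xnli", lang, split="test")
+     correct = 0
+     for example in data:
+         inputs = tokenizer(example["premise"], example["hypothesis"], truncation=True, return_tensors="pt")
+         with torch.no_grad():
+             pred = model(**inputs).logits.argmax(-1).item()
+         correct += int(pred == example["label"])
+     return correct / len(data)
+ 
+ print("de accuracy:", xnli_accuracy("de"))
+ ```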
+ 
+ The average XNLI performance of multilingual-MiniLM-L6 reported in the paper is 0.68 ([see table 11](https://arxiv.org/pdf/2002.10957.pdf)).
+ This reimplementation has an average performance of 0.713.
+ This increase in performance is probably thanks to the addition of MNLI to the training data and to the fact that this model (multilingual-MiniLM-L6-v2) was distilled from
+ XLM-RoBERTa-large instead of XLM-RoBERTa-base.
+ 
+ |Datasets|avg_xnli|ar|bg|de|el|en|es|fr|hi|ru|sw|th|tr|ur|vi|zh|
+ | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
+ |Accuracy|0.713|0.687|0.742|0.719|0.723|0.789|0.748|0.741|0.691|0.714|0.642|0.699|0.696|0.664|0.723|0.721|
+ |Speed (text/sec)|6093.0|6210.0|6003.0|6053.0|5409.0|6531.0|6205.0|5615.0|5734.0|5970.0|6219.0|6289.0|6533.0|5851.0|5970.0|6798.0|
+ 
+ 
+ |Datasets|mnli_m|mnli_mm|
+ | :---: | :---: | :---: |
+ |Accuracy|0.782|0.8|
+ |Speed (text/sec)|4430.0|4395.0|
+ 
+ 
+ 
+ ## Limitations and bias
+ Please consult the original paper and the literature on the different NLI datasets for potential biases.
+ 
+ ## Citation
+ If you use this model, please cite: Laurer, Moritz, Wouter van Atteveldt, Andreu Salleras Casas, and Kasper Welbers. 2022.
+ ‘Less Annotating, More Classifying – Addressing the Data Scarcity Issue of Supervised Machine Learning with Deep Transfer Learning and BERT - NLI’.
+ Preprint, June. Open Science Framework. https://osf.io/74b8k.
+ 
+ ## Ideas for cooperation or questions?
+ If you have questions or ideas for cooperation, contact me at m{dot}laurer{at}vu{dot}nl or on [LinkedIn](https://www.linkedin.com/in/moritz-laurer/).