This is the converted model from Unbabel/wmt23-cometkiwi-da. The conversion (sketched below):

1) Kept only the weight/bias keys
2) Renamed the keys to match the original facebook/xlm-roberta-xl naming
3) Kept the layerwise_attention / estimator layers

Because of a hack in Hugging Face's code (parameter names containing "gamma" are rewritten when a checkpoint is loaded), I had to rename the "layerwise_attention.gamma" key to "layerwise_attention.gam".

I also changed the config.json key "layer_transformation" from sparsemax to softmax: because of a bug in COMET the flag is never passed, so the function actually used is the default, which is softmax.
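
For reference, here is a minimal sketch of those conversion steps. It is illustrative only: the exact key prefixes in the Unbabel checkpoint and the mapping to the facebook/xlm-roberta-xl naming are assumptions, not the exact script that was used.

```python
# Hypothetical conversion sketch -- key names and prefixes are illustrative.
import torch

ckpt = torch.load("checkpoints/model.ckpt", map_location="cpu")  # Unbabel checkpoint
state_dict = ckpt["state_dict"]

converted = {}
for key, value in state_dict.items():
    # 1) keep only weight/bias tensors (plus the layerwise-attention gamma)
    if not (key.endswith(".weight") or key.endswith(".bias") or "gamma" in key):
        continue
    # 2) rename to match the facebook/xlm-roberta-xl naming (illustrative mapping)
    new_key = key.replace("encoder.model.", "roberta.")
    # 3) work around HF rewriting parameter names that contain "gamma"
    new_key = new_key.replace("layerwise_attention.gamma", "layerwise_attention.gam")
    converted[new_key] = value

torch.save(converted, "pytorch_model.bin")
```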

Usage:

```python
from transformers import XLMRobertaTokenizerFast, AutoModel

tokenizer = XLMRobertaTokenizerFast.from_pretrained("vince62s/wmt23-cometkiwi-da-roberta-xl", trust_remote_code=True)
model = AutoModel.from_pretrained("vince62s/wmt23-cometkiwi-da-roberta-xl", trust_remote_code=True)

# The translation and the source are concatenated with </s></s> separators.
text = "Hello world!</s></s>Bonjour le monde"
encoded_text = tokenizer(text, return_tensors='pt')
print(encoded_text)
output = model(**encoded_text)
print(output[0])

# {'input_ids': tensor([[    0, 35378,  8999,    38,     2,     2, 84602,    95, 11146,     2]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1]])}
# tensor([[0.8217]], grad_fn=<AddmmBackward0>)
```
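
To score several pairs at once, the same pattern can be batched. This is a sketch, assuming the remote-code model pools correctly over padded batches; `score_pairs` is an illustrative helper, not part of the repository:

```python
def score_pairs(pairs, tokenizer, model):
    # Join each (mt, src) pair with the </s></s> separators used above.
    texts = [f"{mt}</s></s>{src}" for mt, src in pairs]
    encoded = tokenizer(texts, return_tensors="pt", padding=True)
    output = model(**encoded)
    return output[0].squeeze(-1).tolist()

print(score_pairs([("Hello world!", "Bonjour le monde")], tokenizer, model))
# [0.8217...]  (should match the single-sentence example above)
```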

Let's double-check against the original code from Unbabel COMET:

```python
from comet import load_from_checkpoint

model = load_from_checkpoint("/home/vincent/Downloads/cometkiwi23/checkpoints/model.ckpt")  # the original Unbabel checkpoint
data = [{"mt": "Hello world!", "src": "Bonjour le monde"}]
output = model.predict(data, gpus=0)
print(output)

# Prediction([('scores', [0.8216837048530579]), ('system_score', 0.8216837048530579)])
```

---
extra_gated_heading: Acknowledge license to accept the repository
extra_gated_button_content: Acknowledge license
pipeline_tag: translation
language:
- multilingual
- af
- am
- ar
- as
- az
- be
- bg
- bn
- br
- bs
- ca
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hr
- hu
- hy
- id
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- 'no'
- om
- or
- pa
- pl
- ps
- pt
- ro
- ru
- sa
- sd
- si
- sk
- sl
- so
- sq
- sr
- su
- sv
- sw
- ta
- te
- th
- tl
- tr
- ug
- uk
- ur
- uz
- vi
- xh
- yi
- zh
license: cc-by-nc-sa-4.0
library_name: transformers
---

This is a [COMET](https://github.com/Unbabel/COMET) quality estimation model: it receives a source sentence and its translation and returns a score that reflects the quality of the translation.

# Paper

[CometKiwi: IST-Unbabel 2022 Submission for the Quality Estimation Shared Task](https://aclanthology.org/2022.wmt-1.60) (Rei et al., WMT 2022)

# License:

cc-by-nc-sa-4.0

# Usage (unbabel-comet)

Using this model requires unbabel-comet to be installed:

```bash
pip install --upgrade pip  # ensures that pip is current 
pip install "unbabel-comet>=2.0.0"
```

Make sure you acknowledge its license and log in to the Hugging Face Hub before use:

```bash
huggingface-cli login
# or using an environment variable
huggingface-cli login --token $HUGGINGFACE_TOKEN
```
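
Logging in from Python works as well, via huggingface_hub:

```python
from huggingface_hub import login

login()  # or login(token="hf_...") to pass a token directly
```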

Then you can use it through comet CLI:

```bash
comet-score -s {source-input}.txt -t {translation-output}.txt --model Unbabel/wmt23-cometkiwi-da
```

Or using Python:

```python
from comet import download_model, load_from_checkpoint

model_path = download_model("Unbabel/wmt23-cometkiwi-da")
model = load_from_checkpoint(model_path)
data = [
    {
        "src": "The output signal provides constant sync so the display never glitches.",
        "mt": "Das Ausgangssignal bietet eine konstante Synchronisation, so dass die Anzeige nie stört."
    },
    {
        "src": "Kroužek ilustrace je určen všem milovníkům umění ve věku od 10 do 15 let.",
        "mt": "Кільце ілюстрації призначене для всіх любителів мистецтва у віці від 10 до 15 років."
    },
    {
        "src": "Mandela then became South Africa's first black president after his African National Congress party won the 1994 election.",
        "mt": "その後、1994年の選挙でアフリカ国民会議派が勝利し、南アフリカ初の黒人大統領となった。"
    }
]
model_output = model.predict(data, batch_size=8, gpus=1)
print(model_output)
```
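
As in the double-check example earlier in this card, `model_output` is a `Prediction` holding one segment-level score per input and their corpus-level average:

```python
print(model_output.scores)        # one score per segment
print(model_output.system_score)  # average of the segment scores
```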

# Intended uses

Our model is intended to be used for **reference-free MT evaluation**.

Given a source text and its translation, it outputs a single score between 0 and 1, where 1 represents a perfect translation.

# Languages Covered:

This model builds on top of XLM-R XL (facebook/xlm-roberta-xl), which covers the following languages:

Afrikaans, Albanian, Amharic, Arabic, Armenian, Assamese, Azerbaijani, Basque, Belarusian, Bengali, Bengali Romanized, Bosnian, Breton, Bulgarian, Burmese, Catalan, Chinese (Simplified), Chinese (Traditional), Croatian, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Hausa, Hebrew, Hindi, Hindi Romanized, Hungarian, Icelandic, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish (Kurmanji), Kyrgyz, Lao, Latin, Latvian, Lithuanian, Macedonian, Malagasy, Malay, Malayalam, Marathi, Mongolian, Nepali, Norwegian, Oriya, Oromo, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Sanskrit, Scottish Gaelic, Serbian, Sindhi, Sinhala, Slovak, Slovenian, Somali, Spanish, Sundanese, Swahili, Swedish, Tamil, Tamil Romanized, Telugu, Telugu Romanized, Thai, Turkish, Ukrainian, Urdu, Urdu Romanized, Uyghur, Uzbek, Vietnamese, Welsh, Western Frisian, Xhosa, Yiddish.

Thus, results for language pairs containing uncovered languages are unreliable!