Update README.md
README.md (CHANGED)
@@ -137,7 +137,7 @@ model-index:
     metrics:
     - name: Test BLEU (En->De)
       type: bleu
-      value:
+      value: 32.15
   - task:
       type: Automatic Speech Translation
       name: automatic-speech-translation
@@ -151,7 +151,7 @@ model-index:
     metrics:
     - name: Test BLEU (En->Es)
       type: bleu
-      value:
+      value: 22.66
   - task:
       type: Automatic Speech Translation
       name: automatic-speech-translation
@@ -179,7 +179,7 @@ model-index:
     metrics:
     - name: Test BLEU (De->En)
       type: bleu
-      value:
+      value: 33.98
   - task:
       type: Automatic Speech Translation
       name: automatic-speech-translation
@@ -193,7 +193,7 @@ model-index:
     metrics:
     - name: Test BLEU (Es->En)
       type: bleu
-      value:
+      value: 21.80
   - task:
       type: Automatic Speech Translation
       name: automatic-speech-translation
@@ -207,7 +207,7 @@ model-index:
     metrics:
     - name: Test BLEU (Fr->En)
       type: bleu
-      value:
+      value: 30.95
   - task:
       type: Automatic Speech Translation
       name: automatic-speech-translation
@@ -481,7 +481,7 @@ BLEU score on [FLEURS](https://huggingface.co/datasets/google/fleurs) test set:
 
 | **Version** | **Model** | **En->De** | **En->Es** | **En->Fr** | **De->En** | **Es->En** | **Fr->En** |
 |:-----------:|:---------:|:----------:|:----------:|:----------:|:----------:|:----------:|:----------:|
-| 1.23.0 | canary-1b | 32.
+| 1.23.0 | canary-1b | 32.15 | 22.66 | 40.76 | 33.98 | 21.80 | 30.95 |
 
 
 BLEU score on [COVOST-v2](https://github.com/facebookresearch/covost) test set:
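The hunks above all edit the `model-index` block in the card's YAML front matter. As a reading aid, here is a minimal sketch of pulling the newly filled-in BLEU values back out of the card; it assumes PyYAML is available, and `card_text` is a hypothetical variable holding the raw README text:

```python
# Sketch only: extract `bleu` metric values from the model card's
# `model-index` front matter. `card_text` (hypothetical) is the raw
# README, with the YAML block between the first two `---` markers.
import yaml

def bleu_metrics(card_text: str) -> dict:
    _, front_matter, _ = card_text.split("---", 2)
    meta = yaml.safe_load(front_matter)
    scores = {}
    for entry in meta.get("model-index", []):
        for result in entry.get("results", []):
            for metric in result.get("metrics", []):
                if metric.get("type") == "bleu":
                    scores[metric["name"]] = metric.get("value")
    return scores

# After this commit, the dict includes e.g.
# {'Test BLEU (En->De)': 32.15, 'Test BLEU (En->Es)': 22.66,
#  'Test BLEU (De->En)': 33.98, 'Test BLEU (Es->En)': 21.80,
#  'Test BLEU (Fr->En)': 30.95, ...}
```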
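For context on what the table reports: these are corpus-level BLEU scores on the FLEURS test split. The commit does not say which tooling produced them, but a minimal sacrebleu sketch of that kind of evaluation (file names hypothetical, one segment per line, aligned by index) looks like:

```python
# Sketch only: corpus-level BLEU over decoded test-set hypotheses.
# `hypotheses.txt` / `references.txt` are hypothetical files with one
# segment per line.
import sacrebleu

with open("hypotheses.txt", encoding="utf-8") as f:
    hyps = [line.strip() for line in f]
with open("references.txt", encoding="utf-8") as f:
    refs = [line.strip() for line in f]

result = sacrebleu.corpus_bleu(hyps, [refs])  # one reference per segment
print(f"BLEU = {result.score:.2f}")           # e.g. 32.15 for En->De
```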