Update README.md
README.md (CHANGED)

@@ -1,8 +1,20 @@
  ---
  datasets:
  - bigscience/xP3
  - mc4
- license: apache-2.0
  language:
  - af
  - am
@@ -105,809 +117,142 @@ language:
  - yo
  - zh
  - zu
  pipeline_tag: text2text-generation
- widget:
- - text: >-
-     一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。Would you rate the previous
-     review as positive, neutral or negative?
-   example_title: zh-en sentiment
- - text: 一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。你认为这句话的立场是赞扬、中立还是批评?
-   example_title: zh-zh sentiment
- - text: Suggest at least five related search terms to "Mạng neural nhân tạo".
-   example_title: vi-en query
- - text: >-
-     Proposez au moins cinq mots clés concernant «Réseau de neurones
-     artificiels».
-   example_title: fr-fr query
- - text: Explain in a sentence in Telugu what is backpropagation in neural networks.
-   example_title: te-en qa
- - text: Why is the sky blue?
-   example_title: en-en qa
- - text: >-
-     Write a fairy tale about a troll saving a princess from a dangerous dragon.
-     The fairy tale is a masterpiece that has achieved praise worldwide and its
-     moral is "Heroes Come in All Shapes and Sizes". Story (in Spanish):
-   example_title: es-en fable
- - text: >-
-     Write a fable about wood elves living in a forest that is suddenly invaded
-     by ogres. The fable is a masterpiece that has achieved praise worldwide and
-     its moral is "Violence is the last refuge of the incompetent". Fable (in
-     Hindi):
-   example_title: hi-en fable
- model-index:
- - name: mt0-base
-   results:
-   - task:
-       type: Coreference resolution
-     dataset:
-       type: winogrande
-       name: Winogrande XL (xl)
-       config: xl
-       split: validation
-       revision: a80f460359d1e9a67c006011c94de42a8759430c
-     metrics:
-     - type: Accuracy
-       value: 53.28
-   - task:
-       type: Coreference resolution
-     dataset:
-       type: Muennighoff/xwinograd
-       name: XWinograd (en)
-       config: en
-       split: test
-       revision: 9dd5ea5505fad86b7bedad667955577815300cee
-     metrics:
-     - type: Accuracy
-       value: 51.4
-   - task:
-       type: Coreference resolution
-     dataset:
-       type: Muennighoff/xwinograd
-       name: XWinograd (fr)
-       config: fr
-       split: test
-       revision: 9dd5ea5505fad86b7bedad667955577815300cee
-     metrics:
-     - type: Accuracy
-       value: 55.42
-   - task:
-       type: Coreference resolution
-     dataset:
-       type: Muennighoff/xwinograd
-       name: XWinograd (jp)
-       config: jp
-       split: test
-       revision: 9dd5ea5505fad86b7bedad667955577815300cee
-     metrics:
-     - type: Accuracy
-       value: 51.41
-   - task:
-       type: Coreference resolution
-     dataset:
-       type: Muennighoff/xwinograd
-       name: XWinograd (pt)
-       config: pt
-       split: test
-       revision: 9dd5ea5505fad86b7bedad667955577815300cee
-     metrics:
-     - type: Accuracy
-       value: 52.09
-   - task:
-       type: Coreference resolution
-     dataset:
-       type: Muennighoff/xwinograd
-       name: XWinograd (ru)
-       config: ru
-       split: test
-       revision: 9dd5ea5505fad86b7bedad667955577815300cee
-     metrics:
-     - type: Accuracy
-       value: 53.97
-   - task:
-       type: Coreference resolution
-     dataset:
-       type: Muennighoff/xwinograd
-       name: XWinograd (zh)
-       config: zh
-       split: test
-       revision: 9dd5ea5505fad86b7bedad667955577815300cee
-     metrics:
-     - type: Accuracy
-       value: 53.97
-   - task:
-       type: Natural language inference
-     dataset:
-       type: anli
-       name: ANLI (r1)
-       config: r1
-       split: validation
-       revision: 9dbd830a06fea8b1c49d6e5ef2004a08d9f45094
-     metrics:
-     - type: Accuracy
-       value: 33.3
-   - task:
-       type: Natural language inference
-     dataset:
-       type: anli
-       name: ANLI (r2)
-       config: r2
-       split: validation
-       revision: 9dbd830a06fea8b1c49d6e5ef2004a08d9f45094
-     metrics:
-     - type: Accuracy
-       value: 33.5
-   - task:
-       type: Natural language inference
-     dataset:
-       type: anli
-       name: ANLI (r3)
-       config: r3
-       split: validation
-       revision: 9dbd830a06fea8b1c49d6e5ef2004a08d9f45094
-     metrics:
-     - type: Accuracy
-       value: 33.33
-   - task:
-       type: Natural language inference
-     dataset:
-       type: super_glue
-       name: SuperGLUE (cb)
-       config: cb
-       split: validation
-       revision: 9e12063561e7e6c79099feb6d5a493142584e9e2
-     metrics:
-     - type: Accuracy
-       value: 50
-   - task:
-       type: Natural language inference
-     dataset:
-       type: super_glue
-       name: SuperGLUE (rte)
-       config: rte
-       split: validation
-       revision: 9e12063561e7e6c79099feb6d5a493142584e9e2
-     metrics:
-     - type: Accuracy
-       value: 66.43
-   - task:
-       type: Natural language inference
-     dataset:
-       type: xnli
-       name: XNLI (ar)
-       config: ar
-       split: validation
-       revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
-     metrics:
-     - type: Accuracy
-       value: 41.85
-   - task:
-       type: Natural language inference
-     dataset:
-       type: xnli
-       name: XNLI (bg)
-       config: bg
-       split: validation
-       revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
-     metrics:
-     - type: Accuracy
-       value: 42.33
-   - task:
-       type: Natural language inference
-     dataset:
-       type: xnli
-       name: XNLI (de)
-       config: de
-       split: validation
-       revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
-     metrics:
-     - type: Accuracy
-       value: 42.41
-   - task:
-       type: Natural language inference
-     dataset:
-       type: xnli
-       name: XNLI (el)
-       config: el
-       split: validation
-       revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
-     metrics:
-     - type: Accuracy
-       value: 40.92
-   - task:
-       type: Natural language inference
-     dataset:
-       type: xnli
-       name: XNLI (en)
-       config: en
-       split: validation
-       revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
-     metrics:
-     - type: Accuracy
-       value: 43.78
-   - task:
-       type: Natural language inference
-     dataset:
-       type: xnli
-       name: XNLI (es)
-       config: es
-       split: validation
-       revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
-     metrics:
-     - type: Accuracy
-       value: 41.93
-   - task:
-       type: Natural language inference
-     dataset:
-       type: xnli
-       name: XNLI (fr)
-       config: fr
-       split: validation
-       revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
-     metrics:
-     - type: Accuracy
-       value: 42.45
-   - task:
-       type: Natural language inference
-     dataset:
-       type: xnli
-       name: XNLI (hi)
-       config: hi
-       split: validation
-       revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
-     metrics:
-     - type: Accuracy
-       value: 39.76
-   - task:
-       type: Natural language inference
-     dataset:
-       type: xnli
-       name: XNLI (ru)
-       config: ru
-       split: validation
-       revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
-     metrics:
-     - type: Accuracy
-       value: 41.93
-   - task:
-       type: Natural language inference
-     dataset:
-       type: xnli
-       name: XNLI (sw)
-       config: sw
-       split: validation
-       revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
-     metrics:
-     - type: Accuracy
-       value: 39.68
-   - task:
-       type: Natural language inference
-     dataset:
-       type: xnli
-       name: XNLI (th)
-       config: th
-       split: validation
-       revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
-     metrics:
-     - type: Accuracy
-       value: 41.97
-   - task:
-       type: Natural language inference
-     dataset:
-       type: xnli
-       name: XNLI (tr)
-       config: tr
-       split: validation
-       revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
-     metrics:
-     - type: Accuracy
-       value: 40.28
-   - task:
-       type: Natural language inference
-     dataset:
-       type: xnli
-       name: XNLI (ur)
-       config: ur
-       split: validation
-       revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
-     metrics:
-     - type: Accuracy
-       value: 38.71
-   - task:
-       type: Natural language inference
-     dataset:
-       type: xnli
-       name: XNLI (vi)
-       config: vi
-       split: validation
-       revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
-     metrics:
-     - type: Accuracy
-       value: 40.2
-   - task:
-       type: Natural language inference
-     dataset:
-       type: xnli
-       name: XNLI (zh)
-       config: zh
-       split: validation
-       revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
-     metrics:
-     - type: Accuracy
-       value: 42.49
-   - task:
-       type: Sentence completion
-     dataset:
-       type: story_cloze
-       name: StoryCloze (2016)
-       config: '2016'
-       split: validation
-       revision: e724c6f8cdf7c7a2fb229d862226e15b023ee4db
-     metrics:
-     - type: Accuracy
-       value: 57.83
-   - task:
-       type: Sentence completion
-     dataset:
-       type: super_glue
-       name: SuperGLUE (copa)
-       config: copa
-       split: validation
-       revision: 9e12063561e7e6c79099feb6d5a493142584e9e2
-     metrics:
-     - type: Accuracy
-       value: 55
-   - task:
-       type: Sentence completion
-     dataset:
-       type: xcopa
-       name: XCOPA (et)
-       config: et
-       split: validation
-       revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
-     metrics:
-     - type: Accuracy
-       value: 52
-   - task:
-       type: Sentence completion
-     dataset:
-       type: xcopa
-       name: XCOPA (ht)
-       config: ht
-       split: validation
-       revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
-     metrics:
-     - type: Accuracy
-       value: 60
-   - task:
-       type: Sentence completion
-     dataset:
-       type: xcopa
-       name: XCOPA (id)
-       config: id
-       split: validation
-       revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
-     metrics:
-     - type: Accuracy
-       value: 55
-   - task:
-       type: Sentence completion
-     dataset:
-       type: xcopa
-       name: XCOPA (it)
-       config: it
-       split: validation
-       revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
-     metrics:
-     - type: Accuracy
-       value: 61
-   - task:
-       type: Sentence completion
-     dataset:
-       type: xcopa
-       name: XCOPA (qu)
-       config: qu
-       split: validation
-       revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
-     metrics:
-     - type: Accuracy
-       value: 55
-   - task:
-       type: Sentence completion
-     dataset:
-       type: xcopa
-       name: XCOPA (sw)
-       config: sw
-       split: validation
-       revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
-     metrics:
-     - type: Accuracy
-       value: 59
-   - task:
-       type: Sentence completion
-     dataset:
-       type: xcopa
-       name: XCOPA (ta)
-       config: ta
-       split: validation
-       revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
-     metrics:
-     - type: Accuracy
-       value: 63
-   - task:
-       type: Sentence completion
-     dataset:
-       type: xcopa
-       name: XCOPA (th)
-       config: th
-       split: validation
-       revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
-     metrics:
-     - type: Accuracy
-       value: 55
-   - task:
-       type: Sentence completion
-     dataset:
-       type: xcopa
-       name: XCOPA (tr)
-       config: tr
-       split: validation
-       revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
-     metrics:
-     - type: Accuracy
-       value: 60
-   - task:
-       type: Sentence completion
-     dataset:
-       type: xcopa
-       name: XCOPA (vi)
-       config: vi
-       split: validation
-       revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
-     metrics:
-     - type: Accuracy
-       value: 52
-   - task:
-       type: Sentence completion
-     dataset:
-       type: xcopa
-       name: XCOPA (zh)
-       config: zh
-       split: validation
-       revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
-     metrics:
-     - type: Accuracy
-       value: 58
-   - task:
-       type: Sentence completion
-     dataset:
-       type: Muennighoff/xstory_cloze
-       name: XStoryCloze (ar)
-       config: ar
-       split: validation
-       revision: 8bb76e594b68147f1a430e86829d07189622b90d
-     metrics:
-     - type: Accuracy
-       value: 54.53
-   - task:
-       type: Sentence completion
-     dataset:
-       type: Muennighoff/xstory_cloze
-       name: XStoryCloze (es)
-       config: es
-       split: validation
-       revision: 8bb76e594b68147f1a430e86829d07189622b90d
-     metrics:
-     - type: Accuracy
-       value: 55.39
-   - task:
-       type: Sentence completion
-     dataset:
-       type: Muennighoff/xstory_cloze
-       name: XStoryCloze (eu)
-       config: eu
-       split: validation
-       revision: 8bb76e594b68147f1a430e86829d07189622b90d
-     metrics:
-     - type: Accuracy
-       value: 53.67
-   - task:
-       type: Sentence completion
-     dataset:
-       type: Muennighoff/xstory_cloze
-       name: XStoryCloze (hi)
-       config: hi
-       split: validation
-       revision: 8bb76e594b68147f1a430e86829d07189622b90d
-     metrics:
-     - type: Accuracy
-       value: 55
-   - task:
-       type: Sentence completion
-     dataset:
-       type: Muennighoff/xstory_cloze
-       name: XStoryCloze (id)
-       config: id
-       split: validation
-       revision: 8bb76e594b68147f1a430e86829d07189622b90d
-     metrics:
-     - type: Accuracy
-       value: 57.38
-   - task:
-       type: Sentence completion
-     dataset:
-       type: Muennighoff/xstory_cloze
-       name: XStoryCloze (my)
-       config: my
-       split: validation
-       revision: 8bb76e594b68147f1a430e86829d07189622b90d
-     metrics:
-     - type: Accuracy
-       value: 52.75
-   - task:
-       type: Sentence completion
-     dataset:
-       type: Muennighoff/xstory_cloze
-       name: XStoryCloze (ru)
-       config: ru
-       split: validation
-       revision: 8bb76e594b68147f1a430e86829d07189622b90d
-     metrics:
-     - type: Accuracy
-       value: 53.87
-   - task:
-       type: Sentence completion
-     dataset:
-       type: Muennighoff/xstory_cloze
-       name: XStoryCloze (sw)
-       config: sw
-       split: validation
-       revision: 8bb76e594b68147f1a430e86829d07189622b90d
-     metrics:
-     - type: Accuracy
-       value: 54.4
-   - task:
-       type: Sentence completion
-     dataset:
-       type: Muennighoff/xstory_cloze
-       name: XStoryCloze (te)
-       config: te
-       split: validation
-       revision: 8bb76e594b68147f1a430e86829d07189622b90d
-     metrics:
-     - type: Accuracy
-       value: 56.92
-   - task:
-       type: Sentence completion
-     dataset:
-       type: Muennighoff/xstory_cloze
-       name: XStoryCloze (zh)
-       config: zh
-       split: validation
-       revision: 8bb76e594b68147f1a430e86829d07189622b90d
-     metrics:
-     - type: Accuracy
-       value: 55.72
  ---
-
- ![xmtf](https://github.com/bigscience-workshop/xmtf/blob/master/xmtf_banner.png?raw=true)
-
- # Table of Contents
-
- 1. [Model Summary](#model-summary)
- 2. [Use](#use)
- 3. [Limitations](#limitations)
- 4. [Training](#training)
- 5. [Evaluation](#evaluation)
- 7. [Citation](#citation)
-
- # Model Summary
-
- > We present BLOOMZ & mT0, a family of models capable of following human instructions in dozens of languages zero-shot. We finetune BLOOM & mT5 pretrained multilingual language models on our crosslingual task mixture (xP3) and find our resulting models capable of crosslingual generalization to unseen tasks & languages.
-
- - **Repository:** [bigscience-workshop/xmtf](https://github.com/bigscience-workshop/xmtf)
- - **Paper:** [Crosslingual Generalization through Multitask Finetuning](https://arxiv.org/abs/2211.01786)
- - **Point of Contact:** [Niklas Muennighoff](mailto:niklas@hf.co)
- - **Languages:** Refer to [mc4](https://huggingface.co/datasets/mc4) for pretraining & [xP3](https://huggingface.co/bigscience/xP3) for finetuning language proportions. It understands both pretraining & finetuning languages.
- - **BLOOMZ & mT0 Model Family:**
-
- <div class="max-w-full overflow-auto">
- <table>
- <tr>
- <th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/bigscience/xP3>xP3</a>. Recommended for prompting in English.
- </tr>
- <tr>
- <td>Parameters</td>
- <td>300M</td>
- <td>580M</td>
- <td>1.2B</td>
- <td>3.7B</td>
- <td>13B</td>
- <td>560M</td>
- <td>1.1B</td>
- <td>1.7B</td>
- <td>3B</td>
- <td>7.1B</td>
- <td>176B</td>
- </tr>
- <tr>
- <td>Finetuned Model</td>
- <td><a href=https://huggingface.co/bigscience/mt0-small>mt0-small</a></td>
- <td><a href=https://huggingface.co/bigscience/mt0-base>mt0-base</a></td>
- <td><a href=https://huggingface.co/bigscience/mt0-large>mt0-large</a></td>
- <td><a href=https://huggingface.co/bigscience/mt0-xl>mt0-xl</a></td>
- <td><a href=https://huggingface.co/bigscience/mt0-xxl>mt0-xxl</a></td>
- <td><a href=https://huggingface.co/bigscience/bloomz-560m>bloomz-560m</a></td>
- <td><a href=https://huggingface.co/bigscience/bloomz-1b1>bloomz-1b1</a></td>
- <td><a href=https://huggingface.co/bigscience/bloomz-1b7>bloomz-1b7</a></td>
- <td><a href=https://huggingface.co/bigscience/bloomz-3b>bloomz-3b</a></td>
- <td><a href=https://huggingface.co/bigscience/bloomz-7b1>bloomz-7b1</a></td>
- <td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a></td>
- </tr>
- <tr>
- <th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/bigscience/xP3mt>xP3mt</a>. Recommended for prompting in non-English.</th>
- </tr>
- <tr>
- <td>Finetuned Model</td>
- <td></td>
- <td></td>
- <td></td>
- <td></td>
- <td><a href=https://huggingface.co/bigscience/mt0-xxl-mt>mt0-xxl-mt</a></td>
- <td></td>
- <td></td>
- <td></td>
- <td></td>
- <td><a href=https://huggingface.co/bigscience/bloomz-7b1-mt>bloomz-7b1-mt</a></td>
- <td><a href=https://huggingface.co/bigscience/bloomz-mt>bloomz-mt</a></td>
- </tr>
- <tr>
- <th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/Muennighoff/P3>P3</a>. Released for research purposes only. Strictly inferior to above models!</th>
- </tr>
- <tr>
- <td>Finetuned Model</td>
- <td></td>
- <td></td>
- <td></td>
- <td></td>
- <td><a href=https://huggingface.co/bigscience/mt0-xxl-p3>mt0-xxl-p3</a></td>
- <td></td>
- <td></td>
- <td></td>
- <td></td>
- <td><a href=https://huggingface.co/bigscience/bloomz-7b1-p3>bloomz-7b1-p3</a></td>
- <td><a href=https://huggingface.co/bigscience/bloomz-p3>bloomz-p3</a></td>
- </tr>
- <tr>
- <th colspan="12">Original pretrained checkpoints. Not recommended.</th>
- </tr>
- <tr>
- <td>Pretrained Model</td>
- <td><a href=https://huggingface.co/google/mt5-small>mt5-small</a></td>
- <td><a href=https://huggingface.co/google/mt5-base>mt5-base</a></td>
- <td><a href=https://huggingface.co/google/mt5-large>mt5-large</a></td>
- <td><a href=https://huggingface.co/google/mt5-xl>mt5-xl</a></td>
- <td><a href=https://huggingface.co/google/mt5-xxl>mt5-xxl</a></td>
- <td><a href=https://huggingface.co/bigscience/bloom-560m>bloom-560m</a></td>
- <td><a href=https://huggingface.co/bigscience/bloom-1b1>bloom-1b1</a></td>
- <td><a href=https://huggingface.co/bigscience/bloom-1b7>bloom-1b7</a></td>
- <td><a href=https://huggingface.co/bigscience/bloom-3b>bloom-3b</a></td>
- <td><a href=https://huggingface.co/bigscience/bloom-7b1>bloom-7b1</a></td>
- <td><a href=https://huggingface.co/bigscience/bloom>bloom</a></td>
- </tr>
- </table>
- </div>
-
- # Use
-
- ## Intended use
-
- We recommend using the model to perform tasks expressed in natural language. For example, given the prompt "*Translate to English: Je t’aime.*", the model will most likely answer "*I love you.*". Some prompt ideas from our paper:
- - 一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。你认为这句话的立场是赞扬、中立还是批评?
- - Suggest at least five related search terms to "Mạng neural nhân tạo".
- - Write a fairy tale about a troll saving a princess from a dangerous dragon. The fairy tale is a masterpiece that has achieved praise worldwide and its moral is "Heroes Come in All Shapes and Sizes". Story (in Spanish):
- - Explain in a sentence in Telugu what is backpropagation in neural networks.
-
- **Feel free to share your generations in the Community tab!**
-
- ## How to use
-
- ### CPU
-
- <details>
- <summary> Click to expand </summary>
-
- ```python
- # pip install -q transformers
- from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
-
- checkpoint = "bigscience/mt0-base"
-
- tokenizer = AutoTokenizer.from_pretrained(checkpoint)
- model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)
-
- inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt")
- outputs = model.generate(inputs)
- print(tokenizer.decode(outputs[0]))
- ```
-
- </details>
-
- ### GPU
-
- <details>
- <summary> Click to expand </summary>
-
- ```python
- # pip install -q transformers accelerate
- from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
-
- checkpoint = "bigscience/mt0-base"
-
- tokenizer = AutoTokenizer.from_pretrained(checkpoint)
- model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint, torch_dtype="auto", device_map="auto")
-
- inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt").to("cuda")
- outputs = model.generate(inputs)
- print(tokenizer.decode(outputs[0]))
- ```
-
- </details>
-
- ### GPU in 8bit
-
- <details>
- <summary> Click to expand </summary>
-
- ```python
- # pip install -q transformers accelerate bitsandbytes
- from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
-
- checkpoint = "bigscience/mt0-base"
-
- tokenizer = AutoTokenizer.from_pretrained(checkpoint)
- model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint, device_map="auto", load_in_8bit=True)
-
- inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt").to("cuda")
- outputs = model.generate(inputs)
- print(tokenizer.decode(outputs[0]))
  ```
  ```
@@ -1,8 +1,20 @@
  ---
+ name: bloom-mt0-base
+ license: apache-2.0
+ tags:
+ - bloom
+ - bigscience
+ - natural-language
+ - google
+ type:
+ - 2GB
+ - bf16
+ - mt5
+ config:
+ resolutions:
  datasets:
  - bigscience/xP3
  - mc4
  language:
  - af
  - am
@@ -105,809 +117,142 @@ language:
  - yo
  - zh
  - zu
+ size: 2329638768
+ use: natural-language
+ shortcomings:
+ - context
+ - missing-punctuation
+ sources: https://arxiv.org/abs/2211.01786
+ funded_by:
+ train_hardware: TPUv4-64
  pipeline_tag: text2text-generation
  ---
  ```
+ name: bloom-mt0-base
+ license: apache-2.0
+ tags:
+ - bloom
+ - bigscience
+ - natural-language
+ - google
+ type:
+ - 2GB
+ - bf16
+ - mt5
+ config:
+ resolutions:
+ datasets:
+ - bigscience/xP3
+ - mc4
+ language:
+ - af
+ - am
+ - ar
+ - az
+ - be
+ - bg
+ - bn
+ - ca
+ - ceb
+ - co
+ - cs
+ - cy
+ - da
+ - de
+ - el
+ - en
+ - eo
+ - es
+ - et
+ - eu
+ - fa
+ - fi
+ - fil
+ - fr
+ - fy
+ - ga
+ - gd
+ - gl
+ - gu
+ - ha
+ - haw
+ - hi
+ - hmn
+ - ht
+ - hu
+ - hy
+ - ig
+ - is
+ - it
+ - iw
+ - ja
+ - jv
+ - ka
+ - kk
+ - km
+ - kn
+ - ko
+ - ku
+ - ky
+ - la
+ - lb
+ - lo
+ - lt
+ - lv
+ - mg
+ - mi
+ - mk
+ - ml
+ - mn
+ - mr
+ - ms
+ - mt
+ - my
+ - ne
+ - nl
+ - 'no'
+ - ny
+ - pa
+ - pl
+ - ps
+ - pt
+ - ro
+ - ru
+ - sd
+ - si
+ - sk
+ - sl
+ - sm
+ - sn
+ - so
+ - sq
+ - sr
+ - st
+ - su
+ - sv
+ - sw
+ - ta
+ - te
+ - tg
+ - th
+ - tr
+ - uk
+ - und
+ - ur
+ - uz
+ - vi
+ - xh
+ - yi
+ - yo
+ - zh
+ - zu
+ size: 2329638768
+ use: natural-language
+ shortcomings:
+ - context
+ - missing-punctuation
+ sources: https://arxiv.org/abs/2211.01786
+ funded_by:
+ train_hardware: TPUv4-64
+ pipeline_tag: text2text-generation
  ```