Commit fda0c4d (parent f394f49) by Ali-C137: Update README.md

Files changed (1): README.md (+55 -1)
@@ -21,6 +21,60 @@ configs:
   - split: train
     path: data/train-*
 ---
-# Dataset Card for "ArabicDarija-xP3x"
+# Dataset Card for "ArabicDarija-xP3x", part of "xP3x" by [Muennighoff](https://huggingface.co/Muennighoff)
+
+## Below is part of the original dataset card
+## Dataset Description
+
+- **Repository:** https://github.com/bigscience-workshop/xmtf
+- **Paper:** [Crosslingual Generalization through Multitask Finetuning](https://arxiv.org/abs/2211.01786)
+- **Point of Contact:** [Niklas Muennighoff](mailto:n.muennighoff@gmail.com)
+
+### Dataset Summary
+
+> xP3x (Crosslingual Public Pool of Prompts eXtended) is a collection of prompts & datasets across 277 languages & 16 NLP tasks. It contains all of xP3 + much more! It is used for training future contenders of mT0 & BLOOMZ at project Aya @[C4AI](https://cohere.for.ai/) 🧡
+>
+- **Creation:** The dataset can be recreated using instructions available [here](https://github.com/bigscience-workshop/xmtf#create-xp3) together with the file in this repository named `xp3x_create.py`. We provide this version to save processing time.
+- **Languages:** 277
+- **xP3 Dataset Family:**
+
+<table>
+  <tr>
+    <th>Name</th>
+    <th>Explanation</th>
+    <th>Example models</th>
+  </tr>
+  <tr>
+    <td><a href=https://huggingface.co/datasets/Muennighoff/xP3x>xP3x</a></td>
+    <td>Mixture of 17 tasks in 277 languages with English prompts</td>
+    <td>WIP - Join us at Project Aya @<a href=https://cohere.for.ai/>C4AI</a> to help!</td>
+  </tr>
+  <tr>
+    <td><a href=https://huggingface.co/datasets/bigscience/xP3>xP3</a></td>
+    <td>Mixture of 13 training tasks in 46 languages with English prompts</td>
+    <td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a> & <a href=https://huggingface.co/bigscience/mt0-xxl>mt0-xxl</a></td>
+  </tr>
+  <tr>
+    <td><a href=https://huggingface.co/datasets/bigscience/xP3mt>xP3mt</a></td>
+    <td>Mixture of 13 training tasks in 46 languages with prompts in 20 languages (machine-translated from English)</td>
+    <td><a href=https://huggingface.co/bigscience/bloomz-mt>bloomz-mt</a> & <a href=https://huggingface.co/bigscience/mt0-xxl-mt>mt0-xxl-mt</a></td>
+  </tr>
+  <tr>
+    <td><a href=https://huggingface.co/datasets/bigscience/xP3all>xP3all</a></td>
+    <td>xP3 + evaluation datasets adding an additional 3 tasks for a total of 16 tasks in 46 languages with English prompts</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td><a href=https://huggingface.co/datasets/bigscience/xP3megds>xP3megds</a></td>
+    <td><a href=https://github.com/bigscience-workshop/Megatron-DeepSpeed>Megatron-DeepSpeed</a> processed version of xP3</td>
+    <td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a></td>
+  </tr>
+  <tr>
+    <td><a href=https://huggingface.co/datasets/Muennighoff/P3>P3</a></td>
+    <td>Reprocessed version of the English-only <a href=https://huggingface.co/datasets/bigscience/P3>P3</a> with 8 training tasks</td>
+    <td><a href=https://huggingface.co/bigscience/bloomz-p3>bloomz-p3</a> & <a href=https://huggingface.co/bigscience/mt0-xxl-p3>mt0-xxl-p3</a></td>
+  </tr>
+</table>
+
 
 [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)