Improve metadata in dataset card
#6
by albertvillanova - opened

- README.md +94 -16
- dataset_infos.json +0 -1
README.md
CHANGED
@@ -30,8 +30,6 @@ language:
 - vi
 - zh
 license:
-- cc-by-nc-4.0
-- cc-by-sa-4.0
 - other
 multilinguality:
 - multilingual
@@ -786,10 +784,22 @@ config_names:
 
 - **Homepage:** [XGLUE homepage](https://microsoft.github.io/XGLUE/)
 - **Paper:** [XGLUE: A New Benchmark Dataset for Cross-lingual Pre-training, Understanding and Generation](https://arxiv.org/abs/2004.01401)
 
 ### Dataset Summary
 
-XGLUE is a new benchmark dataset to evaluate the performance of cross-lingual pre-trained models with respect to
 
 The training data of each task is in English while the validation and test data is present in multiple different languages.
 The following table shows which languages are present as validation and test data for each config.
@@ -801,7 +811,7 @@ Therefore, for each config, a cross-lingual pre-trained model should be fine-tun
 ### Supported Tasks and Leaderboards
 
 The XGLUE leaderboard can be found on the [homepage](https://microsoft.github.io/XGLUE/) and
-
 
 ### Languages
 
@@ -828,7 +838,7 @@ For each task, the "validation" and "test" splits are present in these languages
 
 An example of 'test.nl' looks as follows.
 
-```
 {
   "ner": [
     "O",

@@ -885,7 +895,7 @@ An example of 'test.nl' looks as follows.
 
 An example of 'test.fr' looks as follows.
 
-```
 {
   "pos": [
     "PRON",

@@ -956,7 +966,7 @@ An example of 'test.fr' looks as follows.
 
 An example of 'test.hi' looks as follows.
 
-```
 {
   "answers": {
     "answer_start": [

@@ -975,7 +985,7 @@ An example of 'test.hi' looks as follows.
 
 An example of 'test.es' looks as follows.
 
-```
 {
   "news_body": "El bizcocho es seguramente el producto m\u00e1s b\u00e1sico y sencillo de toda la reposter\u00eda : consiste en poco m\u00e1s que mezclar unos cuantos ingredientes, meterlos al horno y esperar a que se hagan. Por obra y gracia del impulsor qu\u00edmico, tambi\u00e9n conocido como \"levadura de tipo Royal\", despu\u00e9s de un rato de calorcito esta combinaci\u00f3n de harina, az\u00facar, huevo, grasa -aceite o mantequilla- y l\u00e1cteo se transforma en uno de los productos m\u00e1s deliciosos que existen para desayunar o merendar . Por muy manazas que seas, es m\u00e1s que probable que tu bizcocho casero supere en calidad a cualquier infamia industrial envasada. Para lograr un bizcocho digno de admiraci\u00f3n s\u00f3lo tienes que respetar unas pocas normas que afectan a los ingredientes, proporciones, mezclado, horneado y desmoldado. Todas las tienes resumidas en unos dos minutos el v\u00eddeo de arriba, en el que adem \u00e1s aprender\u00e1s alg\u00fan truquillo para que tu bizcochaco quede m\u00e1s fino, jugoso, esponjoso y amoroso. M\u00e1s en MSN:",
   "news_category": "foodanddrink",

@@ -987,7 +997,7 @@ An example of 'test.es' looks as follows.
 
 An example of 'validation.th' looks as follows.
 
-```
 {
   "hypothesis": "\u0e40\u0e02\u0e32\u0e42\u0e17\u0e23\u0e2b\u0e32\u0e40\u0e40\u0e21\u0e48\u0e02\u0e2d\u0e07\u0e40\u0e02\u0e32\u0e2d\u0e22\u0e48\u0e32\u0e07\u0e23\u0e27\u0e14\u0e40\u0e23\u0e47\u0e27\u0e2b\u0e25\u0e31\u0e07\u0e08\u0e32\u0e01\u0e17\u0e35\u0e48\u0e23\u0e16\u0e42\u0e23\u0e07\u0e40\u0e23\u0e35\u0e22\u0e19\u0e2a\u0e48\u0e07\u0e40\u0e02\u0e32\u0e40\u0e40\u0e25\u0e49\u0e27",
   "label": 1,

@@ -999,7 +1009,7 @@ An example of 'validation.th' looks as follows.
 
 An example of 'test.es' looks as follows.
 
-```
 {
   "label": 1,
   "sentence1": "La excepci\u00f3n fue entre fines de 2005 y 2009 cuando jug\u00f3 en Suecia con Carlstad United BK, Serbia con FK Borac \u010ca\u010dak y el FC Terek Grozny de Rusia.",

@@ -1011,7 +1021,7 @@ An example of 'test.es' looks as follows.
 
 An example of 'train' looks as follows.
 
-```
 {
   "ad_description": "Your New England Cruise Awaits! Holland America Line Official Site.",
   "ad_title": "New England Cruises",

@@ -1024,7 +1034,7 @@ An example of 'train' looks as follows.
 
 An example of 'test.zh' looks as follows.
 
-```
 {
   "query": "maxpro\u5b98\u7f51",
   "relavance_label": 0,

@@ -1037,7 +1047,7 @@ An example of 'test.zh' looks as follows.
 
 An example of 'validation.en' looks as follows.
 
-```
 {
   "annswer": "Erikson has stated that after the last novel of the Malazan Book of the Fallen was finished, he and Esslemont would write a comprehensive guide tentatively named The Encyclopaedia Malazica.",
   "label": 0,

@@ -1049,7 +1059,7 @@ An example of 'validation.en' looks as follows.
 
 An example of 'test.de' looks as follows.
 
-```
 {
   "answer_passage": "Medien bei WhatsApp automatisch speichern. Tippen Sie oben rechts unter WhatsApp auf die drei Punkte oder auf die Men\u00fc-Taste Ihres Smartphones. Dort wechseln Sie in die \"Einstellungen\" und von hier aus weiter zu den \"Chat-Einstellungen\". Unter dem Punkt \"Medien Auto-Download\" k\u00f6nnen Sie festlegen, wann die WhatsApp-Bilder heruntergeladen werden sollen.",
   "question": "speichenn von whats app bilder unterbinden"

@@ -1060,7 +1070,7 @@ An example of 'test.de' looks as follows.
 
 An example of 'test.en' looks as follows.
 
-```
 {
   "news_body": "Check out this vintage Willys Pickup! As they say, the devil is in the details, and it's not every day you see such attention paid to every last area of a restoration like with this 1961 Willys Pickup . Already the Pickup has a unique look that shares some styling with the Jeep, plus some original touches you don't get anywhere else. It's a classy way to show up to any event, all thanks to Hollywood Motors . A burgundy paint job contrasts with white lower panels and the roof. Plenty of tasteful chrome details grace the exterior, including the bumpers, headlight bezels, crossmembers on the grille, hood latches, taillight bezels, exhaust finisher, tailgate hinges, etc. Steel wheels painted white and chrome hubs are a tasteful addition. Beautiful oak side steps and bed strips add a touch of craftsmanship to this ride. This truck is of real showroom quality, thanks to the astoundingly detailed restoration work performed on it, making this Willys Pickup a fierce contender for best of show. Under that beautiful hood is a 225 Buick V6 engine mated to a three-speed manual transmission, so you enjoy an ideal level of control. Four wheel drive is functional, making it that much more utilitarian and downright cool. The tires are new, so you can enjoy a lot of life out of them, while the wheels and hubs are in great condition. Just in case, a fifth wheel with a tire and a side mount are included. Just as important, this Pickup runs smoothly, so you can go cruising or even hit the open road if you're interested in participating in some classic rallies. You might associate Willys with the famous Jeep CJ, but the automaker did produce a fair amount of trucks. The Pickup is quite the unique example, thanks to distinct styling that really turns heads, making it a favorite at quite a few shows. Source: Hollywood Motors Check These Rides Out Too: Fear No Trails With These Off-Roaders 1965 Pontiac GTO: American Icon For Sale In Canada Low-Mileage 1955 Chevy 3100 Represents Turn In Pickup Market",
   "news_title": "This 1961 Willys Pickup Will Let You Cruise In Style"
@@ -1338,10 +1348,30 @@ The dataset is maintained mainly by Yaobo Liang, Yeyun Gong, Nan Duan, Ming Gong
 
 ### Licensing Information
 
-The
 
 ### Citation Information
 
 ```
 @article{Liang2020XGLUEAN,
   title={XGLUE: A New Benchmark Dataset for Cross-lingual Pre-training, Understanding and Generation},
@@ -1350,6 +1380,54 @@ The licensing status of the dataset hinges on the legal status of [XGLUE](https:
   year={2020},
   volume={abs/2004.01401}
 }
 ```
 
 ### Contributions
 - vi
 - zh
 license:
 - other
 multilinguality:
 - multilingual
 - **Homepage:** [XGLUE homepage](https://microsoft.github.io/XGLUE/)
 - **Paper:** [XGLUE: A New Benchmark Dataset for Cross-lingual Pre-training, Understanding and Generation](https://arxiv.org/abs/2004.01401)
+- **Point of Contact:** [xglue@microsoft.com](mailto:xglue@microsoft.com?subject=XGLUE Feedback)
 
 ### Dataset Summary
 
+XGLUE is a new benchmark dataset to evaluate the performance of cross-lingual pre-trained models with respect to
+cross-lingual natural language understanding and generation.
+
+XGLUE is composed of 11 tasks spanning 19 languages. For each task, the training data is only available in English.
+This means that to succeed at XGLUE, a model must have a strong zero-shot cross-lingual transfer capability to learn
+from the English data of a specific task and transfer what it learned to other languages. Compared to its concurrent
+work XTREME, XGLUE has two characteristics: first, it includes cross-lingual NLU and cross-lingual NLG tasks at the
+same time; second, besides including 5 existing cross-lingual tasks (i.e. NER, POS, MLQA, PAWS-X and XNLI), XGLUE
+also selects 6 new tasks from Bing scenarios, including News Classification (NC), Query-Ad Matching (QADSM),
+Web Page Ranking (WPR), QA Matching (QAM), Question Generation (QG) and News Title Generation (NTG). Such diversity
+of languages, tasks and task origins provides a comprehensive benchmark for quantifying the quality of a pre-trained
+model on cross-lingual natural language understanding and generation.
 
 The training data of each task is in English while the validation and test data is present in multiple different languages.
 The following table shows which languages are present as validation and test data for each config.
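The summary above describes how each task becomes one config with an English-only `train` split and per-language `validation`/`test` splits. A minimal sketch of that layout, assuming the Hugging Face `datasets` library and the config names listed in this card (the NER language list is taken from this card's split metadata):

```python
# Sketch of XGLUE's config/split layout; the helper below is illustrative,
# not part of any library API.

UNDERSTANDING_TASKS = ["ner", "pos", "mlqa", "nc", "xnli", "paws-x", "qadsm", "wpr", "qam"]
GENERATION_TASKS = ["qg", "ntg"]

def split_names(languages):
    """Training data is English-only; validation/test come in one split per language."""
    splits = ["train"]
    for lang in languages:
        splits += [f"validation.{lang}", f"test.{lang}"]
    return splits

# The NER config covers en, de, es, nl:
ner_splits = split_names(["en", "de", "es", "nl"])

# Actually fetching a config (downloads the ~876 MB XGLUE archive on first use):
# from datasets import load_dataset
# ner = load_dataset("xglue", "ner")  # DatasetDict keyed by the split names above
```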
 ### Supported Tasks and Leaderboards
 
 The XGLUE leaderboard can be found on the [homepage](https://microsoft.github.io/XGLUE/) and
+consists of an XGLUE-Understanding Score (the average of the tasks `ner`, `pos`, `mlqa`, `nc`, `xnli`, `paws-x`, `qadsm`, `wpr`, `qam`) and an XGLUE-Generation Score (the average of the tasks `qg`, `ntg`).
 
 ### Languages
 
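The two leaderboard scores described above are plain averages over per-task scores. A minimal sketch (task names from this card; the numbers below are invented for illustration):

```python
# Sketch of the XGLUE leaderboard aggregation: each track score is the plain
# average of its tasks' scores.

UNDERSTANDING_TASKS = ["ner", "pos", "mlqa", "nc", "xnli", "paws-x", "qadsm", "wpr", "qam"]
GENERATION_TASKS = ["qg", "ntg"]

def track_score(per_task_scores, tasks):
    return sum(per_task_scores[t] for t in tasks) / len(tasks)

# Invented per-task scores, just to show the aggregation:
scores = {t: 80.0 for t in UNDERSTANDING_TASKS}
scores.update({t: 10.0 for t in GENERATION_TASKS})

understanding = track_score(scores, UNDERSTANDING_TASKS)  # 80.0
generation = track_score(scores, GENERATION_TASKS)        # 10.0
```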
 
 An example of 'test.nl' looks as follows.
 
+```json
 {
   "ner": [
     "O",

 An example of 'test.fr' looks as follows.
 
+```json
 {
   "pos": [
     "PRON",

 An example of 'test.hi' looks as follows.
 
+```json
 {
   "answers": {
     "answer_start": [

 An example of 'test.es' looks as follows.
 
+```json
 {
   "news_body": "El bizcocho es seguramente el producto m\u00e1s b\u00e1sico y sencillo de toda la reposter\u00eda : consiste en poco m\u00e1s que mezclar unos cuantos ingredientes, meterlos al horno y esperar a que se hagan. Por obra y gracia del impulsor qu\u00edmico, tambi\u00e9n conocido como \"levadura de tipo Royal\", despu\u00e9s de un rato de calorcito esta combinaci\u00f3n de harina, az\u00facar, huevo, grasa -aceite o mantequilla- y l\u00e1cteo se transforma en uno de los productos m\u00e1s deliciosos que existen para desayunar o merendar . Por muy manazas que seas, es m\u00e1s que probable que tu bizcocho casero supere en calidad a cualquier infamia industrial envasada. Para lograr un bizcocho digno de admiraci\u00f3n s\u00f3lo tienes que respetar unas pocas normas que afectan a los ingredientes, proporciones, mezclado, horneado y desmoldado. Todas las tienes resumidas en unos dos minutos el v\u00eddeo de arriba, en el que adem \u00e1s aprender\u00e1s alg\u00fan truquillo para que tu bizcochaco quede m\u00e1s fino, jugoso, esponjoso y amoroso. M\u00e1s en MSN:",
   "news_category": "foodanddrink",

 An example of 'validation.th' looks as follows.
 
+```json
 {
   "hypothesis": "\u0e40\u0e02\u0e32\u0e42\u0e17\u0e23\u0e2b\u0e32\u0e40\u0e40\u0e21\u0e48\u0e02\u0e2d\u0e07\u0e40\u0e02\u0e32\u0e2d\u0e22\u0e48\u0e32\u0e07\u0e23\u0e27\u0e14\u0e40\u0e23\u0e47\u0e27\u0e2b\u0e25\u0e31\u0e07\u0e08\u0e32\u0e01\u0e17\u0e35\u0e48\u0e23\u0e16\u0e42\u0e23\u0e07\u0e40\u0e23\u0e35\u0e22\u0e19\u0e2a\u0e48\u0e07\u0e40\u0e02\u0e32\u0e40\u0e40\u0e25\u0e49\u0e27",
   "label": 1,

 An example of 'test.es' looks as follows.
 
+```json
 {
   "label": 1,
   "sentence1": "La excepci\u00f3n fue entre fines de 2005 y 2009 cuando jug\u00f3 en Suecia con Carlstad United BK, Serbia con FK Borac \u010ca\u010dak y el FC Terek Grozny de Rusia.",

 An example of 'train' looks as follows.
 
+```json
 {
   "ad_description": "Your New England Cruise Awaits! Holland America Line Official Site.",
   "ad_title": "New England Cruises",

 An example of 'test.zh' looks as follows.
 
+```json
 {
   "query": "maxpro\u5b98\u7f51",
   "relavance_label": 0,

 An example of 'validation.en' looks as follows.
 
+```json
 {
   "annswer": "Erikson has stated that after the last novel of the Malazan Book of the Fallen was finished, he and Esslemont would write a comprehensive guide tentatively named The Encyclopaedia Malazica.",
   "label": 0,

 An example of 'test.de' looks as follows.
 
+```json
 {
   "answer_passage": "Medien bei WhatsApp automatisch speichern. Tippen Sie oben rechts unter WhatsApp auf die drei Punkte oder auf die Men\u00fc-Taste Ihres Smartphones. Dort wechseln Sie in die \"Einstellungen\" und von hier aus weiter zu den \"Chat-Einstellungen\". Unter dem Punkt \"Medien Auto-Download\" k\u00f6nnen Sie festlegen, wann die WhatsApp-Bilder heruntergeladen werden sollen.",
   "question": "speichenn von whats app bilder unterbinden"

 An example of 'test.en' looks as follows.
 
+```json
 {
   "news_body": "Check out this vintage Willys Pickup! As they say, the devil is in the details, and it's not every day you see such attention paid to every last area of a restoration like with this 1961 Willys Pickup . Already the Pickup has a unique look that shares some styling with the Jeep, plus some original touches you don't get anywhere else. It's a classy way to show up to any event, all thanks to Hollywood Motors . A burgundy paint job contrasts with white lower panels and the roof. Plenty of tasteful chrome details grace the exterior, including the bumpers, headlight bezels, crossmembers on the grille, hood latches, taillight bezels, exhaust finisher, tailgate hinges, etc. Steel wheels painted white and chrome hubs are a tasteful addition. Beautiful oak side steps and bed strips add a touch of craftsmanship to this ride. This truck is of real showroom quality, thanks to the astoundingly detailed restoration work performed on it, making this Willys Pickup a fierce contender for best of show. Under that beautiful hood is a 225 Buick V6 engine mated to a three-speed manual transmission, so you enjoy an ideal level of control. Four wheel drive is functional, making it that much more utilitarian and downright cool. The tires are new, so you can enjoy a lot of life out of them, while the wheels and hubs are in great condition. Just in case, a fifth wheel with a tire and a side mount are included. Just as important, this Pickup runs smoothly, so you can go cruising or even hit the open road if you're interested in participating in some classic rallies. You might associate Willys with the famous Jeep CJ, but the automaker did produce a fair amount of trucks. The Pickup is quite the unique example, thanks to distinct styling that really turns heads, making it a favorite at quite a few shows. Source: Hollywood Motors Check These Rides Out Too: Fear No Trails With These Off-Roaders 1965 Pontiac GTO: American Icon For Sale In Canada Low-Mileage 1955 Chevy 3100 Represents Turn In Pickup Market",
   "news_title": "This 1961 Willys Pickup Will Let You Cruise In Style"
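The NER example above pairs a `words` list with a parallel `ner` tag list; per the card's metadata, the tags use the standard BIO scheme (`O`, `B-PER`, `I-PER`, `B-ORG`, ...). A minimal sketch of turning such tag sequences into entity spans, using an invented Dutch-style sentence:

```python
# Sketch: grouping parallel word/BIO-tag lists (as in the XGLUE "ner" config)
# into (entity_type, text) spans. The sample sentence is invented.

def bio_to_spans(words, tags):
    """Collect (entity_type, text) spans from parallel word/BIO-tag lists."""
    spans, current, ctype = [], [], None
    for word, tag in zip(words, tags):
        if tag.startswith("B-"):
            if current:                      # close the previous span
                spans.append((ctype, " ".join(current)))
            current, ctype = [word], tag[2:]
        elif tag.startswith("I-") and current:
            current.append(word)             # continue the open span
        else:                                # "O" (or a stray I-) closes any open span
            if current:
                spans.append((ctype, " ".join(current)))
            current, ctype = [], None
    if current:
        spans.append((ctype, " ".join(current)))
    return spans

spans = bio_to_spans(
    ["Jan", "de", "Vries", "woont", "in", "Amsterdam", "."],
    ["B-PER", "I-PER", "I-PER", "O", "O", "B-LOC", "O"],
)
# spans == [("PER", "Jan de Vries"), ("LOC", "Amsterdam")]
```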
 
 ### Licensing Information
 
+The XGLUE datasets are intended for non-commercial research purposes only, to promote advancement in the field of
+artificial intelligence and related areas, and are made available free of charge without extending any license or other
+intellectual property rights. The dataset is provided “as is” without warranty, and usage of the data has risks since we
+may not own the underlying rights in the documents. We are not liable for any damages related to use of the dataset.
+Feedback is voluntarily given and can be used as we see fit. Upon violation of any of these terms, your rights to use
+the dataset will end automatically.
+
+If you have questions about use of the dataset or any research outputs in your products or services, we encourage you
+to undertake your own independent legal review. For other questions, please feel free to contact us.
 
 ### Citation Information
 
+If you use this dataset, please cite it. Additionally, since XGLUE is also built out of 5 existing datasets, please
+ensure you cite all of them.
+
+An example:
+```
+We evaluate our model using the XGLUE benchmark \cite{Liang2020XGLUEAN}, a cross-lingual evaluation benchmark
+consisting of Named Entity Recognition (NER) \cite{Sang2002IntroductionTT} \cite{Sang2003IntroductionTT},
+Part of Speech Tagging (POS) \cite{11234/1-3105}, News Classification (NC), MLQA \cite{Lewis2019MLQAEC},
+XNLI \cite{Conneau2018XNLIEC}, PAWS-X \cite{Yang2019PAWSXAC}, Query-Ad Matching (QADSM), Web Page Ranking (WPR),
+QA Matching (QAM), Question Generation (QG) and News Title Generation (NTG).
+```
+
 ```
 @article{Liang2020XGLUEAN,
   title={XGLUE: A New Benchmark Dataset for Cross-lingual Pre-training, Understanding and Generation},
   year={2020},
   volume={abs/2004.01401}
 }
+
+@misc{11234/1-3105,
+  title={Universal Dependencies 2.5},
author={Zeman, Daniel and Nivre, Joakim and Abrams, Mitchell and Aepli, No{\"e}mi and Agi{\'c}, {\v Z}eljko and Ahrenberg, Lars and Aleksandravi{\v c}i{\=u}t{\.e}, Gabriel{\.e} and Antonsen, Lene and Aplonova, Katya and Aranzabe, Maria Jesus and Arutie, Gashaw and Asahara, Masayuki and Ateyah, Luma and Attia, Mohammed and Atutxa, Aitziber and Augustinus, Liesbeth and Badmaeva, Elena and Ballesteros, Miguel and Banerjee, Esha and Bank, Sebastian and Barbu Mititelu, Verginica and Basmov, Victoria and Batchelor, Colin and Bauer, John and Bellato, Sandra and Bengoetxea, Kepa and Berzak, Yevgeni and Bhat, Irshad Ahmad and Bhat, Riyaz Ahmad and Biagetti, Erica and Bick, Eckhard and Bielinskien{\.e}, Agn{\.e} and Blokland, Rogier and Bobicev, Victoria and Boizou, Lo{\"{\i}}c and Borges V{\"o}lker, Emanuel and B{\"o}rstell, Carl and Bosco, Cristina and Bouma, Gosse and Bowman, Sam and Boyd, Adriane and Brokait{\.e}, Kristina and Burchardt, Aljoscha and Candito, Marie and Caron, Bernard and Caron, Gauthier and Cavalcanti, Tatiana and Cebiro{\u g}lu Eryi{\u g}it, G{\"u}l{\c s}en and Cecchini, Flavio Massimiliano and Celano, Giuseppe G. A. and {\v C}{\'e}pl{\"o}, Slavom{\'{\i}}r and Cetin, Savas and Chalub, Fabricio and Choi, Jinho and Cho, Yongseok and Chun, Jayeol and Cignarella, Alessandra T. 
and Cinkov{\'a}, Silvie and Collomb, Aur{\'e}lie and {\c C}{\"o}ltekin, {\c C}a{\u g}r{\i} and Connor, Miriam and Courtin, Marine and Davidson, Elizabeth and de Marneffe, Marie-Catherine and de Paiva, Valeria and de Souza, Elvis and Diaz de Ilarraza, Arantza and Dickerson, Carly and Dione, Bamba and Dirix, Peter and Dobrovoljc, Kaja and Dozat, Timothy and Droganova, Kira and Dwivedi, Puneet and Eckhoff, Hanne and Eli, Marhaba and Elkahky, Ali and Ephrem, Binyam and Erina, Olga and Erjavec, Toma{\v z} and Etienne, Aline and Evelyn, Wograine and Farkas, Rich{\'a}rd and Fernandez Alcalde, Hector and Foster, Jennifer and Freitas, Cl{\'a}udia and Fujita, Kazunori and Gajdo{\v s}ov{\'a}, Katar{\'{\i}}na and Galbraith, Daniel and Garcia, Marcos and G{\"a}rdenfors, Moa and Garza, Sebastian and Gerdes, Kim and Ginter, Filip and Goenaga, Iakes and Gojenola, Koldo and G{\"o}k{\i}rmak, Memduh and Goldberg, Yoav and G{\'o}mez Guinovart, Xavier and Gonz{\'a}lez Saavedra, Berta and Grici{\=u}t{\.e}, Bernadeta and Grioni, Matias and Gr{\=u}z{\={\i}}tis, Normunds and Guillaume, Bruno and Guillot-Barbance, C{\'e}line and Habash, Nizar and Haji{\v c}, Jan and Haji{\v c} jr., Jan and H{\"a}m{\"a}l{\"a}inen, Mika and H{\`a} M{\~y}, Linh and Han, Na-Rae and Harris, Kim and Haug, Dag and Heinecke, Johannes and Hennig, Felix and Hladk{\'a}, Barbora and Hlav{\'a}{\v c}ov{\'a}, Jaroslava and Hociung, Florinel and Hohle, Petter and Hwang, Jena and Ikeda, Takumi and Ion, Radu and Irimia, Elena and Ishola, {\d O}l{\'a}j{\'{\i}}d{\'e} and Jel{\'{\i}}nek, Tom{\'a}{\v s} and Johannsen, Anders and J{\o}rgensen, Fredrik and Juutinen, Markus and Ka{\c s}{\i}kara, H{\"u}ner and Kaasen, Andre and Kabaeva, Nadezhda and Kahane, Sylvain and Kanayama, Hiroshi and Kanerva, Jenna and Katz, Boris and Kayadelen, Tolga and Kenney, Jessica and Kettnerov{\'a}, V{\'a}clava and Kirchner, Jesse and Klementieva, Elena and K{\"o}hn, Arne and Kopacewicz, Kamil and Kotsyba, Natalia and Kovalevskait{\.e}, Jolanta and 
Krek, Simon and Kwak, Sookyoung and Laippala, Veronika and Lambertino, Lorenzo and Lam, Lucia and Lando, Tatiana and Larasati, Septina Dian and Lavrentiev, Alexei and Lee, John and L{\^e} H{\`{\^o}}ng, Phương and Lenci, Alessandro and Lertpradit, Saran and Leung, Herman and Li, Cheuk Ying and Li, Josie and Li, Keying and Lim, {KyungTae} and Liovina, Maria and Li, Yuan and Ljube{\v s}i{\'c}, Nikola and Loginova, Olga and Lyashevskaya, Olga and Lynn, Teresa and Macketanz, Vivien and Makazhanov, Aibek and Mandl, Michael and Manning, Christopher and Manurung, Ruli and M{\u a}r{\u a}nduc, C{\u a}t{\u a}lina and Mare{\v c}ek, David and Marheinecke, Katrin and Mart{\'{\i}}nez Alonso, H{\'e}ctor and Martins, Andr{\'e} and Ma{\v s}ek, Jan and Matsumoto, Yuji and {McDonald}, Ryan and {McGuinness}, Sarah and Mendon{\c c}a, Gustavo and Miekka, Niko and Misirpashayeva, Margarita and Missil{\"a}, Anna and Mititelu, C{\u a}t{\u a}lin and Mitrofan, Maria and Miyao, Yusuke and Montemagni, Simonetta and More, Amir and Moreno Romero, Laura and Mori, Keiko Sophie and Morioka, Tomohiko and Mori, Shinsuke and Moro, Shigeki and Mortensen, Bjartur and Moskalevskyi, Bohdan and Muischnek, Kadri and Munro, Robert and Murawaki, Yugo and M{\"u}{\"u}risep, Kaili and Nainwani, Pinkey and Navarro Hor{\~n}iacek, Juan Ignacio and Nedoluzhko, Anna and Ne{\v s}pore-B{\=e}rzkalne, Gunta and Nguy{\~{\^e}}n Th{\d i}, Lương and Nguy{\~{\^e}}n Th{\d i} Minh, Huy{\`{\^e}}n and Nikaido, Yoshihiro and Nikolaev, Vitaly and Nitisaroj, Rattima and Nurmi, Hanna and Ojala, Stina and Ojha, Atul Kr. 
and Ol{\'u}{\`o}kun, Ad{\'e}day{\d o}̀ and Omura, Mai and Osenova, Petya and {\"O}stling, Robert and {\O}vrelid, Lilja and Partanen, Niko and Pascual, Elena and Passarotti, Marco and Patejuk, Agnieszka and Paulino-Passos, Guilherme and Peljak-{\L}api{\'n}ska, Angelika and Peng, Siyao and Perez, Cenel-Augusto and Perrier, Guy and Petrova, Daria and Petrov, Slav and Phelan, Jason and Piitulainen, Jussi and Pirinen, Tommi A and Pitler, Emily and Plank, Barbara and Poibeau, Thierry and Ponomareva, Larisa and Popel, Martin and Pretkalni{\c n}a, Lauma and Pr{\'e}vost, Sophie and Prokopidis, Prokopis and Przepi{\'o}rkowski, Adam and Puolakainen, Tiina and Pyysalo, Sampo and Qi, Peng and R{\"a}{\"a}bis, Andriela and Rademaker, Alexandre and Ramasamy, Loganathan and Rama, Taraka and Ramisch, Carlos and Ravishankar, Vinit and Real, Livy and Reddy, Siva and Rehm, Georg and Riabov, Ivan and Rie{\ss}ler, Michael and Rimkut{\.e}, Erika and Rinaldi, Larissa and Rituma, Laura and Rocha, Luisa and Romanenko, Mykhailo and Rosa, Rudolf and Rovati, Davide and Roșca, Valentin and Rudina, Olga and Rueter, Jack and Sadde, Shoval and Sagot, Beno{\^{\i}}t and Saleh, Shadi and Salomoni, Alessio and Samard{\v z}i{\'c}, Tanja and Samson, Stephanie and Sanguinetti, Manuela and S{\"a}rg, Dage and Saul{\={\i}}te, Baiba and Sawanakunanon, Yanin and Schneider, Nathan and Schuster, Sebastian and Seddah, Djam{\'e} and Seeker, Wolfgang and Seraji, Mojgan and Shen, Mo and Shimada, Atsuko and Shirasu, Hiroyuki and Shohibussirri, Muh and Sichinava, Dmitry and Silveira, Aline and Silveira, Natalia and Simi, Maria and Simionescu, Radu and Simk{\'o}, Katalin and {\v S}imkov{\'a}, M{\'a}ria and Simov, Kiril and Smith, Aaron and Soares-Bastos, Isabela and Spadine, Carolyn and Stella, Antonio and Straka, Milan and Strnadov{\'a}, Jana and Suhr, Alane and Sulubacak, Umut and Suzuki, Shingo and Sz{\'a}nt{\'o}, Zsolt and Taji, Dima and Takahashi, Yuta and Tamburini, Fabio and Tanaka, Takaaki and Tellier, Isabelle 
and Thomas, Guillaume and Torga, Liisi and Trosterud, Trond and Trukhina, Anna and Tsarfaty, Reut and Tyers, Francis and Uematsu, Sumire and Ure{\v s}ov{\'a}, Zde{\v n}ka and Uria, Larraitz and Uszkoreit, Hans and Utka, Andrius and Vajjala, Sowmya and van Niekerk, Daniel and van Noord, Gertjan and Varga, Viktor and Villemonte de la Clergerie, Eric and Vincze, Veronika and Wallin, Lars and Walsh, Abigail and Wang, Jing Xian and Washington, Jonathan North and Wendt, Maximilan and Williams, Seyi and Wir{\'e}n, Mats and Wittern, Christian and Woldemariam, Tsegay and Wong, Tak-sum and Wr{\'o}blewska, Alina and Yako, Mary and Yamazaki, Naoki and Yan, Chunxiao and Yasuoka, Koichi and Yavrumyan, Marat M. and Yu, Zhuoran and {\v Z}abokrtsk{\'y}, Zden{\v e}k and Zeldes, Amir and Zhang, Manying and Zhu, Hanzhi},
+  url={http://hdl.handle.net/11234/1-3105},
+  note={{LINDAT}/{CLARIAH}-{CZ} digital library at the Institute of Formal and Applied Linguistics ({{\'U}FAL}), Faculty of Mathematics and Physics, Charles University},
+  copyright={Licence Universal Dependencies v2.5},
+  year={2019}
+}
+
+@article{Sang2003IntroductionTT,
+  title={Introduction to the CoNLL-2003 Shared Task: Language-Independent Named Entity Recognition},
+  author={Erik F. Tjong Kim Sang and Fien De Meulder},
+  journal={ArXiv},
+  year={2003},
+  volume={cs.CL/0306050}
+}
+
+@article{Sang2002IntroductionTT,
+  title={Introduction to the CoNLL-2002 Shared Task: Language-Independent Named Entity Recognition},
+  author={Erik F. Tjong Kim Sang},
+  journal={ArXiv},
+  year={2002},
+  volume={cs.CL/0209010}
+}
+
+@inproceedings{Conneau2018XNLIEC,
+  title={XNLI: Evaluating Cross-lingual Sentence Representations},
+  author={Alexis Conneau and Guillaume Lample and Ruty Rinott and Adina Williams and Samuel R. Bowman and Holger Schwenk and Veselin Stoyanov},
+  booktitle={EMNLP},
+  year={2018}
+}
+
+@article{Lewis2019MLQAEC,
+  title={MLQA: Evaluating Cross-lingual Extractive Question Answering},
+  author={Patrick Lewis and Barlas Oguz and Ruty Rinott and Sebastian Riedel and Holger Schwenk},
+  journal={ArXiv},
+  year={2019},
+  volume={abs/1910.07475}
+}
+
+@article{Yang2019PAWSXAC,
+  title={PAWS-X: A Cross-lingual Adversarial Dataset for Paraphrase Identification},
+  author={Yinfei Yang and Yuan Zhang and Chris Tar and Jason Baldridge},
+  journal={ArXiv},
+  year={2019},
+  volume={abs/1908.11828}
+}
 ```
 
 ### Contributions
dataset_infos.json
DELETED
@@ -1 +0,0 @@
-
{"ner": {"description": "XGLUE is a new benchmark dataset to evaluate the performance of cross-lingual pre-trained\nmodels with respect to cross-lingual natural language understanding and generation.\nThe benchmark is composed of the following 11 tasks:\n- NER\n- POS Tagging (POS)\n- News Classification (NC)\n- MLQA\n- XNLI\n- PAWS-X\n- Query-Ad Matching (QADSM)\n- Web Page Ranking (WPR)\n- QA Matching (QAM)\n- Question Generation (QG)\n- News Title Generation (NTG)\n\nFor more information, please take a look at https://microsoft.github.io/XGLUE/.\n", "citation": "@article{Sang2003IntroductionTT,\n title={Introduction to the CoNLL-2003 Shared Task: Language-Independent Named Entity Recognition},\n author={Erik F. Tjong Kim Sang and Fien De Meulder},\n journal={ArXiv},\n year={2003},\n volume={cs.CL/0306050}\n},\n@article{Sang2002IntroductionTT,\n title={Introduction to the CoNLL-2002 Shared Task: Language-Independent Named Entity Recognition},\n author={Erik F. Tjong Kim Sang},\n journal={ArXiv},\n year={2002},\n volume={cs.CL/0209010}\n}\n@article{Liang2020XGLUEAN,\n title={XGLUE: A New Benchmark Dataset for Cross-lingual Pre-training, Understanding and Generation},\n author={Yaobo Liang and Nan Duan and Yeyun Gong and Ning Wu and Fenfei Guo and Weizhen Qi\n and Ming Gong and Linjun Shou and Daxin Jiang and Guihong Cao and Xiaodong Fan and Ruofei\n Zhang and Rahul Agrawal and Edward Cui and Sining Wei and Taroon Bharti and Ying Qiao\n and Jiun-Hung Chen and Winnie Wu and Shuguang Liu and Fan Yang and Daniel Campos\n and Rangan Majumder and Ming Zhou},\n journal={arXiv},\n year={2020},\n volume={abs/2004.01401}\n}\n", "homepage": "https://www.clips.uantwerpen.be/conll2003/ner/", "license": "", "features": {"words": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "ner": {"feature": {"num_classes": 9, "names": ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC", "B-MISC", "I-MISC"], "names_file": 
null, "id": null, "_type": "ClassLabel"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "x_glue", "config_name": "ner", "version": {"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 3445854, "num_examples": 14042, "dataset_name": "x_glue"}, "validation.en": {"name": "validation.en", "num_bytes": 866569, "num_examples": 3252, "dataset_name": "x_glue"}, "validation.de": {"name": "validation.de", "num_bytes": 917967, "num_examples": 2874, "dataset_name": "x_glue"}, "validation.es": {"name": "validation.es", "num_bytes": 888551, "num_examples": 1923, "dataset_name": "x_glue"}, "validation.nl": {"name": "validation.nl", "num_bytes": 659144, "num_examples": 2895, "dataset_name": "x_glue"}, "test.en": {"name": "test.en", "num_bytes": 784976, "num_examples": 3454, "dataset_name": "x_glue"}, "test.de": {"name": "test.de", "num_bytes": 922741, "num_examples": 3007, "dataset_name": "x_glue"}, "test.es": {"name": "test.es", "num_bytes": 864804, "num_examples": 1523, "dataset_name": "x_glue"}, "test.nl": {"name": "test.nl", "num_bytes": 1196660, "num_examples": 5202, "dataset_name": "x_glue"}}, "download_checksums": {"https://xglue.blob.core.windows.net/xglue/xglue_full_dataset.tar.gz": {"num_bytes": 875905871, "checksum": "e11016c02d8565d00119833a16679bbbe0fec437f5ad53c2d3f9eef6fa03f65b"}}, "download_size": 875905871, "post_processing_size": null, "dataset_size": 10547266, "size_in_bytes": 886453137}, "pos": {"description": "XGLUE is a new benchmark dataset to evaluate the performance of cross-lingual pre-trained\nmodels with respect to cross-lingual natural language understanding and generation.\nThe benchmark is composed of the following 11 tasks:\n- NER\n- POS Tagging (POS)\n- News Classification (NC)\n- MLQA\n- XNLI\n- PAWS-X\n- Query-Ad Matching (QADSM)\n- Web Page Ranking (WPR)\n- QA Matching (QAM)\n- Question Generation 
(QG)\n- News Title Generation (NTG)\n\nFor more information, please take a look at https://microsoft.github.io/XGLUE/.\n", "citation": "@misc{11234/1-3105,\n title={Universal Dependencies 2.5},\n author={Zeman, Daniel and Nivre, Joakim and Abrams, Mitchell and Aepli, et al.},\n url={http://hdl.handle.net/11234/1-3105},\n note={{LINDAT}/{CLARIAH}-{CZ} digital library at the Institute of Formal and Applied Linguistics ({{'U}FAL}), Faculty of Mathematics and Physics, Charles University},\n copyright={Licence Universal Dependencies v2.5},\n year={2019}\n}\n@article{Liang2020XGLUEAN,\n title={XGLUE: A New Benchmark Dataset for Cross-lingual Pre-training, Understanding and Generation},\n author={Yaobo Liang and Nan Duan and Yeyun Gong and Ning Wu and Fenfei Guo and Weizhen Qi\n and Ming Gong and Linjun Shou and Daxin Jiang and Guihong Cao and Xiaodong Fan and Ruofei\n Zhang and Rahul Agrawal and Edward Cui and Sining Wei and Taroon Bharti and Ying Qiao\n and Jiun-Hung Chen and Winnie Wu and Shuguang Liu and Fan Yang and Daniel Campos\n and Rangan Majumder and Ming Zhou},\n journal={arXiv},\n year={2020},\n volume={abs/2004.01401}\n}\n", "homepage": "https://universaldependencies.org/", "license": "", "features": {"words": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "pos": {"feature": {"num_classes": 17, "names": ["ADJ", "ADP", "ADV", "AUX", "CCONJ", "DET", "INTJ", "NOUN", "NUM", "PART", "PRON", "PROPN", "PUNCT", "SCONJ", "SYM", "VERB", "X"], "names_file": null, "id": null, "_type": "ClassLabel"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "x_glue", "config_name": "pos", "version": {"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 7279459, "num_examples": 25376, "dataset_name": "x_glue"}, "validation.en": {"name": "validation.en", "num_bytes": 421410, 
"num_examples": 2001, "dataset_name": "x_glue"}, "validation.de": {"name": "validation.de", "num_bytes": 219328, "num_examples": 798, "dataset_name": "x_glue"}, "validation.es": {"name": "validation.es", "num_bytes": 620491, "num_examples": 1399, "dataset_name": "x_glue"}, "validation.nl": {"name": "validation.nl", "num_bytes": 198003, "num_examples": 717, "dataset_name": "x_glue"}, "validation.bg": {"name": "validation.bg", "num_bytes": 346802, "num_examples": 1114, "dataset_name": "x_glue"}, "validation.el": {"name": "validation.el", "num_bytes": 229447, "num_examples": 402, "dataset_name": "x_glue"}, "validation.fr": {"name": "validation.fr", "num_bytes": 600964, "num_examples": 1475, "dataset_name": "x_glue"}, "validation.pl": {"name": "validation.pl", "num_bytes": 620694, "num_examples": 2214, "dataset_name": "x_glue"}, "validation.tr": {"name": "validation.tr", "num_bytes": 186196, "num_examples": 987, "dataset_name": "x_glue"}, "validation.vi": {"name": "validation.vi", "num_bytes": 203669, "num_examples": 799, "dataset_name": "x_glue"}, "validation.zh": {"name": "validation.zh", "num_bytes": 212579, "num_examples": 499, "dataset_name": "x_glue"}, "validation.ur": {"name": "validation.ur", "num_bytes": 284016, "num_examples": 551, "dataset_name": "x_glue"}, "validation.hi": {"name": "validation.hi", "num_bytes": 838700, "num_examples": 1658, "dataset_name": "x_glue"}, "validation.it": {"name": "validation.it", "num_bytes": 198608, "num_examples": 563, "dataset_name": "x_glue"}, "validation.ar": {"name": "validation.ar", "num_bytes": 592943, "num_examples": 908, "dataset_name": "x_glue"}, "validation.ru": {"name": "validation.ru", "num_bytes": 261563, "num_examples": 578, "dataset_name": "x_glue"}, "validation.th": {"name": "validation.th", "num_bytes": 272834, "num_examples": 497, "dataset_name": "x_glue"}, "test.en": {"name": "test.en", "num_bytes": 420613, "num_examples": 2076, "dataset_name": "x_glue"}, "test.de": {"name": "test.de", "num_bytes": 291759, 
"num_examples": 976, "dataset_name": "x_glue"}, "test.es": {"name": "test.es", "num_bytes": 200003, "num_examples": 425, "dataset_name": "x_glue"}, "test.nl": {"name": "test.nl", "num_bytes": 193337, "num_examples": 595, "dataset_name": "x_glue"}, "test.bg": {"name": "test.bg", "num_bytes": 339460, "num_examples": 1115, "dataset_name": "x_glue"}, "test.el": {"name": "test.el", "num_bytes": 235137, "num_examples": 455, "dataset_name": "x_glue"}, "test.fr": {"name": "test.fr", "num_bytes": 166865, "num_examples": 415, "dataset_name": "x_glue"}, "test.pl": {"name": "test.pl", "num_bytes": 600534, "num_examples": 2214, "dataset_name": "x_glue"}, "test.tr": {"name": "test.tr", "num_bytes": 186519, "num_examples": 982, "dataset_name": "x_glue"}, "test.vi": {"name": "test.vi", "num_bytes": 211408, "num_examples": 799, "dataset_name": "x_glue"}, "test.zh": {"name": "test.zh", "num_bytes": 202055, "num_examples": 499, "dataset_name": "x_glue"}, "test.ur": {"name": "test.ur", "num_bytes": 288189, "num_examples": 534, "dataset_name": "x_glue"}, "test.hi": {"name": "test.hi", "num_bytes": 839659, "num_examples": 1683, "dataset_name": "x_glue"}, "test.it": {"name": "test.it", "num_bytes": 173861, "num_examples": 481, "dataset_name": "x_glue"}, "test.ar": {"name": "test.ar", "num_bytes": 561709, "num_examples": 679, "dataset_name": "x_glue"}, "test.ru": {"name": "test.ru", "num_bytes": 255393, "num_examples": 600, "dataset_name": "x_glue"}, "test.th": {"name": "test.th", "num_bytes": 272834, "num_examples": 497, "dataset_name": "x_glue"}}, "download_checksums": {"https://xglue.blob.core.windows.net/xglue/xglue_full_dataset.tar.gz": {"num_bytes": 875905871, "checksum": "e11016c02d8565d00119833a16679bbbe0fec437f5ad53c2d3f9eef6fa03f65b"}}, "download_size": 875905871, "post_processing_size": null, "dataset_size": 19027041, "size_in_bytes": 894932912}, "mlqa": {"description": "XGLUE is a new benchmark dataset to evaluate the performance of cross-lingual pre-trained\nmodels with 
respect to cross-lingual natural language understanding and generation.\nThe benchmark is composed of the following 11 tasks:\n- NER\n- POS Tagging (POS)\n- News Classification (NC)\n- MLQA\n- XNLI\n- PAWS-X\n- Query-Ad Matching (QADSM)\n- Web Page Ranking (WPR)\n- QA Matching (QAM)\n- Question Generation (QG)\n- News Title Generation (NTG)\n\nFor more information, please take a look at https://microsoft.github.io/XGLUE/.\n", "citation": "@article{Lewis2019MLQAEC,\n title={MLQA: Evaluating Cross-lingual Extractive Question Answering},\n author={Patrick Lewis and Barlas Oguz and Ruty Rinott and Sebastian Riedel and Holger Schwenk},\n journal={ArXiv},\n year={2019},\n volume={abs/1910.07475}\n}\n@article{Liang2020XGLUEAN,\n title={XGLUE: A New Benchmark Dataset for Cross-lingual Pre-training, Understanding and Generation},\n author={Yaobo Liang and Nan Duan and Yeyun Gong and Ning Wu and Fenfei Guo and Weizhen Qi\n and Ming Gong and Linjun Shou and Daxin Jiang and Guihong Cao and Xiaodong Fan and Ruofei\n Zhang and Rahul Agrawal and Edward Cui and Sining Wei and Taroon Bharti and Ying Qiao\n and Jiun-Hung Chen and Winnie Wu and Shuguang Liu and Fan Yang and Daniel Campos\n and Rangan Majumder and Ming Zhou},\n journal={arXiv},\n year={2020},\n volume={abs/2004.01401}\n}\n", "homepage": "https://github.com/facebookresearch/MLQA", "license": "", "features": {"context": {"dtype": "string", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, "_type": "Value"}, "answers": {"feature": {"answer_start": {"dtype": "int32", "id": null, "_type": "Value"}, "text": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "x_glue", "config_name": "mlqa", "version": {"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 75307933, "num_examples": 87599, "dataset_name": 
"x_glue"}, "validation.en": {"name": "validation.en", "num_bytes": 1255587, "num_examples": 1148, "dataset_name": "x_glue"}, "validation.de": {"name": "validation.de", "num_bytes": 454258, "num_examples": 512, "dataset_name": "x_glue"}, "validation.ar": {"name": "validation.ar", "num_bytes": 785493, "num_examples": 517, "dataset_name": "x_glue"}, "validation.es": {"name": "validation.es", "num_bytes": 388625, "num_examples": 500, "dataset_name": "x_glue"}, "validation.hi": {"name": "validation.hi", "num_bytes": 1092167, "num_examples": 507, "dataset_name": "x_glue"}, "validation.vi": {"name": "validation.vi", "num_bytes": 692227, "num_examples": 511, "dataset_name": "x_glue"}, "validation.zh": {"name": "validation.zh", "num_bytes": 411213, "num_examples": 504, "dataset_name": "x_glue"}, "test.en": {"name": "test.en", "num_bytes": 13264513, "num_examples": 11590, "dataset_name": "x_glue"}, "test.de": {"name": "test.de", "num_bytes": 4070659, "num_examples": 4517, "dataset_name": "x_glue"}, "test.ar": {"name": "test.ar", "num_bytes": 7976090, "num_examples": 5335, "dataset_name": "x_glue"}, "test.es": {"name": "test.es", "num_bytes": 4044224, "num_examples": 5253, "dataset_name": "x_glue"}, "test.hi": {"name": "test.hi", "num_bytes": 11385051, "num_examples": 4918, "dataset_name": "x_glue"}, "test.vi": {"name": "test.vi", "num_bytes": 7559078, "num_examples": 5495, "dataset_name": "x_glue"}, "test.zh": {"name": "test.zh", "num_bytes": 4092921, "num_examples": 5137, "dataset_name": "x_glue"}}, "download_checksums": {"https://xglue.blob.core.windows.net/xglue/xglue_full_dataset.tar.gz": {"num_bytes": 875905871, "checksum": "e11016c02d8565d00119833a16679bbbe0fec437f5ad53c2d3f9eef6fa03f65b"}}, "download_size": 875905871, "post_processing_size": null, "dataset_size": 132780039, "size_in_bytes": 1008685910}, "nc": {"description": "XGLUE is a new benchmark dataset to evaluate the performance of cross-lingual pre-trained\nmodels with respect to cross-lingual natural language 
understanding and generation.\nThe benchmark is composed of the following 11 tasks:\n- NER\n- POS Tagging (POS)\n- News Classification (NC)\n- MLQA\n- XNLI\n- PAWS-X\n- Query-Ad Matching (QADSM)\n- Web Page Ranking (WPR)\n- QA Matching (QAM)\n- Question Generation (QG)\n- News Title Generation (NTG)\n\nFor more information, please take a look at https://microsoft.github.io/XGLUE/.\n", "citation": "\n@article{Liang2020XGLUEAN,\n title={XGLUE: A New Benchmark Dataset for Cross-lingual Pre-training, Understanding and Generation},\n author={Yaobo Liang and Nan Duan and Yeyun Gong and Ning Wu and Fenfei Guo and Weizhen Qi\n and Ming Gong and Linjun Shou and Daxin Jiang and Guihong Cao and Xiaodong Fan and Ruofei\n Zhang and Rahul Agrawal and Edward Cui and Sining Wei and Taroon Bharti and Ying Qiao\n and Jiun-Hung Chen and Winnie Wu and Shuguang Liu and Fan Yang and Daniel Campos\n and Rangan Majumder and Ming Zhou},\n journal={arXiv},\n year={2020},\n volume={abs/2004.01401}\n}\n", "homepage": "", "license": "", "features": {"news_title": {"dtype": "string", "id": null, "_type": "Value"}, "news_body": {"dtype": "string", "id": null, "_type": "Value"}, "news_category": {"num_classes": 10, "names": ["foodanddrink", "sports", "travel", "finance", "lifestyle", "news", "entertainment", "health", "video", "autos"], "names_file": null, "id": null, "_type": "ClassLabel"}}, "post_processed": null, "supervised_keys": null, "builder_name": "x_glue", "config_name": "nc", "version": {"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 280615806, "num_examples": 100000, "dataset_name": "x_glue"}, "validation.en": {"name": "validation.en", "num_bytes": 33389140, "num_examples": 10000, "dataset_name": "x_glue"}, "validation.de": {"name": "validation.de", "num_bytes": 26757254, "num_examples": 10000, "dataset_name": "x_glue"}, "validation.es": {"name": "validation.es", "num_bytes": 31781308, "num_examples": 
10000, "dataset_name": "x_glue"}, "validation.fr": {"name": "validation.fr", "num_bytes": 27154099, "num_examples": 10000, "dataset_name": "x_glue"}, "validation.ru": {"name": "validation.ru", "num_bytes": 46053007, "num_examples": 10000, "dataset_name": "x_glue"}, "test.en": {"name": "test.en", "num_bytes": 34437987, "num_examples": 10000, "dataset_name": "x_glue"}, "test.de": {"name": "test.de", "num_bytes": 26632007, "num_examples": 10000, "dataset_name": "x_glue"}, "test.es": {"name": "test.es", "num_bytes": 31350078, "num_examples": 10000, "dataset_name": "x_glue"}, "test.fr": {"name": "test.fr", "num_bytes": 27589545, "num_examples": 10000, "dataset_name": "x_glue"}, "test.ru": {"name": "test.ru", "num_bytes": 46183830, "num_examples": 10000, "dataset_name": "x_glue"}}, "download_checksums": {"https://xglue.blob.core.windows.net/xglue/xglue_full_dataset.tar.gz": {"num_bytes": 875905871, "checksum": "e11016c02d8565d00119833a16679bbbe0fec437f5ad53c2d3f9eef6fa03f65b"}}, "download_size": 875905871, "post_processing_size": null, "dataset_size": 611944061, "size_in_bytes": 1487849932}, "xnli": {"description": "XGLUE is a new benchmark dataset to evaluate the performance of cross-lingual pre-trained\nmodels with respect to cross-lingual natural language understanding and generation.\nThe benchmark is composed of the following 11 tasks:\n- NER\n- POS Tagging (POS)\n- News Classification (NC)\n- MLQA\n- XNLI\n- PAWS-X\n- Query-Ad Matching (QADSM)\n- Web Page Ranking (WPR)\n- QA Matching (QAM)\n- Question Generation (QG)\n- News Title Generation (NTG)\n\nFor more information, please take a look at https://microsoft.github.io/XGLUE/.\n", "citation": "@inproceedings{Conneau2018XNLIEC,\n title={XNLI: Evaluating Cross-lingual Sentence Representations},\n author={Alexis Conneau and Guillaume Lample and Ruty Rinott and Adina Williams and Samuel R. 
Bowman and Holger Schwenk and Veselin Stoyanov},\n booktitle={EMNLP},\n year={2018}\n}\n@article{Liang2020XGLUEAN,\n title={XGLUE: A New Benchmark Dataset for Cross-lingual Pre-training, Understanding and Generation},\n author={Yaobo Liang and Nan Duan and Yeyun Gong and Ning Wu and Fenfei Guo and Weizhen Qi\n and Ming Gong and Linjun Shou and Daxin Jiang and Guihong Cao and Xiaodong Fan and Ruofei\n Zhang and Rahul Agrawal and Edward Cui and Sining Wei and Taroon Bharti and Ying Qiao\n and Jiun-Hung Chen and Winnie Wu and Shuguang Liu and Fan Yang and Daniel Campos\n and Rangan Majumder and Ming Zhou},\n journal={arXiv},\n year={2020},\n volume={abs/2004.01401}\n}\n", "homepage": "https://github.com/facebookresearch/XNLI", "license": "", "features": {"premise": {"dtype": "string", "id": null, "_type": "Value"}, "hypothesis": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"num_classes": 3, "names": ["entailment", "neutral", "contradiction"], "names_file": null, "id": null, "_type": "ClassLabel"}}, "post_processed": null, "supervised_keys": null, "builder_name": "x_glue", "config_name": "xnli", "version": {"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 74444346, "num_examples": 392702, "dataset_name": "x_glue"}, "validation.en": {"name": "validation.en", "num_bytes": 433471, "num_examples": 2490, "dataset_name": "x_glue"}, "validation.ar": {"name": "validation.ar", "num_bytes": 633009, "num_examples": 2490, "dataset_name": "x_glue"}, "validation.bg": {"name": "validation.bg", "num_bytes": 774069, "num_examples": 2490, "dataset_name": "x_glue"}, "validation.de": {"name": "validation.de", "num_bytes": 494612, "num_examples": 2490, "dataset_name": "x_glue"}, "validation.el": {"name": "validation.el", "num_bytes": 841234, "num_examples": 2490, "dataset_name": "x_glue"}, "validation.es": {"name": "validation.es", "num_bytes": 478430, "num_examples": 2490, "dataset_name": 
"x_glue"}, "validation.fr": {"name": "validation.fr", "num_bytes": 510112, "num_examples": 2490, "dataset_name": "x_glue"}, "validation.hi": {"name": "validation.hi", "num_bytes": 1023923, "num_examples": 2490, "dataset_name": "x_glue"}, "validation.ru": {"name": "validation.ru", "num_bytes": 786450, "num_examples": 2490, "dataset_name": "x_glue"}, "validation.sw": {"name": "validation.sw", "num_bytes": 429858, "num_examples": 2490, "dataset_name": "x_glue"}, "validation.th": {"name": "validation.th", "num_bytes": 1061168, "num_examples": 2490, "dataset_name": "x_glue"}, "validation.tr": {"name": "validation.tr", "num_bytes": 459316, "num_examples": 2490, "dataset_name": "x_glue"}, "validation.ur": {"name": "validation.ur", "num_bytes": 699960, "num_examples": 2490, "dataset_name": "x_glue"}, "validation.vi": {"name": "validation.vi", "num_bytes": 590688, "num_examples": 2490, "dataset_name": "x_glue"}, "validation.zh": {"name": "validation.zh", "num_bytes": 384859, "num_examples": 2490, "dataset_name": "x_glue"}, "test.en": {"name": "test.en", "num_bytes": 875142, "num_examples": 5010, "dataset_name": "x_glue"}, "test.ar": {"name": "test.ar", "num_bytes": 1294561, "num_examples": 5010, "dataset_name": "x_glue"}, "test.bg": {"name": "test.bg", "num_bytes": 1573042, "num_examples": 5010, "dataset_name": "x_glue"}, "test.de": {"name": "test.de", "num_bytes": 996487, "num_examples": 5010, "dataset_name": "x_glue"}, "test.el": {"name": "test.el", "num_bytes": 1704793, "num_examples": 5010, "dataset_name": "x_glue"}, "test.es": {"name": "test.es", "num_bytes": 969821, "num_examples": 5010, "dataset_name": "x_glue"}, "test.fr": {"name": "test.fr", "num_bytes": 1029247, "num_examples": 5010, "dataset_name": "x_glue"}, "test.hi": {"name": "test.hi", "num_bytes": 2073081, "num_examples": 5010, "dataset_name": "x_glue"}, "test.ru": {"name": "test.ru", "num_bytes": 1603474, "num_examples": 5010, "dataset_name": "x_glue"}, "test.sw": {"name": "test.sw", "num_bytes": 871659, 
"num_examples": 5010, "dataset_name": "x_glue"}, "test.th": {"name": "test.th", "num_bytes": 2147023, "num_examples": 5010, "dataset_name": "x_glue"}, "test.tr": {"name": "test.tr", "num_bytes": 934942, "num_examples": 5010, "dataset_name": "x_glue"}, "test.ur": {"name": "test.ur", "num_bytes": 1416246, "num_examples": 5010, "dataset_name": "x_glue"}, "test.vi": {"name": "test.vi", "num_bytes": 1190225, "num_examples": 5010, "dataset_name": "x_glue"}, "test.zh": {"name": "test.zh", "num_bytes": 777937, "num_examples": 5010, "dataset_name": "x_glue"}}, "download_checksums": {"https://xglue.blob.core.windows.net/xglue/xglue_full_dataset.tar.gz": {"num_bytes": 875905871, "checksum": "e11016c02d8565d00119833a16679bbbe0fec437f5ad53c2d3f9eef6fa03f65b"}}, "download_size": 875905871, "post_processing_size": null, "dataset_size": 103503185, "size_in_bytes": 979409056}, "paws-x": {"description": "XGLUE is a new benchmark dataset to evaluate the performance of cross-lingual pre-trained\nmodels with respect to cross-lingual natural language understanding and generation.\nThe benchmark is composed of the following 11 tasks:\n- NER\n- POS Tagging (POS)\n- News Classification (NC)\n- MLQA\n- XNLI\n- PAWS-X\n- Query-Ad Matching (QADSM)\n- Web Page Ranking (WPR)\n- QA Matching (QAM)\n- Question Generation (QG)\n- News Title Generation (NTG)\n\nFor more information, please take a look at https://microsoft.github.io/XGLUE/.\n", "citation": "@article{Yang2019PAWSXAC,\n title={PAWS-X: A Cross-lingual Adversarial Dataset for Paraphrase Identification},\n author={Yinfei Yang and Yuan Zhang and Chris Tar and Jason Baldridge},\n journal={ArXiv},\n year={2019},\n volume={abs/1908.11828}\n}\n@article{Liang2020XGLUEAN,\n title={XGLUE: A New Benchmark Dataset for Cross-lingual Pre-training, Understanding and Generation},\n author={Yaobo Liang and Nan Duan and Yeyun Gong and Ning Wu and Fenfei Guo and Weizhen Qi\n and Ming Gong and Linjun Shou and Daxin Jiang and Guihong Cao and Xiaodong Fan 
and Ruofei\n Zhang and Rahul Agrawal and Edward Cui and Sining Wei and Taroon Bharti and Ying Qiao\n and Jiun-Hung Chen and Winnie Wu and Shuguang Liu and Fan Yang and Daniel Campos\n and Rangan Majumder and Ming Zhou},\n journal={arXiv},\n year={2020},\n volume={abs/2004.01401}\n}\n", "homepage": "https://github.com/google-research-datasets/paws/tree/master/pawsx", "license": "", "features": {"sentence1": {"dtype": "string", "id": null, "_type": "Value"}, "sentence2": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"num_classes": 2, "names": ["different", "same"], "names_file": null, "id": null, "_type": "ClassLabel"}}, "post_processed": null, "supervised_keys": null, "builder_name": "x_glue", "config_name": "paws-x", "version": {"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 12018349, "num_examples": 49401, "dataset_name": "x_glue"}, "validation.en": {"name": "validation.en", "num_bytes": 484287, "num_examples": 2000, "dataset_name": "x_glue"}, "validation.de": {"name": "validation.de", "num_bytes": 506009, "num_examples": 2000, "dataset_name": "x_glue"}, "validation.es": {"name": "validation.es", "num_bytes": 505888, "num_examples": 2000, "dataset_name": "x_glue"}, "validation.fr": {"name": "validation.fr", "num_bytes": 525031, "num_examples": 2000, "dataset_name": "x_glue"}, "test.en": {"name": "test.en", "num_bytes": 486734, "num_examples": 2000, "dataset_name": "x_glue"}, "test.de": {"name": "test.de", "num_bytes": 516214, "num_examples": 2000, "dataset_name": "x_glue"}, "test.es": {"name": "test.es", "num_bytes": 511111, "num_examples": 2000, "dataset_name": "x_glue"}, "test.fr": {"name": "test.fr", "num_bytes": 527101, "num_examples": 2000, "dataset_name": "x_glue"}}, "download_checksums": {"https://xglue.blob.core.windows.net/xglue/xglue_full_dataset.tar.gz": {"num_bytes": 875905871, "checksum": 
"e11016c02d8565d00119833a16679bbbe0fec437f5ad53c2d3f9eef6fa03f65b"}}, "download_size": 875905871, "post_processing_size": null, "dataset_size": 16080724, "size_in_bytes": 891986595}, "qadsm": {"description": "XGLUE is a new benchmark dataset to evaluate the performance of cross-lingual pre-trained\nmodels with respect to cross-lingual natural language understanding and generation.\nThe benchmark is composed of the following 11 tasks:\n- NER\n- POS Tagging (POS)\n- News Classification (NC)\n- MLQA\n- XNLI\n- PAWS-X\n- Query-Ad Matching (QADSM)\n- Web Page Ranking (WPR)\n- QA Matching (QAM)\n- Question Generation (QG)\n- News Title Generation (NTG)\n\nFor more information, please take a look at https://microsoft.github.io/XGLUE/.\n", "citation": "\n@article{Liang2020XGLUEAN,\n title={XGLUE: A New Benchmark Dataset for Cross-lingual Pre-training, Understanding and Generation},\n author={Yaobo Liang and Nan Duan and Yeyun Gong and Ning Wu and Fenfei Guo and Weizhen Qi\n and Ming Gong and Linjun Shou and Daxin Jiang and Guihong Cao and Xiaodong Fan and Ruofei\n Zhang and Rahul Agrawal and Edward Cui and Sining Wei and Taroon Bharti and Ying Qiao\n and Jiun-Hung Chen and Winnie Wu and Shuguang Liu and Fan Yang and Daniel Campos\n and Rangan Majumder and Ming Zhou},\n journal={arXiv},\n year={2020},\n volume={abs/2004.01401}\n}\n", "homepage": "", "license": "", "features": {"query": {"dtype": "string", "id": null, "_type": "Value"}, "ad_title": {"dtype": "string", "id": null, "_type": "Value"}, "ad_description": {"dtype": "string", "id": null, "_type": "Value"}, "relevance_label": {"num_classes": 2, "names": ["Bad", "Good"], "names_file": null, "id": null, "_type": "ClassLabel"}}, "post_processed": null, "supervised_keys": null, "builder_name": "x_glue", "config_name": "qadsm", "version": {"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 12528141, "num_examples": 100000, "dataset_name": 
"x_glue"}, "validation.en": {"name": "validation.en", "num_bytes": 1248839, "num_examples": 10000, "dataset_name": "x_glue"}, "validation.de": {"name": "validation.de", "num_bytes": 1566011, "num_examples": 10000, "dataset_name": "x_glue"}, "validation.fr": {"name": "validation.fr", "num_bytes": 1651804, "num_examples": 10000, "dataset_name": "x_glue"}, "test.en": {"name": "test.en", "num_bytes": 1236997, "num_examples": 10000, "dataset_name": "x_glue"}, "test.de": {"name": "test.de", "num_bytes": 1563985, "num_examples": 10000, "dataset_name": "x_glue"}, "test.fr": {"name": "test.fr", "num_bytes": 1594118, "num_examples": 10000, "dataset_name": "x_glue"}}, "download_checksums": {"https://xglue.blob.core.windows.net/xglue/xglue_full_dataset.tar.gz": {"num_bytes": 875905871, "checksum": "e11016c02d8565d00119833a16679bbbe0fec437f5ad53c2d3f9eef6fa03f65b"}}, "download_size": 875905871, "post_processing_size": null, "dataset_size": 21389895, "size_in_bytes": 897295766}, "wpr": {"description": "XGLUE is a new benchmark dataset to evaluate the performance of cross-lingual pre-trained\nmodels with respect to cross-lingual natural language understanding and generation.\nThe benchmark is composed of the following 11 tasks:\n- NER\n- POS Tagging (POS)\n- News Classification (NC)\n- MLQA\n- XNLI\n- PAWS-X\n- Query-Ad Matching (QADSM)\n- Web Page Ranking (WPR)\n- QA Matching (QAM)\n- Question Generation (QG)\n- News Title Generation (NTG)\n\nFor more information, please take a look at https://microsoft.github.io/XGLUE/.\n", "citation": "\n@article{Liang2020XGLUEAN,\n title={XGLUE: A New Benchmark Dataset for Cross-lingual Pre-training, Understanding and Generation},\n author={Yaobo Liang and Nan Duan and Yeyun Gong and Ning Wu and Fenfei Guo and Weizhen Qi\n and Ming Gong and Linjun Shou and Daxin Jiang and Guihong Cao and Xiaodong Fan and Ruofei\n Zhang and Rahul Agrawal and Edward Cui and Sining Wei and Taroon Bharti and Ying Qiao\n and Jiun-Hung Chen and Winnie Wu and 
Shuguang Liu and Fan Yang and Daniel Campos\n and Rangan Majumder and Ming Zhou},\n journal={arXiv},\n year={2020},\n volume={abs/2004.01401}\n}\n", "homepage": "", "license": "", "features": {"query": {"dtype": "string", "id": null, "_type": "Value"}, "web_page_title": {"dtype": "string", "id": null, "_type": "Value"}, "web_page_snippet": {"dtype": "string", "id": null, "_type": "Value"}, "relavance_label": {"num_classes": 5, "names": ["Bad", "Fair", "Good", "Excellent", "Perfect"], "names_file": null, "id": null, "_type": "ClassLabel"}}, "post_processed": null, "supervised_keys": null, "builder_name": "x_glue", "config_name": "wpr", "version": {"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 33885931, "num_examples": 99997, "dataset_name": "x_glue"}, "validation.en": {"name": "validation.en", "num_bytes": 3417760, "num_examples": 10008, "dataset_name": "x_glue"}, "validation.de": {"name": "validation.de", "num_bytes": 2929029, "num_examples": 10004, "dataset_name": "x_glue"}, "validation.es": {"name": "validation.es", "num_bytes": 2451026, "num_examples": 10004, "dataset_name": "x_glue"}, "validation.fr": {"name": "validation.fr", "num_bytes": 3055899, "num_examples": 10005, "dataset_name": "x_glue"}, "validation.it": {"name": "validation.it", "num_bytes": 2416388, "num_examples": 10003, "dataset_name": "x_glue"}, "validation.pt": {"name": "validation.pt", "num_bytes": 2449797, "num_examples": 10001, "dataset_name": "x_glue"}, "validation.zh": {"name": "validation.zh", "num_bytes": 3118577, "num_examples": 10002, "dataset_name": "x_glue"}, "test.en": {"name": "test.en", "num_bytes": 3402487, "num_examples": 10004, "dataset_name": "x_glue"}, "test.de": {"name": "test.de", "num_bytes": 2923577, "num_examples": 9997, "dataset_name": "x_glue"}, "test.es": {"name": "test.es", "num_bytes": 2422895, "num_examples": 10006, "dataset_name": "x_glue"}, "test.fr": {"name": "test.fr", 
"num_bytes": 3059392, "num_examples": 10020, "dataset_name": "x_glue"}, "test.it": {"name": "test.it", "num_bytes": 2403736, "num_examples": 10001, "dataset_name": "x_glue"}, "test.pt": {"name": "test.pt", "num_bytes": 2462350, "num_examples": 10015, "dataset_name": "x_glue"}, "test.zh": {"name": "test.zh", "num_bytes": 3141598, "num_examples": 9999, "dataset_name": "x_glue"}}, "download_checksums": {"https://xglue.blob.core.windows.net/xglue/xglue_full_dataset.tar.gz": {"num_bytes": 875905871, "checksum": "e11016c02d8565d00119833a16679bbbe0fec437f5ad53c2d3f9eef6fa03f65b"}}, "download_size": 875905871, "post_processing_size": null, "dataset_size": 73540442, "size_in_bytes": 949446313}, "qam": {"description": "XGLUE is a new benchmark dataset to evaluate the performance of cross-lingual pre-trained\nmodels with respect to cross-lingual natural language understanding and generation.\nThe benchmark is composed of the following 11 tasks:\n- NER\n- POS Tagging (POS)\n- News Classification (NC)\n- MLQA\n- XNLI\n- PAWS-X\n- Query-Ad Matching (QADSM)\n- Web Page Ranking (WPR)\n- QA Matching (QAM)\n- Question Generation (QG)\n- News Title Generation (NTG)\n\nFor more information, please take a look at https://microsoft.github.io/XGLUE/.\n", "citation": "\n@article{Liang2020XGLUEAN,\n title={XGLUE: A New Benchmark Dataset for Cross-lingual Pre-training, Understanding and Generation},\n author={Yaobo Liang and Nan Duan and Yeyun Gong and Ning Wu and Fenfei Guo and Weizhen Qi\n and Ming Gong and Linjun Shou and Daxin Jiang and Guihong Cao and Xiaodong Fan and Ruofei\n Zhang and Rahul Agrawal and Edward Cui and Sining Wei and Taroon Bharti and Ying Qiao\n and Jiun-Hung Chen and Winnie Wu and Shuguang Liu and Fan Yang and Daniel Campos\n and Rangan Majumder and Ming Zhou},\n journal={arXiv},\n year={2020},\n volume={abs/2004.01401}\n}\n", "homepage": "", "license": "", "features": {"question": {"dtype": "string", "id": null, "_type": "Value"}, "answer": {"dtype": "string", "id": 
null, "_type": "Value"}, "label": {"num_classes": 2, "names": ["False", "True"], "names_file": null, "id": null, "_type": "ClassLabel"}}, "post_processed": null, "supervised_keys": null, "builder_name": "x_glue", "config_name": "qam", "version": {"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 28357964, "num_examples": 100000, "dataset_name": "x_glue"}, "validation.en": {"name": "validation.en", "num_bytes": 3085501, "num_examples": 10000, "dataset_name": "x_glue"}, "validation.de": {"name": "validation.de", "num_bytes": 3304031, "num_examples": 10000, "dataset_name": "x_glue"}, "validation.fr": {"name": "validation.fr", "num_bytes": 3142833, "num_examples": 10000, "dataset_name": "x_glue"}, "test.en": {"name": "test.en", "num_bytes": 3082297, "num_examples": 10000, "dataset_name": "x_glue"}, "test.de": {"name": "test.de", "num_bytes": 3309496, "num_examples": 10000, "dataset_name": "x_glue"}, "test.fr": {"name": "test.fr", "num_bytes": 3140213, "num_examples": 10000, "dataset_name": "x_glue"}}, "download_checksums": {"https://xglue.blob.core.windows.net/xglue/xglue_full_dataset.tar.gz": {"num_bytes": 875905871, "checksum": "e11016c02d8565d00119833a16679bbbe0fec437f5ad53c2d3f9eef6fa03f65b"}}, "download_size": 875905871, "post_processing_size": null, "dataset_size": 47422335, "size_in_bytes": 923328206}, "qg": {"description": "XGLUE is a new benchmark dataset to evaluate the performance of cross-lingual pre-trained\nmodels with respect to cross-lingual natural language understanding and generation.\nThe benchmark is composed of the following 11 tasks:\n- NER\n- POS Tagging (POS)\n- News Classification (NC)\n- MLQA\n- XNLI\n- PAWS-X\n- Query-Ad Matching (QADSM)\n- Web Page Ranking (WPR)\n- QA Matching (QAM)\n- Question Generation (QG)\n- News Title Generation (NTG)\n\nFor more information, please take a look at https://microsoft.github.io/XGLUE/.\n", "citation": 
"\n@article{Liang2020XGLUEAN,\n title={XGLUE: A New Benchmark Dataset for Cross-lingual Pre-training, Understanding and Generation},\n author={Yaobo Liang and Nan Duan and Yeyun Gong and Ning Wu and Fenfei Guo and Weizhen Qi\n and Ming Gong and Linjun Shou and Daxin Jiang and Guihong Cao and Xiaodong Fan and Ruofei\n Zhang and Rahul Agrawal and Edward Cui and Sining Wei and Taroon Bharti and Ying Qiao\n and Jiun-Hung Chen and Winnie Wu and Shuguang Liu and Fan Yang and Daniel Campos\n and Rangan Majumder and Ming Zhou},\n journal={arXiv},\n year={2020},\n volume={abs/2004.01401}\n}\n", "homepage": "", "license": "", "features": {"answer_passage": {"dtype": "string", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "x_glue", "config_name": "qg", "version": {"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 27464034, "num_examples": 100000, "dataset_name": "x_glue"}, "validation.en": {"name": "validation.en", "num_bytes": 3047040, "num_examples": 10000, "dataset_name": "x_glue"}, "validation.de": {"name": "validation.de", "num_bytes": 3270877, "num_examples": 10000, "dataset_name": "x_glue"}, "validation.es": {"name": "validation.es", "num_bytes": 3341775, "num_examples": 10000, "dataset_name": "x_glue"}, "validation.fr": {"name": "validation.fr", "num_bytes": 3175615, "num_examples": 10000, "dataset_name": "x_glue"}, "validation.it": {"name": "validation.it", "num_bytes": 3191193, "num_examples": 10000, "dataset_name": "x_glue"}, "validation.pt": {"name": "validation.pt", "num_bytes": 3328434, "num_examples": 10000, "dataset_name": "x_glue"}, "test.en": {"name": "test.en", "num_bytes": 3043813, "num_examples": 10000, "dataset_name": "x_glue"}, "test.de": {"name": "test.de", "num_bytes": 3270190, "num_examples": 10000, "dataset_name": "x_glue"}, "test.es": {"name": "test.es", 
"num_bytes": 3353522, "num_examples": 10000, "dataset_name": "x_glue"}, "test.fr": {"name": "test.fr", "num_bytes": 3178352, "num_examples": 10000, "dataset_name": "x_glue"}, "test.it": {"name": "test.it", "num_bytes": 3195684, "num_examples": 10000, "dataset_name": "x_glue"}, "test.pt": {"name": "test.pt", "num_bytes": 3340296, "num_examples": 10000, "dataset_name": "x_glue"}}, "download_checksums": {"https://xglue.blob.core.windows.net/xglue/xglue_full_dataset.tar.gz": {"num_bytes": 875905871, "checksum": "e11016c02d8565d00119833a16679bbbe0fec437f5ad53c2d3f9eef6fa03f65b"}}, "download_size": 875905871, "post_processing_size": null, "dataset_size": 66200825, "size_in_bytes": 942106696}, "ntg": {"description": "XGLUE is a new benchmark dataset to evaluate the performance of cross-lingual pre-trained\nmodels with respect to cross-lingual natural language understanding and generation.\nThe benchmark is composed of the following 11 tasks:\n- NER\n- POS Tagging (POS)\n- News Classification (NC)\n- MLQA\n- XNLI\n- PAWS-X\n- Query-Ad Matching (QADSM)\n- Web Page Ranking (WPR)\n- QA Matching (QAM)\n- Question Generation (QG)\n- News Title Generation (NTG)\n\nFor more information, please take a look at https://microsoft.github.io/XGLUE/.\n", "citation": "\n@article{Liang2020XGLUEAN,\n title={XGLUE: A New Benchmark Dataset for Cross-lingual Pre-training, Understanding and Generation},\n author={Yaobo Liang and Nan Duan and Yeyun Gong and Ning Wu and Fenfei Guo and Weizhen Qi\n and Ming Gong and Linjun Shou and Daxin Jiang and Guihong Cao and Xiaodong Fan and Ruofei\n Zhang and Rahul Agrawal and Edward Cui and Sining Wei and Taroon Bharti and Ying Qiao\n and Jiun-Hung Chen and Winnie Wu and Shuguang Liu and Fan Yang and Daniel Campos\n and Rangan Majumder and Ming Zhou},\n journal={arXiv},\n year={2020},\n volume={abs/2004.01401}\n}\n", "homepage": "", "license": "", "features": {"news_body": {"dtype": "string", "id": null, "_type": "Value"}, "news_title": {"dtype": "string", 
"id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "x_glue", "config_name": "ntg", "version": {"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 890709581, "num_examples": 300000, "dataset_name": "x_glue"}, "validation.en": {"name": "validation.en", "num_bytes": 34317076, "num_examples": 10000, "dataset_name": "x_glue"}, "validation.de": {"name": "validation.de", "num_bytes": 27404379, "num_examples": 10000, "dataset_name": "x_glue"}, "validation.es": {"name": "validation.es", "num_bytes": 30896109, "num_examples": 10000, "dataset_name": "x_glue"}, "validation.fr": {"name": "validation.fr", "num_bytes": 27261523, "num_examples": 10000, "dataset_name": "x_glue"}, "validation.ru": {"name": "validation.ru", "num_bytes": 43247386, "num_examples": 10000, "dataset_name": "x_glue"}, "test.en": {"name": "test.en", "num_bytes": 33697284, "num_examples": 10000, "dataset_name": "x_glue"}, "test.de": {"name": "test.de", "num_bytes": 26738202, "num_examples": 10000, "dataset_name": "x_glue"}, "test.es": {"name": "test.es", "num_bytes": 31111489, "num_examples": 10000, "dataset_name": "x_glue"}, "test.fr": {"name": "test.fr", "num_bytes": 26997447, "num_examples": 10000, "dataset_name": "x_glue"}, "test.ru": {"name": "test.ru", "num_bytes": 44050350, "num_examples": 10000, "dataset_name": "x_glue"}}, "download_checksums": {"https://xglue.blob.core.windows.net/xglue/xglue_full_dataset.tar.gz": {"num_bytes": 875905871, "checksum": "e11016c02d8565d00119833a16679bbbe0fec437f5ad53c2d3f9eef6fa03f65b"}}, "download_size": 875905871, "post_processing_size": null, "dataset_size": 1216430826, "size_in_bytes": 2092336697}}