---
annotations_creators:
- other
language:
- sv
language_creators:
- other
multilinguality:
- monolingual
pretty_name: >-
A standardized suite for evaluation and analysis of Swedish natural language
understanding systems.
size_categories:
- unknown
source_datasets: []
task_categories:
- multiple-choice
- text-classification
- question-answering
- sentence-similarity
- token-classification
- summarization
task_ids:
- sentiment-analysis
- acceptability-classification
- closed-domain-qa
- word-sense-disambiguation
- coreference-resolution
---
# Dataset Card for Superlim-2
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [The official homepage of Språkbanken](https://spraakbanken.gu.se/resurser/superlim/)
- **Repository:**
- **Paper:** [SwedishGLUE – Towards a Swedish Test Set for Evaluating Natural Language Understanding Models](https://gup.ub.gu.se/publication/299130?lang=sv)
- **Leaderboard:** https://lab.kb.se/leaderboard/
- **Point of Contact:** [sb-info@svenska.gu.se](mailto:sb-info@svenska.gu.se)
### Dataset Summary
SuperLim 2.0 is a continuation of SuperLim 1.0 and aims to provide a standardized suite for the evaluation and analysis of Swedish natural language understanding systems. The project is inspired by the GLUE/SuperGLUE projects, from which the name is derived: "lim" is Swedish for "glue".
Since SuperLim 2.0 is a collection of datasets, we refer to the specific data cards and documentation sheets in the official GitHub repository for information about dataset structure, creation, social impact, etc.: https://github.com/spraakbanken/SuperLim-2/
### Supported Tasks and Leaderboards
See our leaderboard: https://lab.kb.se/leaderboard/
### Languages
Swedish
## Dataset Structure
### Data Instances
See individual datasets: https://github.com/spraakbanken/SuperLim-2/
### Data Fields
See individual datasets: https://github.com/spraakbanken/SuperLim-2/
### Data Splits
Most datasets have train, dev and test splits. However, a few (`supersim`, `sweanalogy` and `swesat-synonyms`) only have train and test splits. The diagnostic tasks `swediagnostics` and `swewinogender` only have a test split, but since they are also NLI-based, they can be evaluated with models trained on `swenli`.
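The split layout described above can be sketched as follows. The split lists are taken from this card; the task keys and the helper function are illustrative only — consult the GitHub repository for the exact configuration names of each dataset:

```python
from typing import Optional

# Split availability for a few SuperLim 2.0 tasks, as described in this
# card. This mapping is an illustrative sketch, not an official manifest.
SPLITS = {
    "swenli": ["train", "dev", "test"],        # full three-way split
    "supersim": ["train", "test"],             # no dev split
    "sweanalogy": ["train", "test"],           # no dev split
    "swesat-synonyms": ["train", "test"],      # no dev split
    "swediagnostics": ["test"],                # diagnostic: test only
    "swewinogender": ["test"],                 # diagnostic: test only
}

def tuning_split(task: str) -> Optional[str]:
    """Return the split to use for hyperparameter tuning: 'dev' when the
    task provides one, otherwise None (hold out a slice of 'train'
    yourself). Diagnostic, test-only tasks also return None; per the
    card, they are evaluated with models trained on `swenli`."""
    return "dev" if "dev" in SPLITS[task] else None
```

For example, `tuning_split("swenli")` returns `"dev"`, while `tuning_split("supersim")` returns `None`, signalling that any validation data must be carved out of the train split.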
## Dataset Creation
### Curation Rationale
See individual datasets: https://github.com/spraakbanken/SuperLim-2/
### Source Data
#### Initial Data Collection and Normalization
See individual datasets: https://github.com/spraakbanken/SuperLim-2/
#### Who are the source language producers?
See individual datasets: https://github.com/spraakbanken/SuperLim-2/
### Annotations
#### Annotation process
See individual datasets: https://github.com/spraakbanken/SuperLim-2/
#### Who are the annotators?
See individual datasets: https://github.com/spraakbanken/SuperLim-2/
### Personal and Sensitive Information
See individual datasets: https://github.com/spraakbanken/SuperLim-2/
## Considerations for Using the Data
### Social Impact of Dataset
See individual datasets: https://github.com/spraakbanken/SuperLim-2/
### Discussion of Biases
See individual datasets: https://github.com/spraakbanken/SuperLim-2/
### Other Known Limitations
See individual datasets: https://github.com/spraakbanken/SuperLim-2/
### Dataset Curators
See individual datasets: https://github.com/spraakbanken/SuperLim-2/
### Licensing Information
All datasets constituting SuperLim are available under Creative Commons licenses (CC BY 4.0 or CC BY-SA 4.0, depending on the dataset).
### Citation Information
To cite the benchmark as a whole, use the standard reference below. If you use or reference individual resources, cite the references specific to those resources:
Standard reference:
Superlim: A Swedish Language Understanding Evaluation Benchmark (Berdicevskis et al., EMNLP 2023)
```
@inproceedings{berdicevskis-etal-2023-superlim,
title = "Superlim: A {S}wedish Language Understanding Evaluation Benchmark",
author = {Berdicevskis, Aleksandrs and
Bouma, Gerlof and
Kurtz, Robin and
Morger, Felix and
{\"O}hman, Joey and
Adesam, Yvonne and
Borin, Lars and
Dann{\'e}lls, Dana and
Forsberg, Markus and
Isbister, Tim and
Lindahl, Anna and
Malmsten, Martin and
Rekathati, Faton and
Sahlgren, Magnus and
Volodina, Elena and
B{\"o}rjeson, Love and
Hengchen, Simon and
Tahmasebi, Nina},
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.506",
doi = "10.18653/v1/2023.emnlp-main.506",
pages = "8137--8153",
abstract = "We present Superlim, a multi-task NLP benchmark and analysis platform for evaluating Swedish language models, a counterpart to the English-language (Super)GLUE suite. We describe the dataset, the tasks, the leaderboard and report the baseline results yielded by a reference implementation. The tested models do not approach ceiling performance on any of the tasks, which suggests that Superlim is truly difficult, a desirable quality for a benchmark. We address methodological challenges, such as mitigating the Anglocentric bias when creating datasets for a less-resourced language; choosing the most appropriate measures; documenting the datasets and making the leaderboard convenient and transparent. We also highlight other potential usages of the dataset, such as, for instance, the evaluation of cross-lingual transfer learning.",
}
```
Thanks to [Felix Morger](https://github.com/felixhultin) for adding this dataset. |