add paper to README
README.md
CHANGED
@@ -11,6 +11,6 @@ size_categories:
 ---
 This dataset can be used as a benchmark for clustering word embeddings for <b>German</b>.
 
-The datasets contains news article titles and is based on the dataset of the [One Million Posts Corpus](https://ofai.github.io/million-post-corpus/) and [10kGNAD](https://github.com/tblock/10kGNAD). It contains 10'267 unique samples, 10 splits with 1'436 to 9'962 samples and 9 unique classes. Splits are built similarly to MTEB's [TwentyNewsgroupsClustering](https://huggingface.co/datasets/mteb/twentynewsgroups-clustering)
+The dataset contains news article titles and is based on the [One Million Posts Corpus](https://ofai.github.io/million-post-corpus/) and [10kGNAD](https://github.com/tblock/10kGNAD). It contains 10'267 unique samples, 10 splits with 1'436 to 9'962 samples, and 9 unique classes. Splits are built similarly to MTEB's [TwentyNewsgroupsClustering](https://huggingface.co/datasets/mteb/twentynewsgroups-clustering).
 
-Have a look at
+Have a look at the German Text Embedding Clustering Benchmark ([GitHub](https://github.com/ClimSocAna/tecb-de), [Paper](https://arxiv.org/abs/2401.02709)) for more information, datasets, and evaluation results.
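Since the splits are built like MTEB's clustering tasks, each one is typically scored by clustering the embedded titles and comparing against the gold labels with V-measure. The following is a minimal sketch of that evaluation loop; the embeddings here are random stand-ins (not a real embedding model), and the 90-sample toy setup mirrors only the 9-class structure described above.

```python
# Sketch of MTEB-style clustering evaluation: k-means on sentence
# embeddings, scored with V-measure against gold labels.
# NOTE: embeddings below are synthetic placeholders, not real model output.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import v_measure_score

rng = np.random.default_rng(42)

# Stand-in for embedded news titles: 90 samples across 9 gold classes.
labels = np.repeat(np.arange(9), 10)
# Fake 32-dim embeddings with per-class offsets so clusters are separable.
class_centers = 5 * rng.normal(size=(9, 32))
embeddings = rng.normal(size=(90, 32)) + class_centers[labels]

# Cluster into as many groups as there are gold classes, then score.
pred = KMeans(n_clusters=9, n_init=10, random_state=0).fit_predict(embeddings)
score = v_measure_score(labels, pred)
print(f"V-measure: {score:.3f}")
```

In a real run you would replace the synthetic embeddings with the output of an embedding model on each split's titles and average the scores over the 10 splits.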