add initial version of readme
README.md
CHANGED
@@ -1,3 +1,16 @@
 ---
 license: cc-by-nc-sa-4.0
+language:
+- de
+tags:
+- embeddings
+- clustering
+- benchmark
+size_categories:
+- 10K<n<100K
 ---
+This dataset can be used as a benchmark for clustering word embeddings for <b>German</b>.
+
+The dataset contains news article titles and is based on the [One Million Posts Corpus](https://ofai.github.io/million-post-corpus/) and [10kGNAD](https://github.com/tblock/10kGNAD). It contains 10'267 unique samples, 10 splits with 1'436 to 9'962 samples, and 9 unique classes. The splits are built similarly to MTEB's [TwentyNewsgroupsClustering](https://huggingface.co/datasets/mteb/twentynewsgroups-clustering) ([paper](https://arxiv.org/abs/2210.07316)).
+
+Have a look at the [German Text Embedding Clustering Benchmark](https://github.com/ClimSocAna/tecb-de) for more information, datasets, and evaluation results.
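
For readers who want to inspect the splits described in the README above, here is a minimal loading sketch. The Hub repository ID and the column names (`sentences`, `labels`) are assumptions borrowed from MTEB's clustering datasets, not confirmed by this commit; check the dataset card for the actual identifiers.

```python
from datasets import load_dataset

# NOTE: placeholder repository ID -- replace with the actual Hub ID of this dataset.
ds = load_dataset("ClimSocAna/tecb-de-news-titles", split="test")

# Assuming the MTEB clustering layout: each row is one of the 10 splits and
# holds parallel lists of texts ("sentences") and class labels ("labels").
for i, row in enumerate(ds):
    print(f"split {i}: {len(row['sentences'])} titles, "
          f"{len(set(row['labels']))} classes")
```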
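Since the splits follow MTEB's TwentyNewsgroupsClustering, an evaluation can mirror MTEB's clustering protocol: embed each split, run k-means with k equal to the number of gold classes, and average the V-measure across splits. The sketch below follows that protocol under the same assumed repository ID and column names as above; the embedding model is illustrative, not taken from this commit.

```python
import numpy as np
from datasets import load_dataset
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans
from sklearn.metrics import v_measure_score

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")  # illustrative model
ds = load_dataset("ClimSocAna/tecb-de-news-titles", split="test")     # placeholder ID

scores = []
for row in ds:
    # Embed all titles of this split, then cluster them into as many
    # clusters as there are gold classes, as in MTEB's clustering tasks.
    embeddings = model.encode(row["sentences"])
    n_classes = len(set(row["labels"]))
    preds = KMeans(n_clusters=n_classes, n_init=10, random_state=0).fit_predict(embeddings)
    scores.append(v_measure_score(row["labels"], preds))

print(f"mean V-measure over {len(scores)} splits: {np.mean(scores):.4f}")
```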