---
license: apache-2.0
task_categories:
- text-classification
language:
- fr
size_categories:
- 10K<n<100K
---

## Clustering HAL

This dataset was created by scraping data from the HAL platform.
Over 80,000 articles were scraped, keeping their id, title and category.

It was originally used for the French version of [MTEB](https://github.com/embeddings-benchmark/mteb), but it can also be used for various clustering or classification tasks.

### Usage

To use this dataset, you can run the following code:
```py
from datasets import load_dataset
dataset = load_dataset("lyon-nlp/clustering-hal-s2s", "test")
```
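Since the card lists pandas among the supported libraries, a common first step is to convert the loaded split to a pandas DataFrame and inspect the category distribution before a clustering or classification run. The sketch below uses a small in-memory sample with hypothetical column names (`id`, `title`, `category`) mirroring the fields described above; the real column names in the dataset may differ.

```python
import pandas as pd

# Hypothetical sample rows mimicking the dataset's fields
# (id, title, category); actual column names may differ.
rows = [
    {"id": "hal-001", "title": "Apprentissage profond pour la vision", "category": "cs"},
    {"id": "hal-002", "title": "Analyse spectrale des signaux", "category": "math"},
    {"id": "hal-003", "title": "Réseaux de neurones convolutifs", "category": "cs"},
]
df = pd.DataFrame(rows)

# Count articles per category -- a typical sanity check before
# training a classifier or evaluating a clustering.
counts = df["category"].value_counts()
print(counts.to_dict())  # → {'cs': 2, 'math': 1}
```

With the real dataset, the same pattern applies after `dataset.to_pandas()` on the loaded split.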

### Citation
If you use this dataset in your work, please consider citing:
```
@misc{ciancone2024extending,
      title={Extending the Massive Text Embedding Benchmark to French}, 
      author={Mathieu Ciancone and Imene Kerboua and Marion Schaeffer and Wissam Siblini},
      year={2024},
      eprint={2405.20468},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```