---
license: cc-by-sa-4.0
dataset_info:
  features:
  - name: id
    dtype: int64
  - name: genre
    dtype: string
  - name: sentence1
    dtype: string
  - name: sentence2
    dtype: string
  - name: score
    dtype: float64
  splits:
  - name: train
    num_bytes: 1034815
    num_examples: 5691
  - name: valid
    num_bytes: 297254
    num_examples: 1465
  - name: test
    num_bytes: 247409
    num_examples: 1376
  download_size: 837346
  dataset_size: 1579478
---
# Korean Semantic Textual Similarity (KorSTS) Dataset
For a full dataset description, please visit the GitHub repository prepared by the authors of the paper: [LINK](https://github.com/kakaobrain/kor-nlu-datasets)

**This dataset was prepared by converting the tsv files from that repository.** The goal was to share the dataset with a broader audience; I am not the original author.

Due to the specifics of the `read_csv` method from the pandas library, a couple of observations had to be deleted because of their formatting (54 in train, 35 in valid, and 1 in test).
Additionally, **rows with None values have been removed from the dataset** (5 from train, 1 from valid, and 3 from test).
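
Below is a minimal sketch of the kind of conversion described above, assuming the original tsv files from the kakaobrain/kor-nlu-datasets repository are available locally; it is not the exact script that was used, and the file name is hypothetical.
```
import csv

import pandas as pd
from datasets import Dataset

def tsv_to_dataset(path: str) -> Dataset:
    # on_bad_lines="skip" drops rows that pandas cannot parse
    # (the handful of badly formatted observations mentioned above).
    df = pd.read_csv(
        path,
        sep="\t",
        quoting=csv.QUOTE_NONE,  # treat quotes literally to avoid tsv parsing issues
        on_bad_lines="skip",
    )
    # Remove rows containing missing (None/NaN) values.
    df = df.dropna().reset_index(drop=True)
    return Dataset.from_pandas(df, preserve_index=False)

# Hypothetical local file name; see the linked repository for the original files.
train = tsv_to_dataset("sts-train.tsv")
```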
**How to download**
```
from datasets import load_dataset

# Loads the train / valid / test splits as a DatasetDict
data = load_dataset("dkoterwa/kor-sts")
```
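After loading, the splits and columns listed in the metadata above can be inspected directly:
```
# 'data' is a DatasetDict with train, valid, and test splits
print(data)

# Each example has the fields: id, genre, sentence1, sentence2, score
print(data["train"][0])
```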
**If you use this dataset for research, please cite this paper:**
```
@article{ham2020kornli,
  title={KorNLI and KorSTS: New Benchmark Datasets for Korean Natural Language Understanding},
  author={Ham, Jiyeon and Choe, Yo Joong and Park, Kyubyong and Choi, Ilji and Soh, Hyungjoon},
  journal={arXiv preprint arXiv:2004.03289},
  year={2020}
}
```