---
license: cc-by-sa-4.0
dataset_info:
  features:
  - name: id
    dtype: int64
  - name: genre
    dtype: string
  - name: sentence1
    dtype: string
  - name: sentence2
    dtype: string
  - name: score
    dtype: float64
  splits:
  - name: train
    num_bytes: 1034815
    num_examples: 5691
  - name: valid
    num_bytes: 297254
    num_examples: 1465
  - name: test
    num_bytes: 247409
    num_examples: 1376
  download_size: 837346
  dataset_size: 1579478
---

# Korean Semantic Textual Similarity (KorSTS) Dataset

For a more detailed dataset description, please visit the GitHub repository prepared by the authors of the paper: [LINK](https://github.com/kakaobrain/kor-nlu-datasets)

**This dataset was prepared by converting the TSV files from that repository.** The goal was to make the dataset available to a broader audience; I am not its original author.

Because of the specifics of the `read_csv` method from the Pandas library, a few observations had to be dropped due to formatting issues (54 in train, 35 in valid, and 1 in test).

Additionally, **rows containing None values have been removed from the dataset** (5 from train, 1 from valid, and 3 from test).
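
For reference, the conversion described above can be approximated with a sketch like the one below. This is not the original conversion script: the local file path, the `read_csv` arguments, and the column handling are assumptions.

```python
# Minimal sketch of the kind of TSV-to-datasets conversion described above (illustrative only).
# Assumes the TSV files from https://github.com/kakaobrain/kor-nlu-datasets are available locally.
import pandas as pd
from datasets import Dataset

def load_split(path: str) -> Dataset:
    # Malformed lines are skipped (the observations "deleted because of the formatting"),
    # and rows with missing values are dropped afterwards.
    df = pd.read_csv(path, sep="\t", quoting=3, on_bad_lines="skip")
    df = df.dropna().reset_index(drop=True)
    return Dataset.from_pandas(df)

train = load_split("KorSTS/sts-train.tsv")  # hypothetical local path
```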

**How to download**

```python
from datasets import load_dataset

data = load_dataset("dkoterwa/kor-sts")
```
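
After loading, the splits can be inspected with the usual `datasets` API; a brief example (split names as defined in the metadata above):

```python
from datasets import load_dataset

data = load_dataset("dkoterwa/kor-sts")

print(data)              # DatasetDict with "train", "valid", and "test" splits
print(data["train"][0])  # one example: id, genre, sentence1, sentence2, score
```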

**If you use this dataset for research, please cite this paper:**

```bibtex
@article{ham2020kornli,
  title={KorNLI and KorSTS: New Benchmark Datasets for Korean Natural Language Understanding},
  author={Ham, Jiyeon and Choe, Yo Joong and Park, Kyubyong and Choi, Ilji and Soh, Hyungjoon},
  journal={arXiv preprint arXiv:2004.03289},
  year={2020}
}
```