---
language:
- am
- ha
- en
- es
- te
- ar
- af
license: mit
size_categories:
- 10K<n<100K
tags:
- Semantic Textual Relatedness
dataset_info:
features:
- name: PairID
dtype: string
- name: Language
dtype: string
- name: Sentence1
dtype: string
- name: Sentence2
dtype: string
- name: Length
dtype: int64
- name: Score
dtype: float64
splits:
- name: train
num_bytes: 4248215
num_examples: 15123
- name: dev
num_bytes: 460985
num_examples: 1390
download_size: 2400795
dataset_size: 4709200
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: dev
path: data/dev-*
---
# Dataset Card for Semantic Textual Relatedness (SemEval-2024)
## Dataset Details
### Dataset Description
Each instance in the training, development, and test sets is a sentence pair labeled with a score representing the degree of semantic textual relatedness between the two sentences. Scores range from 0 (maximally unrelated) to 1 (maximally related).

The gold scores were determined through manual annotation using a comparative annotation approach, which avoids several known biases of traditional rating-scale methods and yields highly reliable relatedness rankings. Further details about the task, the annotation method, how semantic textual relatedness (STR) differs from semantic textual similarity, and applications of STR can be found in the accompanying paper.
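As a quick sanity check of the schema declared in the metadata above, the sketch below loads the two configured splits and inspects one pair with the 🤗 `datasets` library. The Hub repository ID (`user/semantic-textual-relatedness`) is a placeholder, not the real identifier of this dataset.

```python
from datasets import load_dataset

# Placeholder Hub ID -- replace with the actual repository ID of this dataset.
ds = load_dataset("user/semantic-textual-relatedness")

# The card's config defines two splits: train (15,123 rows) and dev (1,390 rows).
print(ds)

example = ds["train"][0]
# Each row follows the features declared in the metadata:
# PairID (string), Language (string), Sentence1 (string),
# Sentence2 (string), Length (int64), Score (float64 in [0, 1]).
print(example["Sentence1"])
print(example["Sentence2"])
print(example["Score"])
```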
### Dataset Sources
- **Repository:** [Semantic_Relatedness_SemEval2024](https://github.com/semantic-textual-relatedness/Semantic_Relatedness_SemEval2024/tree/main)