Update README.md
README.md (changed)

The dataset consists of 5500 English sentence pairs that are scored and ranked on a relatedness scale ranging from 0 (least related) to 1 (most related).

### Loading the Dataset

The sentence pairs and their associated scores are in the file sem_text_rel_ranked.csv in the root directory. The CSV file can be read using:

```python
import pandas as pd

# Load the sentence pairs and their relatedness scores.
df = pd.read_csv("sem_text_rel_ranked.csv")
```
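
Since this card lives in a Hugging Face dataset repository, the same CSV can also be loaded through the datasets library. This is a sketch, not a route the card itself documents; it assumes the file has been downloaded to the working directory:

```python
from datasets import load_dataset

# Treat the CSV as a dataset; this yields a single "train" split by default.
ds = load_dataset("csv", data_files="sem_text_rel_ranked.csv")
print(ds["train"][0])
```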

The SubsetID column indicates the sampling strategy used for the source dataset, and the PairID is a unique identifier for each pair that also indicates its Source and Subset.
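
For example, these two columns can be inspected directly. A minimal sketch reusing the df loaded above; only the SubsetID and PairID column names mentioned on this card are assumed:

```python
# How many pairs came from each sampling strategy?
print(df["SubsetID"].value_counts())

# PairID should uniquely identify each sentence pair.
assert df["PairID"].is_unique
```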

### Why Semantic Relatedness?

Closeness of meaning can be of two kinds: semantic relatedness and semantic similarity. Two sentences are considered semantically similar when they have a paraphrase or entailment relation, whereas relatedness accounts for all of the commonalities that can exist between two sentences. Semantic relatedness is central to textual coherence and narrative structure. Automatically determining semantic relatedness has many applications, such as question answering, plagiarism detection, text generation (say, in personal assistants and chatbots), and summarization.

Prior NLP work has focused on semantic similarity (a small subset of semantic relatedness), largely because of a dearth of datasets. Here, we present the first manually annotated dataset of sentence-sentence semantic relatedness. It includes fine-grained scores of relatedness from 0 (least related) to 1 (most related) for 5,500 English sentence pairs. The sentences are taken from diverse sources and thus also have diverse sentence structures, varying amounts of lexical overlap, and varying formality.

### Comparative Annotations and Best-Worst Scaling

Most existing sentence-sentence similarity datasets were annotated one item at a time, using coarse rating labels such as integer values between 1 and 5 to represent degrees of closeness. It is well documented that such approaches suffer from inter- and intra-annotator inconsistency, scale region bias, and issues arising from the fixed granularity.

The relatedness scores for our dataset were, instead, obtained using a __comparative__ annotation schema. In comparative annotation, two (or more) items are presented together and the annotator has to determine which is greater with respect to the metric of interest.

Specifically, we use Best-Worst Scaling, a comparative annotation method that has been shown to produce reliable scores with fewer annotations in other NLP tasks. We use the scripts from https://saifmohammad.com/WebPages/BestWorst.html to obtain relatedness scores from our annotations.
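
The scoring itself is done with the scripts linked above. As an illustration only, the standard best-minus-worst counting procedure behind Best-Worst Scaling can be sketched as follows; the 4-item tuples, the toy annotations, and the linear rescaling to [0, 1] are assumptions for the sketch, not details stated on this card:

```python
from collections import defaultdict

def bws_scores(annotations):
    """Best-minus-worst counting for Best-Worst Scaling.

    annotations: list of (items, best, worst) triples, where items is
    the tuple of sentence-pair IDs shown together to one annotator.
    """
    best = defaultdict(int)
    worst = defaultdict(int)
    seen = defaultdict(int)
    for items, chosen_best, chosen_worst in annotations:
        for item in items:
            seen[item] += 1
        best[chosen_best] += 1
        worst[chosen_worst] += 1
    # Raw score in [-1, 1]: fraction of times chosen best minus fraction chosen worst.
    raw = {i: (best[i] - worst[i]) / seen[i] for i in seen}
    # Rescale linearly to [0, 1] to match the relatedness scale used here.
    return {i: (s + 1) / 2 for i, s in raw.items()}

# Toy example: three annotators each judge the same 4-tuple of pair IDs.
anns = [
    (("p1", "p2", "p3", "p4"), "p1", "p4"),
    (("p1", "p2", "p3", "p4"), "p1", "p3"),
    (("p1", "p2", "p3", "p4"), "p2", "p4"),
]
print(bws_scores(anns))  # p1 scores highest, p4 lowest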

### Citing our work

Please use the following BibTeX entry to cite us if you use our dataset or any of the [associated analyses](https://arxiv.org/abs/2110.04845):

```
...
}
```

### Ethics Statement

Any dataset of semantic relatedness entails several ethical considerations. We discuss these in Section 10 of [our paper](https://arxiv.org/abs/2110.04845).

## Creators