Update README.md
README.md
CHANGED
@@ -30,4 +30,11 @@ dataset_info:
---
# Dataset Card for "WikiMedical_sentence_similarity"

+WikiMedical_sentence_similarity is an adapted and ready-to-use sentence similarity dataset based on [this dataset](https://huggingface.co/datasets/gamino/wiki_medical_terms).
+
+The preprocessing followed three steps:
+- Each text is split into sentences of 256 tokens (NLTK tokenizer)
+- Each sentence is paired with a positive pair when one is found, and with a negative pair drawn randomly from the whole dataset
+- The train and test splits correspond to 70% and 30% of the data
+
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
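The three added bullets compress a whole pipeline into one line each. Below is a minimal sketch of how such a pipeline could look; it is an illustration under stated assumptions, not the card authors' actual code. The helper names (`chunk_text`, `build_pairs`), the adjacent-chunk heuristic for positive pairs, and the shuffling before the split are all assumptions.

```python
# Hypothetical sketch of the three preprocessing steps described in the diff.
# Helper names and the positive-pair heuristic are assumptions, not the
# dataset authors' published code.
import random

import nltk

nltk.download("punkt", quiet=True)      # sentence/word tokenizer models
nltk.download("punkt_tab", quiet=True)  # needed on newer NLTK versions

MAX_TOKENS = 256  # chunk size stated on the card


def chunk_text(text: str) -> list[str]:
    """Step 1: split a text into chunks of at most MAX_TOKENS NLTK tokens."""
    chunks, current = [], []
    for sentence in nltk.sent_tokenize(text):
        tokens = nltk.word_tokenize(sentence)
        if current and len(current) + len(tokens) > MAX_TOKENS:
            chunks.append(" ".join(current))
            current = []
        current.extend(tokens)
    if current:
        chunks.append(" ".join(current))
    return chunks


def build_pairs(texts: list[str]) -> list[tuple[str, str, int]]:
    """Step 2: pair each chunk with a positive (assumed here to be the next
    chunk of the same article, label 1) and a random negative (label 0)."""
    per_text = [chunk_text(t) for t in texts]
    all_chunks = [c for chunks in per_text for c in chunks]
    pairs = []
    for chunks in per_text:
        for j, chunk in enumerate(chunks):
            if j + 1 < len(chunks):               # positive pair, if found
                pairs.append((chunk, chunks[j + 1], 1))
            negative = random.choice(all_chunks)  # drawn from the whole dataset
            pairs.append((chunk, negative, 0))
    return pairs


# Step 3: 70%/30% train/test split (shuffling before the cut is an assumption).
random.seed(0)
pairs = build_pairs(["Aspirin is a medication ...", "Diabetes is a disease ..."])
random.shuffle(pairs)
cut = int(0.7 * len(pairs))
train, test = pairs[:cut], pairs[cut:]
```

A real pipeline would likely also exclude chunks from the same article when sampling negatives, since a purely random draw over the whole dataset can occasionally yield a false negative.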