nreimers committed
Commit 0e1781f
1 Parent(s): d04b167

add searchQA
Files changed (2):
  1. README.md (+3 -1)
  2. searchQA_top5_snippets.jsonl.gz (+3 -0)
README.md CHANGED
@@ -10,7 +10,8 @@ The JSON objects can come in different formats:
  - **Pairs:** `["text1", "text2"]` - This is a positive pair that should be close in vector space.
  - **Triplets:** `["anchor", "positive", "negative"]` - This is a triplet: the `positive` text should be close to the `anchor`, while the `negative` text should be distant from the `anchor`.
  - **Sets:** `{"set": ["text1", "text2", ...]}` A set of texts describing the same thing, e.g. different paraphrases of the same question or different captions for the same image. Any combination of the elements is considered a positive pair.
- - **Query-Triplets:** `{"query": "text", "pos": ["text1", "text2", ...], "neg": ["text1", "text2", ...]}` A query together with a set of positive texts and negative texts. Can be formed into a triplet `["anchor", "positive", "negative"]` by randomly selecting a text from `pos` and `neg`.
+ - **Query-Pairs:** `{"query": "text", "pos": ["text1", "text2", ...]}` A query together with a set of positive texts. Can be formed into a pair `["query", "positive"]` by randomly selecting a text from `pos`.
+ - **Query-Triplets:** `{"query": "text", "pos": ["text1", "text2", ...], "neg": ["text1", "text2", ...]}` A query together with a set of positive texts and negative texts. Can be formed into a triplet `["query", "positive", "negative"]` by randomly selecting a text from `pos` and `neg`.
 
  ## Available Datasets
 
@@ -41,6 +42,7 @@ We measure the performance for each training dataset by training the [nreimers/M
  | [stackexchange_duplicate_questions_title-body_title-body.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/stackexchange_duplicate_questions_title-body_title-body.jsonl.gz) | (Title+Body, Title+Body) pairs of duplicate questions from StackExchange | 250,460 | 57.30 | [Stack Exchange Data API](https://data.stackexchange.com/apple/query/fork/1456963)
  | [stackexchange_duplicate_questions_title_title.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/stackexchange_duplicate_questions_title_title.jsonl.gz) | (Title, Title) pairs of duplicate questions from StackExchange | 304,525 | 58.47 | [Stack Exchange Data API](https://data.stackexchange.com/apple/query/fork/1456963)
  | [S2ORC_title_abstract.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/S2ORC_title_abstract.jsonl.gz) | (Title, Abstract) pairs of scientific papers | 41,769,185 | 57.39 | [S2ORC](https://github.com/allenai/s2orc)
+ | [searchQA_top5_snippets.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/searchQA_top5_snippets.jsonl.gz) | Question + top-5 text snippets from the SearchQA dataset | 117,220 | 57.34 | [search_qa](https://huggingface.co/datasets/search_qa)
  | [SimpleWiki.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/SimpleWiki.jsonl.gz) | Matched pairs (English_Wikipedia, Simple_English_Wikipedia) | 102,225 | 56.15 | [SimpleWiki](https://cs.pomona.edu/~dkauchak/simplification/)
  | [TriviaQA_pairs.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/TriviaQA_pairs.jsonl.gz) | Pairs (query, answer) from the TriviaQA dataset | 73,346 | 55.56 | [TriviaQA](https://huggingface.co/datasets/trivia_qa)
  | [WikiAnswers.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/WikiAnswers.jsonl.gz) | Sets of duplicate questions | 27,383,151 | 57.34 | [WikiAnswers Corpus](https://github.com/afader/oqa#wikianswers-corpus)
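The formats listed in the README hunk above map directly onto training tuples. As a minimal, illustrative sketch (not part of this commit; the helper name `iter_training_examples` is invented here), any of the `.jsonl.gz` files could be read and turned into pairs or triplets like this:

```python
import gzip
import json
import random

def iter_training_examples(path):
    """Yield (anchor, positive) or (anchor, positive, negative) tuples
    from a .jsonl.gz file using the format conventions above."""
    with gzip.open(path, "rt", encoding="utf8") as f:
        for line in f:
            data = json.loads(line)
            if isinstance(data, dict):
                if "set" in data:
                    # Sets: any two elements form a positive pair.
                    a, b = random.sample(data["set"], 2)
                    yield (a, b)
                elif "neg" in data:
                    # Query-Triplets: sample one positive and one negative.
                    yield (data["query"], random.choice(data["pos"]), random.choice(data["neg"]))
                else:
                    # Query-Pairs: sample one positive for the query.
                    yield (data["query"], random.choice(data["pos"]))
            elif len(data) == 3:
                # Triplets: [anchor, positive, negative]
                yield tuple(data)
            else:
                # Pairs: [text1, text2]
                yield tuple(data)
```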
searchQA_top5_snippets.jsonl.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ace2499a0083ee9d6a3f4a11f878cf26afec536d70f54fe40b530cb001320232
+ size 77894565
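Note that the repository itself stores only this Git LFS pointer; the actual ~78 MB archive lives on the LFS server. A minimal sketch for fetching the real file with the `huggingface_hub` client (a tooling assumption, not part of this commit):

```python
from huggingface_hub import hf_hub_download

# Resolves the LFS pointer and downloads the actual ~78 MB .jsonl.gz
# file into the local Hugging Face cache, returning the cached path.
path = hf_hub_download(
    repo_id="sentence-transformers/embedding-training-data",
    filename="searchQA_top5_snippets.jsonl.gz",
    repo_type="dataset",
)
print(path)
```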