Update README.md
README.md CHANGED
@@ -7,7 +7,7 @@ tags:
 - transformers
 ---
 
-# msmarco-bert-base-dot-
+# msmarco-bert-base-dot-v5
 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and was designed for **semantic search**. It has been trained on 500K (query, answer) pairs from the [MS MARCO dataset](https://github.com/microsoft/MSMARCO-Passage-Ranking/). For an introduction to semantic search, have a look at: [SBERT.net - Semantic Search](https://www.sbert.net/examples/applications/semantic-search/README.html)
 
 
@@ -26,7 +26,7 @@ query = "How many people live in London?"
 docs = ["Around 9 Million people live in London", "London is known for its financial district"]
 
 #Load the model
-model = SentenceTransformer('sentence-transformers/msmarco-bert-base-dot-
+model = SentenceTransformer('sentence-transformers/msmarco-bert-base-dot-v5')
 
 #Encode query and documents
 query_emb = model.encode(query)
@@ -82,8 +82,8 @@ query = "How many people live in London?"
 docs = ["Around 9 Million people live in London", "London is known for its financial district"]
 
 # Load model from HuggingFace Hub
-tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/msmarco-bert-base-dot-
-model = AutoModel.from_pretrained("sentence-transformers/msmarco-bert-base-dot-
+tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/msmarco-bert-base-dot-v5")
+model = AutoModel.from_pretrained("sentence-transformers/msmarco-bert-base-dot-v5")
 
 #Encode query and docs
 query_emb = encode(query)
@@ -121,7 +121,7 @@ In the following some technical details how this model must be used:
 
 <!--- Describe how your model was evaluated -->
 
-For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=msmarco-bert-base-base-dot-
+For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=msmarco-bert-base-base-dot-v5)
 
 
 ## Training
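The snippets touched by this diff stop at `model.encode(...)`; the step they build toward is ranking documents by the dot product between the query embedding and each document embedding, which is what this model was trained for (dot product, not cosine similarity). A minimal sketch of that ranking step, using random NumPy stand-ins for the 768-dimensional embeddings so it runs without downloading the checkpoint (in the real pipeline these vectors come from `model.encode(...)`):

```python
import numpy as np

rng = np.random.default_rng(0)

docs = ["Around 9 Million people live in London",
        "London is known for its financial district"]

# Stand-in embeddings; the real ones are produced by
# SentenceTransformer('sentence-transformers/msmarco-bert-base-dot-v5').encode(...)
query_emb = rng.standard_normal(768)          # shape (768,)
doc_embs = rng.standard_normal((2, 768))      # shape (n_docs, 768)

# Dot-product relevance score of each document against the query.
scores = doc_embs @ query_emb                 # shape (n_docs,)

# Rank documents by score, highest first.
for idx in np.argsort(-scores):
    print(f"{scores[idx]:.4f}\t{docs[idx]}")
```

With the actual model outputs, the same `doc_embs @ query_emb` scoring (or `sentence_transformers.util.dot_score`) yields the semantic-search ranking described in the README.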