tomaarsen (HF staff) committed
Commit 2d8f7a6
1 Parent(s): 726925f

Add new SentenceTransformer model.

1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "word_embedding_dimension": 768,
+   "pooling_mode_cls_token": false,
+   "pooling_mode_mean_tokens": true,
+   "pooling_mode_max_tokens": false,
+   "pooling_mode_mean_sqrt_len_tokens": false,
+   "pooling_mode_weightedmean_tokens": false,
+   "pooling_mode_lasttoken": false,
+   "include_prompt": true
+ }
README.md ADDED
@@ -0,0 +1,721 @@
+ ---
+ base_model: microsoft/mpnet-base
+ datasets:
+ - tomaarsen/gooaq-hard-negatives
+ - sentence-transformers/gooaq
+ language:
+ - en
+ library_name: sentence-transformers
+ license: apache-2.0
+ metrics:
+ - cosine_accuracy@1
+ - cosine_accuracy@3
+ - cosine_accuracy@5
+ - cosine_accuracy@10
+ - cosine_precision@1
+ - cosine_precision@3
+ - cosine_precision@5
+ - cosine_precision@10
+ - cosine_recall@1
+ - cosine_recall@3
+ - cosine_recall@5
+ - cosine_recall@10
+ - cosine_ndcg@10
+ - cosine_mrr@10
+ - cosine_map@100
+ - dot_accuracy@1
+ - dot_accuracy@3
+ - dot_accuracy@5
+ - dot_accuracy@10
+ - dot_precision@1
+ - dot_precision@3
+ - dot_precision@5
+ - dot_precision@10
+ - dot_recall@1
+ - dot_recall@3
+ - dot_recall@5
+ - dot_recall@10
+ - dot_ndcg@10
+ - dot_mrr@10
+ - dot_map@100
+ pipeline_tag: sentence-similarity
+ tags:
+ - sentence-transformers
+ - sentence-similarity
+ - feature-extraction
+ - generated_from_trainer
+ - dataset_size:2286783
+ - loss:MultipleNegativesRankingLoss
+ widget:
+ - source_sentence: how to download a youtube video onto usb?
+   sentences:
+   - Copy YouTube URL to Download Go to YouTube video you want to download to USB and
+     copy its URL. Paste the link to download YouTube. Choose a necessary video or
+     audio format and quality.
+   - Before surgeons are qualified to operate, they must meet a set of challenging
+     education requirements. These generally include four years of undergraduate study,
+     four years of medical school leading to a Doctor of Medicine (M.D.) degree, and
+     three to eight years of surgical residency at a hospital.
+   - A Roman numeral representing the number eighteen (18).
+ - source_sentence: what is the best diet for a leaky gut?
+   sentences:
+   - When a woman is pregnant, she does not continue to ovulate and will not have a
+     period. Menstruation only occurs when a person is not pregnant. Although it is
+     possible for women to experience some bleeding during pregnancy, this will not
+     be due to their menstrual cycle.
+   - To combat leaky gut, eat foods that promote the growth of healthy gut bacteria,
+     including fruits, cultured dairy products, healthy fats, lean meats, and fibrous
+     and fermented vegetables.
+   - Popcorn Ceiling vs Asbestos Popcorn Ceiling Removal Cost CostHelper says Popcorn
+     ceilings not containing asbestos can expect to pay about $1 to $3 per square foot
+     or $250 to $900 to remove a popcorn ceiling from a 15'x20' room or $1,200 to $1,400
+     for a 1,6000 sq.
+ - source_sentence: what is the difference between joint tenancy and common tenancy?
+   sentences:
+   - You (TV series) You is an American psychological thriller television series developed
+     by Greg Berlanti and Sera Gamble. ... In December 2018, it was announced that
+     the series would move to Netflix as a Netflix Original title. The second season
+     was released exclusively on Netflix on December 26, 2019.
+   - A normal resting heart rate range is between 60 and 100 bpm.
+   - Joint tenancy also differs from tenancy in common because when one joint tenant
+     dies, the other remaining joint tenants inherit the deceased tenant's interest
+     in the property. However, a joint tenancy does allow owners to sell their interests.
+     If one owner sells, the tenancy is converted to a tenancy in common.
+ - source_sentence: what is the cause of blood clots in urine?
+   sentences:
+   - If sufficient blood is present in the urine, the blood may form a clot. The clot
+     can completely block the flow of urine, causing sudden extreme pain and inability
+     to urinate. Bleeding severe enough to cause such a clot is usually caused by an
+     injury to the urinary tract.
+   - Distance is the magnitude (length) of the displacement vector. Path length is
+     how far the object moved as it traveled from its initial position to its final
+     position.
+   - In fact, the brand is consistently ranked near the top of automakers in terms
+     of the most expensive cars to maintain. The total maintenance costs of the average
+     Audi over a 10-year span is $12,400. ... All cars are different, and many require
+     more maintenance than some depending on their age and driving history.
+ - source_sentence: are hard seltzers malt liquor?
+   sentences:
+   - The BCD method measures the distance from the apex of the breast down to the wire
+     line directly below it. That measurement in inches will determine your cup and
+     frame size. Then take your Rib Cage measurement directly under your bra. ... For
+     example, the BCD might be 4.0 and the Rib Cage of 32.
+   - Seltzer is carbonated water. “Hard seltzer” is a flavored malt beverage — essentially
+     the same as a Lime-A-Rita or a Colt 45 or a Smirnoff Ice. These products derive
+     their alcohol from fermented malted grains and are then carbonated, flavored and
+     sweetened.
+   - Bleaching action of chlorine is based on oxidation while that of sulphur is based
+     on reduction. Chlorine acts with water to produce nascent oxygen. ... Sulphour
+     dioxide removes oxygen from the coloured substance and makes it colourless.
+ co2_eq_emissions:
+   emissions: 1550.677005890232
+   energy_consumed: 3.989372336366245
+   source: codecarbon
+   training_type: fine-tuning
+   on_cloud: false
+   cpu_model: 13th Gen Intel(R) Core(TM) i7-13700K
+   ram_total_size: 31.777088165283203
+   hours_used: 11.599
+   hardware_used: 1 x NVIDIA GeForce RTX 3090
+ model-index:
+ - name: MPNet base trained on GooAQ triplets with hard negatives
+   results:
+   - task:
+       type: information-retrieval
+       name: Information Retrieval
+     dataset:
+       name: gooaq dev
+       type: gooaq-dev
+     metrics:
+     - type: cosine_accuracy@1
+       value: 0.7413
+       name: Cosine Accuracy@1
+     - type: cosine_accuracy@3
+       value: 0.8697
+       name: Cosine Accuracy@3
+     - type: cosine_accuracy@5
+       value: 0.9055
+       name: Cosine Accuracy@5
+     - type: cosine_accuracy@10
+       value: 0.9427
+       name: Cosine Accuracy@10
+     - type: cosine_precision@1
+       value: 0.7413
+       name: Cosine Precision@1
+     - type: cosine_precision@3
+       value: 0.2899
+       name: Cosine Precision@3
+     - type: cosine_precision@5
+       value: 0.1811
+       name: Cosine Precision@5
+     - type: cosine_precision@10
+       value: 0.09427000000000002
+       name: Cosine Precision@10
+     - type: cosine_recall@1
+       value: 0.7413
+       name: Cosine Recall@1
+     - type: cosine_recall@3
+       value: 0.8697
+       name: Cosine Recall@3
+     - type: cosine_recall@5
+       value: 0.9055
+       name: Cosine Recall@5
+     - type: cosine_recall@10
+       value: 0.9427
+       name: Cosine Recall@10
+     - type: cosine_ndcg@10
+       value: 0.8441925656083314
+       name: Cosine Ndcg@10
+     - type: cosine_mrr@10
+       value: 0.8123759920634883
+       name: Cosine Mrr@10
+     - type: cosine_map@100
+       value: 0.8147743017171518
+       name: Cosine Map@100
+     - type: dot_accuracy@1
+       value: 0.7384
+       name: Dot Accuracy@1
+     - type: dot_accuracy@3
+       value: 0.8669
+       name: Dot Accuracy@3
+     - type: dot_accuracy@5
+       value: 0.9039
+       name: Dot Accuracy@5
+     - type: dot_accuracy@10
+       value: 0.9389
+       name: Dot Accuracy@10
+     - type: dot_precision@1
+       value: 0.7384
+       name: Dot Precision@1
+     - type: dot_precision@3
+       value: 0.28896666666666665
+       name: Dot Precision@3
+     - type: dot_precision@5
+       value: 0.18078000000000002
+       name: Dot Precision@5
+     - type: dot_precision@10
+       value: 0.09389000000000002
+       name: Dot Precision@10
+     - type: dot_recall@1
+       value: 0.7384
+       name: Dot Recall@1
+     - type: dot_recall@3
+       value: 0.8669
+       name: Dot Recall@3
+     - type: dot_recall@5
+       value: 0.9039
+       name: Dot Recall@5
+     - type: dot_recall@10
+       value: 0.9389
+       name: Dot Recall@10
+     - type: dot_ndcg@10
+       value: 0.8410831459293242
+       name: Dot Ndcg@10
+     - type: dot_mrr@10
+       value: 0.8094504365079324
+       name: Dot Mrr@10
+     - type: dot_map@100
+       value: 0.8120497186357559
+       name: Dot Map@100
+ ---
+
+ # MPNet base trained on GooAQ triplets with hard negatives
+
+ This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [microsoft/mpnet-base](https://huggingface.co/microsoft/mpnet-base) on the [gooaq-hard-negatives](https://huggingface.co/datasets/tomaarsen/gooaq-hard-negatives) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
+
+ ## Model Details
+
+ ### Model Description
+ - **Model Type:** Sentence Transformer
+ - **Base model:** [microsoft/mpnet-base](https://huggingface.co/microsoft/mpnet-base) <!-- at revision 6996ce1e91bd2a9c7d7f61daec37463394f73f09 -->
+ - **Maximum Sequence Length:** 512 tokens
+ - **Output Dimensionality:** 768 dimensions
+ - **Similarity Function:** Cosine Similarity
+ - **Training Dataset:**
+     - [gooaq-hard-negatives](https://huggingface.co/datasets/tomaarsen/gooaq-hard-negatives)
+ - **Language:** en
+ - **License:** apache-2.0
+
+ ### Model Sources
+
+ - **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
+ - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
+ - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
+
+ ### Full Model Architecture
+
+ ```
+ SentenceTransformer(
+   (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
+   (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
+ )
+ ```
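+
+ For intuition, the `Pooling` module above uses mean pooling (`pooling_mode_mean_tokens: True`): it averages the token embeddings produced by the Transformer while ignoring padding positions. A minimal sketch of that computation, illustrative only and not the library internals:
+
+ ```python
+ import torch
+
+ def mean_pool(token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
+     # token_embeddings: (batch, seq_len, 768); attention_mask: (batch, seq_len)
+     mask = attention_mask.unsqueeze(-1).float()    # (batch, seq_len, 1)
+     summed = (token_embeddings * mask).sum(dim=1)  # sum over non-padding tokens
+     counts = mask.sum(dim=1).clamp(min=1e-9)       # number of non-padding tokens
+     return summed / counts                         # mean-pooled sentence embedding
+ ```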
+
+ ## Usage
+
+ ### Direct Usage (Sentence Transformers)
+
+ First install the Sentence Transformers library:
+
+ ```bash
+ pip install -U sentence-transformers
+ ```
+
+ Then you can load this model and run inference.
+ ```python
+ from sentence_transformers import SentenceTransformer
+
+ # Download from the 🤗 Hub
+ model = SentenceTransformer("tomaarsen/mpnet-base-gooaq-hard-negatives")
+ # Run inference
+ sentences = [
+     'are hard seltzers malt liquor?',
+     'Seltzer is carbonated water. “Hard seltzer” is a flavored malt beverage — essentially the same as a Lime-A-Rita or a Colt 45 or a Smirnoff Ice. These products derive their alcohol from fermented malted grains and are then carbonated, flavored and sweetened.',
+     'Bleaching action of chlorine is based on oxidation while that of sulphur is based on reduction. Chlorine acts with water to produce nascent oxygen. ... Sulphour dioxide removes oxygen from the coloured substance and makes it colourless.',
+ ]
+ embeddings = model.encode(sentences)
+ print(embeddings.shape)
+ # [3, 768]
+
+ # Get the similarity scores for the embeddings
+ similarities = model.similarity(embeddings, embeddings)
+ print(similarities.shape)
+ # [3, 3]
+ ```
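+
+ Because the model was trained on question-answer pairs, a common pattern is ranking candidate answers for a query. A small sketch; the query and candidate pool below are made-up examples:
+
+ ```python
+ from sentence_transformers import SentenceTransformer
+
+ model = SentenceTransformer("tomaarsen/mpnet-base-gooaq-hard-negatives")
+
+ query = "are hard seltzers malt liquor?"
+ candidates = [  # hypothetical answer pool
+     "Hard seltzer is a flavored malt beverage that derives its alcohol from fermented malted grains.",
+     "A normal resting heart rate range is between 60 and 100 bpm.",
+ ]
+ query_embedding = model.encode([query])
+ candidate_embeddings = model.encode(candidates)
+ scores = model.similarity(query_embedding, candidate_embeddings)  # shape: (1, 2)
+ print(candidates[scores.argmax().item()])  # best-scoring candidate answer
+ ```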
+
+ <!--
+ ### Direct Usage (Transformers)
+
+ <details><summary>Click to see the direct usage in Transformers</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Downstream Usage (Sentence Transformers)
+
+ You can finetune this model on your own dataset.
+
+ <details><summary>Click to expand</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Out-of-Scope Use
+
+ *List how the model may foreseeably be misused and address what users ought not to do with the model.*
+ -->
+
+ ## Evaluation
+
+ ### Metrics
+
+ #### Information Retrieval
+ * Dataset: `gooaq-dev`
+ * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
+
+ | Metric              | Value      |
+ |:--------------------|:-----------|
+ | cosine_accuracy@1   | 0.7413     |
+ | cosine_accuracy@3   | 0.8697     |
+ | cosine_accuracy@5   | 0.9055     |
+ | cosine_accuracy@10  | 0.9427     |
+ | cosine_precision@1  | 0.7413     |
+ | cosine_precision@3  | 0.2899     |
+ | cosine_precision@5  | 0.1811     |
+ | cosine_precision@10 | 0.0943     |
+ | cosine_recall@1     | 0.7413     |
+ | cosine_recall@3     | 0.8697     |
+ | cosine_recall@5     | 0.9055     |
+ | cosine_recall@10    | 0.9427     |
+ | cosine_ndcg@10      | 0.8442     |
+ | cosine_mrr@10       | 0.8124     |
+ | **cosine_map@100**  | **0.8148** |
+ | dot_accuracy@1      | 0.7384     |
+ | dot_accuracy@3      | 0.8669     |
+ | dot_accuracy@5      | 0.9039     |
+ | dot_accuracy@10     | 0.9389     |
+ | dot_precision@1     | 0.7384     |
+ | dot_precision@3     | 0.289      |
+ | dot_precision@5     | 0.1808     |
+ | dot_precision@10    | 0.0939     |
+ | dot_recall@1        | 0.7384     |
+ | dot_recall@3        | 0.8669     |
+ | dot_recall@5        | 0.9039     |
+ | dot_recall@10       | 0.9389     |
+ | dot_ndcg@10         | 0.8411     |
+ | dot_mrr@10          | 0.8095     |
+ | dot_map@100         | 0.812      |
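+
+ Numbers like these can be reproduced by running the evaluator directly. A rough sketch, assuming you have built `queries`, `corpus`, and `relevant_docs` mappings from the GooAQ dev split (the toy data below is purely illustrative):
+
+ ```python
+ from sentence_transformers import SentenceTransformer
+ from sentence_transformers.evaluation import InformationRetrievalEvaluator
+
+ model = SentenceTransformer("tomaarsen/mpnet-base-gooaq-hard-negatives")
+
+ # Hypothetical toy data; the real evaluation uses 10,000 GooAQ dev pairs.
+ queries = {"q1": "are hard seltzers malt liquor?"}
+ corpus = {"d1": "Hard seltzer is a flavored malt beverage.", "d2": "A Roman numeral for eighteen."}
+ relevant_docs = {"q1": {"d1"}}  # query id -> set of relevant document ids
+
+ evaluator = InformationRetrievalEvaluator(
+     queries=queries,
+     corpus=corpus,
+     relevant_docs=relevant_docs,
+     name="gooaq-dev",
+ )
+ results = evaluator(model)  # dict of metrics such as cosine_map@100
+ print(results)
+ ```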
+
+ <!--
+ ## Bias, Risks and Limitations
+
+ *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
+ -->
+
+ <!--
+ ### Recommendations
+
+ *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
+ -->
+
+ ## Training Details
+
+ ### Training Dataset
+
+ #### gooaq-hard-negatives
+
+ * Dataset: [gooaq-hard-negatives](https://huggingface.co/datasets/tomaarsen/gooaq-hard-negatives) at [87594a1](https://huggingface.co/datasets/tomaarsen/gooaq-hard-negatives/tree/87594a1e6c58e88b5843afa9da3a97ffd75d01c2)
+ * Size: 2,286,783 training samples
+ * Columns: <code>question</code>, <code>answer</code>, <code>negative_1</code>, <code>negative_2</code>, <code>negative_3</code>, <code>negative_4</code>, and <code>negative_5</code>
+ * Approximate statistics based on the first 1000 samples:
+   | | question | answer | negative_1 | negative_2 | negative_3 | negative_4 | negative_5 |
+   |:--------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
+   | type | string | string | string | string | string | string | string |
+   | details | <ul><li>min: 8 tokens</li><li>mean: 11.84 tokens</li><li>max: 23 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 59.41 tokens</li><li>max: 158 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 59.09 tokens</li><li>max: 139 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 58.61 tokens</li><li>max: 139 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 58.98 tokens</li><li>max: 173 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 59.43 tokens</li><li>max: 137 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 60.03 tokens</li><li>max: 146 tokens</li></ul> |
+ * Samples:
+   | question | answer | negative_1 | negative_2 | negative_3 | negative_4 | negative_5 |
+   |:---------------------------------------------------|:------------------------------------------------------------------|:------------------------------------------------------------------|:------------------------------------------------------------------|:------------------------------------------------------------------|:------------------------------------------------------------------|:------------------------------------------------------------------|
+   | <code>is toprol xl the same as metoprolol?</code> | <code>Metoprolol succinate is also known by the brand name Toprol XL. It is the extended-release form of metoprolol. Metoprolol succinate is approved to treat high blood pressure, chronic chest pain, and congestive heart failure.</code> | <code>Secondly, metoprolol and metoprolol ER have different brand-name equivalents: Brand version of metoprolol: Lopressor. Brand version of metoprolol ER: Toprol XL.</code> | <code>Pill with imprint 1 is White, Round and has been identified as Metoprolol Tartrate 25 mg.</code> | <code>Interactions between your drugs No interactions were found between Allergy Relief and metoprolol. This does not necessarily mean no interactions exist. Always consult your healthcare provider.</code> | <code>Metoprolol is a type of medication called a beta blocker. It works by relaxing blood vessels and slowing heart rate, which improves blood flow and lowers blood pressure. Metoprolol can also improve the likelihood of survival after a heart attack.</code> | <code>Metoprolol starts to work after about 2 hours, but it can take up to 1 week to fully take effect. You may not feel any different when you take metoprolol, but this doesn't mean it's not working. It's important to keep taking your medicine.</code> |
+   | <code>are you experienced cd steve hoffman?</code> | <code>The Are You Experienced album was apparently mastered from the original stereo UK master tapes (according to Steve Hoffman - one of the very few who has heard both the master tapes and the CDs produced over the years). ... The CD booklets were a little sparse, but at least they stayed true to the album's original design.</code> | <code>I Saw the Light. Showcasing the unique talent and musical influence of country-western artist Hank Williams, this candid biography also sheds light on the legacy of drug abuse and tormented relationships that contributes to the singer's legend.</code> | <code>(Read our ranking of his top 10.) And while Howard dresses the part of director, any notion of him as a tortured auteur or dictatorial taskmasker — the clichés of the Hollywood director — are tossed aside. He's very nice.</code> | <code>He was a music star too. Where're you people born and brought up? We 're born and brought up here in Anambra State at Nkpor town, near Onitsha.</code> | <code>At the age of 87 he has now retired from his live shows and all the traveling involved. And although he still picks up his Martin Guitar and does a show now and then, his life is now devoted to writing his memoirs.</code> | <code>The owner of the mysterious voice behind all these videos is a man who's seen a lot, visiting a total of 56 intimate celebrity spaces over the course of five years. His name is Joe Sabia — that's him in the photo — and he's currently the VP of creative development at Condé Nast Entertainment.</code> |
+   | <code>how are babushka dolls made?</code> | <code>Matryoshka dolls are made of wood from lime, balsa, alder, aspen, and birch trees; lime is probably the most common wood type. ... After cutting, the trees are stripped of most of their bark, although a few inner rings of bark are left to bind the wood and keep it from splitting.</code> | <code>A quick scan of the auction and buy-it-now listings on eBay finds porcelain doll values ranging from around $5 and $10 to several thousand dollars or more but no dolls listed above $10,000.</code> | <code>Japanese dolls are called as ningyō in Japanese and literally translates to 'human form'.</code> | <code>Matyoo: All Fresno Girl dolls come just as real children are born.</code> | <code>As of September 2016, there are over 100 characters. The main toy line includes 13-inch Dolls, the mini-series, and a variety of mini play-sets and plush dolls as well as Lalaloopsy Littles, smaller siblings of the 13-inch dolls. A spin-off known as "Lala-Oopsies" came out in late 2012.</code> | <code>LOL dolls are little baby dolls that come wrapped inside a surprise toy ball. Each ball has layers that contain stickers, secret messages, mix and match accessories–and finally–a doll. ... The doll on the ball is almost never the doll inside. Dolls are released in series, so not every doll is available all the time.</code> |
+ * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
+   ```json
+   {
+       "scale": 20.0,
+       "similarity_fct": "cos_sim"
+   }
+   ```
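+
+ For intuition: with this loss, each answer in a batch acts as the positive for its own question and as an in-batch negative for every other question, while the `negative_*` columns contribute additional hard negatives. A minimal sketch of constructing the loss with the parameters above (dataset loading and the training loop are elided):
+
+ ```python
+ from sentence_transformers import SentenceTransformer
+ from sentence_transformers.losses import MultipleNegativesRankingLoss
+
+ model = SentenceTransformer("microsoft/mpnet-base")
+ # scale=20.0 with the default cosine similarity matches the parameters listed above
+ loss = MultipleNegativesRankingLoss(model, scale=20.0)
+ ```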
+
+ ### Evaluation Dataset
+
+ #### sentence-transformers/gooaq
+
+ * Dataset: [sentence-transformers/gooaq](https://huggingface.co/datasets/sentence-transformers/gooaq) at [b089f72](https://huggingface.co/datasets/sentence-transformers/gooaq/tree/b089f728748a068b7bc5234e5bcf5b25e3c8279c)
+ * Size: 10,000 evaluation samples
+ * Columns: <code>question</code> and <code>answer</code>
+ * Approximate statistics based on the first 1000 samples:
+   | | question | answer |
+   |:--------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
+   | type | string | string |
+   | details | <ul><li>min: 8 tokens</li><li>mean: 11.89 tokens</li><li>max: 22 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 59.65 tokens</li><li>max: 131 tokens</li></ul> |
+ * Samples:
+   | question | answer |
+   |:-------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------|
+   | <code>how to transfer data from ipad to usb?</code> | <code>First, in “Locations,” tap the “On My iPhone” or “On My iPad” section. Here, tap and hold the empty space, and then select “New Folder.” Name it, and then tap “Done” to create a new folder for the files you want to transfer. Now, from the “Locations” section, select your USB flash drive.</code> |
+   | <code>what quorn products are syn free?</code> | <code>['bacon style pieces.', 'bacon style rashers, chilled.', 'BBQ sliced fillets.', 'beef style and red onion burgers.', 'pieces.', 'chicken style slices.', 'fajita strips.', 'family roast.']</code> |
+   | <code>what is the difference between turmeric ginger?</code> | <code>Ginger offers a sweet and spicy zing to dishes. Turmeric provides a golden yellow colour and a warm and bitter taste with a peppery aroma.</code> |
+ * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
+   ```json
+   {
+       "scale": 20.0,
+       "similarity_fct": "cos_sim"
+   }
+   ```
+
+ ### Training Hyperparameters
+ #### Non-Default Hyperparameters
+
+ - `eval_strategy`: steps
+ - `per_device_train_batch_size`: 32
+ - `per_device_eval_batch_size`: 32
+ - `learning_rate`: 2e-05
+ - `num_train_epochs`: 1
+ - `warmup_ratio`: 0.1
+ - `bf16`: True
+ - `batch_sampler`: no_duplicates
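+
+ These map directly onto the trainer configuration. A hedged sketch of the corresponding setup (the output path is hypothetical, and dataset/loss wiring is elided):
+
+ ```python
+ from sentence_transformers import SentenceTransformerTrainingArguments
+ from sentence_transformers.training_args import BatchSamplers
+
+ args = SentenceTransformerTrainingArguments(
+     output_dir="models/mpnet-base-gooaq-hard-negatives",  # hypothetical path
+     num_train_epochs=1,
+     per_device_train_batch_size=32,
+     per_device_eval_batch_size=32,
+     learning_rate=2e-5,
+     warmup_ratio=0.1,
+     bf16=True,
+     eval_strategy="steps",
+     # no_duplicates avoids repeating a sample within a batch, which would
+     # create false negatives for MultipleNegativesRankingLoss
+     batch_sampler=BatchSamplers.NO_DUPLICATES,
+ )
+ ```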
+
+ #### All Hyperparameters
+ <details><summary>Click to expand</summary>
+
+ - `overwrite_output_dir`: False
+ - `do_predict`: False
+ - `eval_strategy`: steps
+ - `prediction_loss_only`: True
+ - `per_device_train_batch_size`: 32
+ - `per_device_eval_batch_size`: 32
+ - `per_gpu_train_batch_size`: None
+ - `per_gpu_eval_batch_size`: None
+ - `gradient_accumulation_steps`: 1
+ - `eval_accumulation_steps`: None
+ - `learning_rate`: 2e-05
+ - `weight_decay`: 0.0
+ - `adam_beta1`: 0.9
+ - `adam_beta2`: 0.999
+ - `adam_epsilon`: 1e-08
+ - `max_grad_norm`: 1.0
+ - `num_train_epochs`: 1
+ - `max_steps`: -1
+ - `lr_scheduler_type`: linear
+ - `lr_scheduler_kwargs`: {}
+ - `warmup_ratio`: 0.1
+ - `warmup_steps`: 0
+ - `log_level`: passive
+ - `log_level_replica`: warning
+ - `log_on_each_node`: True
+ - `logging_nan_inf_filter`: True
+ - `save_safetensors`: True
+ - `save_on_each_node`: False
+ - `save_only_model`: False
+ - `restore_callback_states_from_checkpoint`: False
+ - `no_cuda`: False
+ - `use_cpu`: False
+ - `use_mps_device`: False
+ - `seed`: 42
+ - `data_seed`: None
+ - `jit_mode_eval`: False
+ - `use_ipex`: False
+ - `bf16`: True
+ - `fp16`: False
+ - `fp16_opt_level`: O1
+ - `half_precision_backend`: auto
+ - `bf16_full_eval`: False
+ - `fp16_full_eval`: False
+ - `tf32`: None
+ - `local_rank`: 0
+ - `ddp_backend`: None
+ - `tpu_num_cores`: None
+ - `tpu_metrics_debug`: False
+ - `debug`: []
+ - `dataloader_drop_last`: False
+ - `dataloader_num_workers`: 0
+ - `dataloader_prefetch_factor`: None
+ - `past_index`: -1
+ - `disable_tqdm`: False
+ - `remove_unused_columns`: True
+ - `label_names`: None
+ - `load_best_model_at_end`: False
+ - `ignore_data_skip`: False
+ - `fsdp`: []
+ - `fsdp_min_num_params`: 0
+ - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
+ - `fsdp_transformer_layer_cls_to_wrap`: None
+ - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
+ - `deepspeed`: None
+ - `label_smoothing_factor`: 0.0
+ - `optim`: adamw_torch
+ - `optim_args`: None
+ - `adafactor`: False
+ - `group_by_length`: False
+ - `length_column_name`: length
+ - `ddp_find_unused_parameters`: None
+ - `ddp_bucket_cap_mb`: None
+ - `ddp_broadcast_buffers`: False
+ - `dataloader_pin_memory`: True
+ - `dataloader_persistent_workers`: False
+ - `skip_memory_metrics`: True
+ - `use_legacy_prediction_loop`: False
+ - `push_to_hub`: False
+ - `resume_from_checkpoint`: None
+ - `hub_model_id`: None
+ - `hub_strategy`: every_save
+ - `hub_private_repo`: False
+ - `hub_always_push`: False
+ - `gradient_checkpointing`: False
+ - `gradient_checkpointing_kwargs`: None
+ - `include_inputs_for_metrics`: False
+ - `eval_do_concat_batches`: True
+ - `fp16_backend`: auto
+ - `push_to_hub_model_id`: None
+ - `push_to_hub_organization`: None
+ - `mp_parameters`: 
+ - `auto_find_batch_size`: False
+ - `full_determinism`: False
+ - `torchdynamo`: None
+ - `ray_scope`: last
+ - `ddp_timeout`: 1800
+ - `torch_compile`: False
+ - `torch_compile_backend`: None
+ - `torch_compile_mode`: None
+ - `dispatch_batches`: None
+ - `split_batches`: None
+ - `include_tokens_per_second`: False
+ - `include_num_input_tokens_seen`: False
+ - `neftune_noise_alpha`: None
+ - `optim_target_modules`: None
+ - `batch_eval_metrics`: False
+ - `batch_sampler`: no_duplicates
+ - `multi_dataset_batch_sampler`: proportional
+
+ </details>
+
+ ### Training Logs
+ <details><summary>Click to expand</summary>
+
+ | Epoch | Step | Training Loss | loss | gooaq-dev_cosine_map@100 |
+ |:------:|:-----:|:-------------:|:------:|:------------------------:|
+ | 0 | 0 | - | - | 0.1405 |
+ | 0.2869 | 20500 | 0.5303 | - | - |
+ | 0.2939 | 21000 | 0.5328 | - | - |
+ | 0.3009 | 21500 | 0.515 | - | - |
+ | 0.3079 | 22000 | 0.5264 | 0.0297 | 0.7919 |
+ | 0.3149 | 22500 | 0.5189 | - | - |
+ | 0.3218 | 23000 | 0.5284 | - | - |
+ | 0.3288 | 23500 | 0.5308 | - | - |
+ | 0.3358 | 24000 | 0.509 | 0.0281 | 0.7932 |
+ | 0.3428 | 24500 | 0.5074 | - | - |
+ | 0.3498 | 25000 | 0.5196 | - | - |
+ | 0.3568 | 25500 | 0.5041 | - | - |
+ | 0.3638 | 26000 | 0.4976 | 0.0291 | 0.7950 |
+ | 0.3708 | 26500 | 0.5025 | - | - |
+ | 0.3778 | 27000 | 0.5175 | - | - |
+ | 0.3848 | 27500 | 0.4921 | - | - |
+ | 0.3918 | 28000 | 0.4924 | 0.0298 | 0.7938 |
+ | 0.3988 | 28500 | 0.49 | - | - |
+ | 0.4058 | 29000 | 0.4924 | - | - |
+ | 0.4128 | 29500 | 0.4902 | - | - |
+ | 0.4198 | 30000 | 0.4846 | 0.0269 | 0.7966 |
+ | 0.4268 | 30500 | 0.4815 | - | - |
+ | 0.4338 | 31000 | 0.4881 | - | - |
+ | 0.4408 | 31500 | 0.4848 | - | - |
+ | 0.4478 | 32000 | 0.4882 | 0.0264 | 0.8004 |
+ | 0.4548 | 32500 | 0.4809 | - | - |
+ | 0.4618 | 33000 | 0.4896 | - | - |
+ | 0.4688 | 33500 | 0.4744 | - | - |
+ | 0.4758 | 34000 | 0.4827 | 0.0252 | 0.8038 |
+ | 0.4828 | 34500 | 0.4703 | - | - |
+ | 0.4898 | 35000 | 0.4765 | - | - |
+ | 0.4968 | 35500 | 0.4625 | - | - |
+ | 0.5038 | 36000 | 0.4698 | 0.0269 | 0.8025 |
+ | 0.5108 | 36500 | 0.4666 | - | - |
+ | 0.5178 | 37000 | 0.4594 | - | - |
+ | 0.5248 | 37500 | 0.4621 | - | - |
+ | 0.5318 | 38000 | 0.4538 | 0.0266 | 0.8047 |
+ | 0.5387 | 38500 | 0.4576 | - | - |
+ | 0.5457 | 39000 | 0.4594 | - | - |
+ | 0.5527 | 39500 | 0.4503 | - | - |
+ | 0.5597 | 40000 | 0.4538 | 0.0265 | 0.8038 |
+ | 0.5667 | 40500 | 0.4521 | - | - |
+ | 0.5737 | 41000 | 0.4575 | - | - |
+ | 0.5807 | 41500 | 0.4544 | - | - |
+ | 0.5877 | 42000 | 0.4462 | 0.0245 | 0.8077 |
+ | 0.5947 | 42500 | 0.4491 | - | - |
+ | 0.6017 | 43000 | 0.4651 | - | - |
+ | 0.6087 | 43500 | 0.4549 | - | - |
+ | 0.6157 | 44000 | 0.4461 | 0.0262 | 0.8046 |
+ | 0.6227 | 44500 | 0.4571 | - | - |
+ | 0.6297 | 45000 | 0.4478 | - | - |
+ | 0.6367 | 45500 | 0.4482 | - | - |
+ | 0.6437 | 46000 | 0.4439 | 0.0244 | 0.8070 |
+ | 0.6507 | 46500 | 0.4384 | - | - |
+ | 0.6577 | 47000 | 0.446 | - | - |
+ | 0.6647 | 47500 | 0.4425 | - | - |
+ | 0.6717 | 48000 | 0.4308 | 0.0248 | 0.8067 |
+ | 0.6787 | 48500 | 0.4374 | - | - |
+ | 0.6857 | 49000 | 0.4342 | - | - |
+ | 0.6927 | 49500 | 0.4455 | - | - |
+ | 0.6997 | 50000 | 0.4322 | 0.0242 | 0.8077 |
+ | 0.7067 | 50500 | 0.4288 | - | - |
+ | 0.7137 | 51000 | 0.4317 | - | - |
+ | 0.7207 | 51500 | 0.4295 | - | - |
+ | 0.7277 | 52000 | 0.4291 | 0.0231 | 0.8130 |
+ | 0.7347 | 52500 | 0.4279 | - | - |
+ | 0.7417 | 53000 | 0.4287 | - | - |
+ | 0.7486 | 53500 | 0.4252 | - | - |
+ | 0.7556 | 54000 | 0.4341 | 0.0243 | 0.8112 |
+ | 0.7626 | 54500 | 0.419 | - | - |
+ | 0.7696 | 55000 | 0.4323 | - | - |
+ | 0.7766 | 55500 | 0.4252 | - | - |
+ | 0.7836 | 56000 | 0.4313 | 0.0264 | 0.8107 |
+ | 0.7906 | 56500 | 0.4222 | - | - |
+ | 0.7976 | 57000 | 0.4226 | - | - |
+ | 0.8046 | 57500 | 0.4152 | - | - |
+ | 0.8116 | 58000 | 0.4222 | 0.0236 | 0.8131 |
+ | 0.8186 | 58500 | 0.4184 | - | - |
+ | 0.8256 | 59000 | 0.4144 | - | - |
+ | 0.8326 | 59500 | 0.4242 | - | - |
+ | 0.8396 | 60000 | 0.4148 | 0.0242 | 0.8125 |
+ | 0.8466 | 60500 | 0.4222 | - | - |
+ | 0.8536 | 61000 | 0.4184 | - | - |
+ | 0.8606 | 61500 | 0.4138 | - | - |
+ | 0.8676 | 62000 | 0.4119 | 0.0240 | 0.8133 |
+ | 0.8746 | 62500 | 0.411 | - | - |
+ | 0.8816 | 63000 | 0.4172 | - | - |
+ | 0.8886 | 63500 | 0.4145 | - | - |
+ | 0.8956 | 64000 | 0.4168 | 0.0240 | 0.8137 |
+ | 0.9026 | 64500 | 0.4071 | - | - |
+ | 0.9096 | 65000 | 0.4119 | - | - |
+ | 0.9166 | 65500 | 0.403 | - | - |
+ | 0.9236 | 66000 | 0.4092 | 0.0238 | 0.8141 |
+ | 0.9306 | 66500 | 0.4079 | - | - |
+ | 0.9376 | 67000 | 0.4129 | - | - |
+ | 0.9446 | 67500 | 0.4082 | - | - |
+ | 0.9516 | 68000 | 0.4054 | 0.0235 | 0.8149 |
+ | 0.9586 | 68500 | 0.4129 | - | - |
+ | 0.9655 | 69000 | 0.4085 | - | - |
+ | 0.9725 | 69500 | 0.414 | - | - |
+ | 0.9795 | 70000 | 0.4075 | 0.0239 | 0.8142 |
+ | 0.9865 | 70500 | 0.4104 | - | - |
+ | 0.9935 | 71000 | 0.4087 | - | - |
+ | 1.0 | 71462 | - | - | 0.8148 |
+
+ </details>
+
+ ### Environmental Impact
+ Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon).
+ - **Energy Consumed**: 3.989 kWh
+ - **Carbon Emitted**: 1.551 kg of CO2
+ - **Hours Used**: 11.599 hours
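+
+ These figures come from codecarbon's tracker. A minimal sketch of how such tracking is typically wired in (illustrative; `train()` is a placeholder for the actual training run):
+
+ ```python
+ from codecarbon import EmissionsTracker
+
+ tracker = EmissionsTracker()  # samples CPU, GPU, and RAM power usage
+ tracker.start()
+ try:
+     train()  # placeholder for the real training loop
+ finally:
+     emissions_kg = tracker.stop()  # returns estimated emissions in kg CO2eq
+     print(f"{emissions_kg:.3f} kg CO2eq emitted")
+ ```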
+
+ ### Training Hardware
+ - **On Cloud**: No
+ - **GPU Model**: 1 x NVIDIA GeForce RTX 3090
+ - **CPU Model**: 13th Gen Intel(R) Core(TM) i7-13700K
+ - **RAM Size**: 31.78 GB
+
+ ### Framework Versions
+ - Python: 3.11.6
+ - Sentence Transformers: 3.1.0.dev0
+ - Transformers: 4.41.2
+ - PyTorch: 2.3.0+cu121
+ - Accelerate: 0.31.0
+ - Datasets: 2.20.0
+ - Tokenizers: 0.19.1
+
+ ## Citation
+
+ ### BibTeX
+
+ #### Sentence Transformers
+ ```bibtex
+ @inproceedings{reimers-2019-sentence-bert,
+     title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
+     author = "Reimers, Nils and Gurevych, Iryna",
+     booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
+     month = "11",
+     year = "2019",
+     publisher = "Association for Computational Linguistics",
+     url = "https://arxiv.org/abs/1908.10084",
+ }
+ ```
+
+ #### MultipleNegativesRankingLoss
+ ```bibtex
+ @misc{henderson2017efficient,
+     title={Efficient Natural Language Response Suggestion for Smart Reply},
+     author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
+     year={2017},
+     eprint={1705.00652},
+     archivePrefix={arXiv},
+     primaryClass={cs.CL}
+ }
+ ```
+
+ <!--
+ ## Glossary
+
+ *Clearly define terms in order to be accessible across audiences.*
+ -->
+
+ <!--
+ ## Model Card Authors
+
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
+ -->
+
+ <!--
+ ## Model Card Contact
+
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
+ -->
config.json ADDED
@@ -0,0 +1,24 @@
+ {
+   "_name_or_path": "microsoft/mpnet-base",
+   "architectures": [
+     "MPNetModel"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "bos_token_id": 0,
+   "eos_token_id": 2,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "layer_norm_eps": 1e-05,
+   "max_position_embeddings": 514,
+   "model_type": "mpnet",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "pad_token_id": 1,
+   "relative_attention_num_buckets": 32,
+   "torch_dtype": "float32",
+   "transformers_version": "4.41.2",
+   "vocab_size": 30527
+ }
config_sentence_transformers.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "__version__": {
+     "sentence_transformers": "3.1.0.dev0",
+     "transformers": "4.41.2",
+     "pytorch": "2.3.0+cu121"
+   },
+   "prompts": {},
+   "default_prompt_name": null,
+   "similarity_fn_name": null
+ }
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0b6148a537361662edbc823927164e9767eb4549643fe5dfa35f11d44f5a8918
+ size 437967672
modules.json ADDED
@@ -0,0 +1,14 @@
+ [
+   {
+     "idx": 0,
+     "name": "0",
+     "path": "",
+     "type": "sentence_transformers.models.Transformer"
+   },
+   {
+     "idx": 1,
+     "name": "1",
+     "path": "1_Pooling",
+     "type": "sentence_transformers.models.Pooling"
+   }
+ ]
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "max_seq_length": 512,
+   "do_lower_case": false
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,51 @@
+ {
+   "bos_token": {
+     "content": "<s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "cls_token": {
+     "content": "<s>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   },
+   "eos_token": {
+     "content": "</s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "mask_token": {
+     "content": "<mask>",
+     "lstrip": true,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "<pad>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "sep_token": {
+     "content": "</s>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "[UNK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,65 @@
+ {
+   "added_tokens_decoder": {
+     "0": {
+       "content": "<s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "1": {
+       "content": "<pad>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "2": {
+       "content": "</s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "3": {
+       "content": "<unk>",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "104": {
+       "content": "[UNK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "30526": {
+       "content": "<mask>",
+       "lstrip": true,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "bos_token": "<s>",
+   "clean_up_tokenization_spaces": true,
+   "cls_token": "<s>",
+   "do_lower_case": true,
+   "eos_token": "</s>",
+   "mask_token": "<mask>",
+   "model_max_length": 512,
+   "pad_token": "<pad>",
+   "sep_token": "</s>",
+   "strip_accents": null,
+   "tokenize_chinese_chars": true,
+   "tokenizer_class": "MPNetTokenizer",
+   "unk_token": "[UNK]"
+ }
vocab.txt ADDED
The diff for this file is too large to render. See raw diff