Commit 38a8e81 by dbourget
Parent: c31a394

Add new SentenceTransformer model.
1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
{
  "word_embedding_dimension": 1024,
  "pooling_mode_cls_token": true,
  "pooling_mode_mean_tokens": false,
  "pooling_mode_max_tokens": false,
  "pooling_mode_mean_sqrt_len_tokens": false,
  "pooling_mode_weightedmean_tokens": false,
  "pooling_mode_lasttoken": false,
  "include_prompt": true
}
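
With `pooling_mode_cls_token: true` and all other modes off, the sentence embedding is simply the hidden state of the first ([CLS]) token. A minimal sketch of that operation, on dummy data rather than real model output:

```python
import torch

def cls_pooling(token_embeddings: torch.Tensor) -> torch.Tensor:
    # token_embeddings: [batch_size, seq_len, hidden_dim]
    return token_embeddings[:, 0]  # hidden state of the first ([CLS]) token

dummy = torch.randn(2, 16, 1024)   # illustrative batch only
print(cls_pooling(dummy).shape)    # torch.Size([2, 1024])
```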
README.md ADDED
@@ -0,0 +1,446 @@
---
library_name: sentence-transformers
metrics:
- cosine_accuracy
- dot_accuracy
- manhattan_accuracy
- euclidean_accuracy
- max_accuracy
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:9504
- loss:TripletLoss
widget:
- source_sentence: cap product
  sentences:
  - method of adjoining a chain of degree p with a co-chain of degree q, where q is
    less than or equal to p, to form a composite chain of degree p-q
  - 'Ontology '
  - hat commodity
- source_sentence: cognitivism
  sentences:
  - supporting cognitive science
  - study of changes in organisms caused by modification of gene expression rather
    than alteration of the genetic code
  - 'the idea that mind works like an algorithmic symbol manipulation '
- source_sentence: doxastic voluntarism
  sentences:
  - Land surrounded by water
  - belief one is free
  - the ability to will beliefs
- source_sentence: conceptual role
  sentences:
  - concept
  - inferential role
  - 'Theory of knowledge '
- source_sentence: scientific revolutions
  sentences:
  - scientific realism
  - Universal moral principles govern legal systems
  - paradigm shifts
model-index:
- name: SentenceTransformer
  results:
  - task:
      type: triplet
      name: Triplet
    dataset:
      name: beatai dev
      type: beatai-dev
    metrics:
    - type: cosine_accuracy
      value: 0.8080808080808081
      name: Cosine Accuracy
    - type: dot_accuracy
      value: 0.28114478114478114
      name: Dot Accuracy
    - type: manhattan_accuracy
      value: 0.8316498316498316
      name: Manhattan Accuracy
    - type: euclidean_accuracy
      value: 0.8249158249158249
      name: Euclidean Accuracy
    - type: max_accuracy
      value: 0.8316498316498316
      name: Max Accuracy
---

# SentenceTransformer

This is a [sentence-transformers](https://www.SBERT.net) model trained with a triplet objective. It maps sentences and paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description
- **Model Type:** Sentence Transformer
<!-- - **Base model:** [Unknown](https://huggingface.co/unknown) -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
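
For reference, the same two-module stack can be assembled by hand with the `sentence_transformers.models` API. A minimal sketch, where `"path/to/base-weights"` is a placeholder rather than a real identifier (the base checkpoint is not documented in this card):

```python
from sentence_transformers import SentenceTransformer, models

# Placeholder path; the actual base weights are not documented here
word_embedding_model = models.Transformer("path/to/base-weights", max_seq_length=512)
pooling_model = models.Pooling(
    word_embedding_model.get_word_embedding_dimension(),
    pooling_mode="cls",  # mirrors pooling_mode_cls_token: true in 1_Pooling/config.json
)
model = SentenceTransformer(modules=[word_embedding_model, pooling_model])
```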

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("dbourget/pb-small-10e-tsdae6e-philsim-cosine-6e-beatai-20e")
# Run inference
sentences = [
    'scientific revolutions',
    'paradigm shifts',
    'scientific realism',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
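
The same embeddings also support semantic search over a corpus. A short sketch using `sentence_transformers.util` (the corpus sentences below are illustrative, not part of the model's data):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("dbourget/pb-small-10e-tsdae6e-philsim-cosine-6e-beatai-20e")

# Illustrative corpus of short glosses
corpus = ["paradigm shifts", "scientific realism", "the ability to will beliefs"]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode("scientific revolutions", convert_to_tensor=True)

# Top-k nearest corpus entries by cosine similarity
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)
for hit in hits[0]:
    print(corpus[hit["corpus_id"]], round(hit["score"], 3))
```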

<!--
### Direct Usage (Transformers)

<details><summary>Click to see the direct usage in Transformers</summary>

</details>
-->

<!--
### Downstream Usage (Sentence Transformers)

You can finetune this model on your own dataset.

<details><summary>Click to expand</summary>

</details>
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

## Evaluation

### Metrics

#### Triplet
* Dataset: `beatai-dev`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)

| Metric             | Value      |
|:-------------------|:-----------|
| cosine_accuracy    | 0.8081     |
| dot_accuracy       | 0.2811     |
| manhattan_accuracy | 0.8316     |
| euclidean_accuracy | 0.8249     |
| **max_accuracy**   | **0.8316** |

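This style of evaluation can be reproduced on your own triplets with the same evaluator. A sketch (the triplets below are illustrative, not the actual beatai-dev data):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import TripletEvaluator

model = SentenceTransformer("dbourget/pb-small-10e-tsdae6e-philsim-cosine-6e-beatai-20e")

# Illustrative (anchor, positive, negative) triplets
anchors = ["scientific revolutions", "doxastic voluntarism"]
positives = ["paradigm shifts", "the ability to will beliefs"]
negatives = ["scientific realism", "Land surrounded by water"]

evaluator = TripletEvaluator(anchors, positives, negatives, name="beatai-dev")
print(evaluator(model))  # accuracy under each distance function
```
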
<!--
## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!--
### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->

## Training Details

### Training Hyperparameters
#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `per_device_train_batch_size`: 138
- `per_device_eval_batch_size`: 138
- `learning_rate`: 2e-06
- `num_train_epochs`: 10
- `lr_scheduler_type`: constant
- `bf16`: True
- `dataloader_drop_last`: True

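A minimal sketch of how these settings map onto the Sentence Transformers v3 training API, assuming a triplet dataset with `anchor`/`positive`/`negative` columns. The base checkpoint and the data below are placeholders (the actual 9,504-triplet training set is not included in this repository); `TripletLoss` minimizes max(d(a, p) - d(a, n) + margin, 0):

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
    losses,
)

model = SentenceTransformer("path/to/base-checkpoint")  # placeholder starting weights

# Placeholder triplets, repeated so batch size 138 with drop_last still yields batches
rows = 300
train_dataset = Dataset.from_dict({
    "anchor": ["scientific revolutions"] * rows,
    "positive": ["paradigm shifts"] * rows,
    "negative": ["scientific realism"] * rows,
})

args = SentenceTransformerTrainingArguments(
    output_dir="out",
    eval_strategy="steps",
    per_device_train_batch_size=138,
    per_device_eval_batch_size=138,
    learning_rate=2e-6,
    num_train_epochs=10,
    lr_scheduler_type="constant",
    bf16=True,
    dataloader_drop_last=True,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=train_dataset,  # stand-in; use a held-out split in practice
    loss=losses.TripletLoss(model),
)
trainer.train()
```
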
#### All Hyperparameters
<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 138
- `per_device_eval_batch_size`: 138
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-06
- `weight_decay`: 0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 10
- `max_steps`: -1
- `lr_scheduler_type`: constant
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: True
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: 2
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional

</details>

### Training Logs
| Epoch  | Step | Training Loss | loss   | beatai-dev_max_accuracy |
|:------:|:----:|:-------------:|:------:|:-----------------------:|
| 0      | 0    | -             | -      | 0.8072                  |
| 0.1471 | 10   | 1.8573        | -      | -                       |
| 0.2941 | 20   | 1.8196        | -      | -                       |
| 0.4412 | 30   | 1.8594        | -      | -                       |
| 0.5882 | 40   | 1.8581        | -      | -                       |
| 0.7353 | 50   | 1.8766        | 2.3603 | 0.8047                  |
| 0.8824 | 60   | 1.8596        | -      | -                       |
| 1.0294 | 70   | 1.6816        | -      | -                       |
| 1.1765 | 80   | 1.7564        | -      | -                       |
| 1.3235 | 90   | 1.7191        | -      | -                       |
| 1.4706 | 100  | 1.6521        | 2.3296 | 0.8064                  |
| 1.6176 | 110  | 1.7054        | -      | -                       |
| 1.7647 | 120  | 1.6895        | -      | -                       |
| 1.9118 | 130  | 1.6724        | -      | -                       |
| 2.0588 | 140  | 1.6369        | -      | -                       |
| 2.2059 | 150  | 1.705         | 2.2941 | 0.8123                  |
| 2.3529 | 160  | 1.8329        | -      | -                       |
| 2.5    | 170  | 1.6071        | -      | -                       |
| 2.6471 | 180  | 1.5157        | -      | -                       |
| 2.7941 | 190  | 1.624         | -      | -                       |
| 2.9412 | 200  | 1.6185        | 2.2668 | 0.8140                  |
| 3.0882 | 210  | 1.6259        | -      | -                       |
| 3.2353 | 220  | 1.5749        | -      | -                       |
| 3.3824 | 230  | 1.5426        | -      | -                       |
| 3.5294 | 240  | 1.5522        | -      | -                       |
| 3.6765 | 250  | 1.5141        | 2.2498 | 0.8157                  |
| 3.8235 | 260  | 1.5215        | -      | -                       |
| 3.9706 | 270  | 1.4983        | -      | -                       |
| 4.1176 | 280  | 1.4819        | -      | -                       |
| 4.2647 | 290  | 1.4552        | -      | -                       |
| 4.4118 | 300  | 1.5597        | 2.2226 | 0.8199                  |
| 4.5588 | 310  | 1.3983        | -      | -                       |
| 4.7059 | 320  | 1.5386        | -      | -                       |
| 4.8529 | 330  | 1.4541        | -      | -                       |
| 5.0    | 340  | 1.4097        | -      | -                       |
| 5.1471 | 350  | 1.3741        | 2.2129 | 0.8207                  |
| 5.2941 | 360  | 1.3909        | -      | -                       |
| 5.4412 | 370  | 1.4116        | -      | -                       |
| 5.5882 | 380  | 1.52          | -      | -                       |
| 5.7353 | 390  | 1.3644        | -      | -                       |
| 5.8824 | 400  | 1.3016        | 2.1699 | 0.8266                  |
| 6.0294 | 410  | 1.4435        | -      | -                       |
| 6.1765 | 420  | 1.3112        | -      | -                       |
| 6.3235 | 430  | 1.4056        | -      | -                       |
| 6.4706 | 440  | 1.4541        | -      | -                       |
| 6.6176 | 450  | 1.3312        | 2.1486 | 0.8224                  |
| 6.7647 | 460  | 1.2879        | -      | -                       |
| 6.9118 | 470  | 1.227         | -      | -                       |
| 7.0588 | 480  | 1.3834        | -      | -                       |
| 7.2059 | 490  | 1.3242        | -      | -                       |
| 7.3529 | 500  | 1.3756        | 2.1507 | 0.8274                  |
| 7.5    | 510  | 1.2872        | -      | -                       |
| 7.6471 | 520  | 1.3288        | -      | -                       |
| 7.7941 | 530  | 1.2689        | -      | -                       |
| 7.9412 | 540  | 1.3102        | -      | -                       |
| 8.0882 | 550  | 1.2929        | 2.1355 | 0.8207                  |
| 8.2353 | 560  | 1.2511        | -      | -                       |
| 8.3824 | 570  | 1.1849        | -      | -                       |
| 8.5294 | 580  | 1.2774        | -      | -                       |
| 8.6765 | 590  | 1.1923        | -      | -                       |
| 8.8235 | 600  | 1.1927        | 2.1111 | 0.8283                  |
| 8.9706 | 610  | 1.2556        | -      | -                       |
| 9.1176 | 620  | 1.2767        | -      | -                       |
| 9.2647 | 630  | 1.1082        | -      | -                       |
| 9.4118 | 640  | 1.3077        | -      | -                       |
| 9.5588 | 650  | 1.1435        | 2.0922 | 0.8316                  |
| 9.7059 | 660  | 1.1888        | -      | -                       |
| 9.8529 | 670  | 1.2123        | -      | -                       |
| 10.0   | 680  | 1.2554        | -      | -                       |

### Framework Versions
- Python: 3.8.18
- Sentence Transformers: 3.1.1
- Transformers: 4.44.2
- PyTorch: 1.13.1+cu117
- Accelerate: 0.34.2
- Datasets: 3.0.0
- Tokenizers: 0.19.1

## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### TripletLoss
```bibtex
@misc{hermans2017defense,
    title={In Defense of the Triplet Loss for Person Re-Identification},
    author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
    year={2017},
    eprint={1703.07737},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}
```

<!--
## Glossary

*Clearly define terms in order to be accessible across audiences.*
-->

<!--
## Model Card Authors

*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->

<!--
## Model Card Contact

*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
config.json ADDED
@@ -0,0 +1,26 @@
{
  "_name_or_path": "../models/pb-small-10e-tsdae6e-philsim-cosine-6e-beatai-10e",
  "architectures": [
    "BertModel"
  ],
  "attention_probs_dropout_prob": 0.1,
  "classifier_dropout": null,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 512,
  "model_type": "bert",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "pad_token_id": 0,
  "position_embedding_type": "absolute",
  "tokenizer_class": "PreTrainedTokenizerFast",
  "torch_dtype": "float32",
  "transformers_version": "4.44.2",
  "type_vocab_size": 2,
  "use_cache": true,
  "vocab_size": 30522
}
config_sentence_transformers.json ADDED
@@ -0,0 +1,10 @@
{
  "__version__": {
    "sentence_transformers": "3.1.1",
    "transformers": "4.44.2",
    "pytorch": "1.13.1+cu117"
  },
  "prompts": {},
  "default_prompt_name": null,
  "similarity_fn_name": null
}
model.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:46744ec43fabf3a3be9d3d4f33812a707b236c0a82b52d62396cd27f1d280120
size 437951328
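
This is a Git LFS pointer rather than the weights themselves; the 437,951,328-byte safetensors file is fetched from LFS storage on clone or download. As a sanity check, the downloaded file can be verified against the recorded SHA-256. A minimal sketch, assuming the weights sit at a local path:

```python
import hashlib

path = "model.safetensors"  # assumed local path to the downloaded weights
expected = "46744ec43fabf3a3be9d3d4f33812a707b236c0a82b52d62396cd27f1d280120"

h = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        h.update(chunk)

assert h.hexdigest() == expected, "weights do not match the LFS pointer"
```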
modules.json ADDED
@@ -0,0 +1,14 @@
[
  {
    "idx": 0,
    "name": "0",
    "path": "",
    "type": "sentence_transformers.models.Transformer"
  },
  {
    "idx": 1,
    "name": "1",
    "path": "1_Pooling",
    "type": "sentence_transformers.models.Pooling"
  }
]
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
{
  "max_seq_length": 512,
  "do_lower_case": false
}
special_tokens_map.json ADDED
@@ -0,0 +1,44 @@
{
  "additional_special_tokens": [
    "[PAD]",
    "[UNK]",
    "[CLS]",
    "[SEP]",
    "[MASK]"
  ],
  "cls_token": {
    "content": "[CLS]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "mask_token": {
    "content": "[MASK]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "[PAD]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "sep_token": {
    "content": "[SEP]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "unk_token": {
    "content": "[UNK]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,66 @@
{
  "added_tokens_decoder": {
    "0": {
      "content": "[PAD]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "1": {
      "content": "[UNK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "2": {
      "content": "[CLS]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "3": {
      "content": "[SEP]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "4": {
      "content": "[MASK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "additional_special_tokens": [
    "[PAD]",
    "[UNK]",
    "[CLS]",
    "[SEP]",
    "[MASK]"
  ],
  "clean_up_tokenization_spaces": true,
  "cls_token": "[CLS]",
  "mask_token": "[MASK]",
  "max_length": 512,
  "model_max_length": 512,
  "pad_to_multiple_of": null,
  "pad_token": "[PAD]",
  "pad_token_type_id": 0,
  "padding_side": "right",
  "sep_token": "[SEP]",
  "stride": 0,
  "tokenizer_class": "PreTrainedTokenizerFast",
  "truncation_side": "right",
  "truncation_strategy": "longest_first",
  "unk_token": "[UNK]"
}
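
Together these files define a standard BERT-style tokenizer capped at 512 tokens. As a quick sanity check (assuming the model id from the README above), it can be loaded and inspected with `transformers`:

```python
from transformers import AutoTokenizer

# Model id taken from the README above
tok = AutoTokenizer.from_pretrained("dbourget/pb-small-10e-tsdae6e-philsim-cosine-6e-beatai-20e")

enc = tok("scientific revolutions")
print(tok.convert_ids_to_tokens(enc["input_ids"]))  # ['[CLS]', ..., '[SEP]']
print(tok.model_max_length)  # 512; longer inputs are truncated from the right
```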