lealdaniel committed (verified) · Commit 91b878b · Parent: cb59614

Embeddings model v2
README.md CHANGED
@@ -4,35 +4,35 @@ tags:
 - sentence-similarity
 - feature-extraction
 - generated_from_trainer
-- dataset_size:5005
+- dataset_size:4372
 - loss:MultipleNegativesRankingLoss
 base_model: sentence-transformers/all-mpnet-base-v2
 widget:
-- source_sentence: especialista de risco e prevenção a fraudes
-  sentences:
-  - risk & compliance
-  - internal communication
-  - accounting
-- source_sentence: coord integracao do cliente ii
-  sentences:
-  - strategic planning
-  - customer experience
-  - não encontrado (adicione nas observações)
-- source_sentence: gerente sr. marketing e performance
+- source_sentence: analista de produtos pl
   sentences:
+  - product management
   - business operations
-  - d&i
-  - performance marketing
-- source_sentence: gerente executivo de operacoes
+  - logistic management generalist
+- source_sentence: product analyst ii
   sentences:
-  - business operations
-  - sdr
   - product management
-- source_sentence: sr designer
+  - business development (bizdev)
+  - compliance
+- source_sentence: analista de gestão de gente pl
+  sentences:
+  - data engineering
+  - hr generalist
+  - data analysis
+- source_sentence: general services
+  sentences:
+  - financial planning and analysis (fp&a)
+  - customer success
+  - general services
+- source_sentence: const parceria de negocio ii
   sentences:
-  - product design
-  - talent acquisition
-  - lawyer
+  - hr generalist
+  - copywriter
+  - business development (bizdev)
 pipeline_tag: sentence-similarity
 library_name: sentence-transformers
 metrics:
@@ -51,21 +51,6 @@ metrics:
 - cosine_ndcg@10
 - cosine_mrr@10
 - cosine_map@100
-- dot_accuracy@1
-- dot_accuracy@3
-- dot_accuracy@5
-- dot_accuracy@10
-- dot_precision@1
-- dot_precision@3
-- dot_precision@5
-- dot_precision@10
-- dot_recall@1
-- dot_recall@3
-- dot_recall@5
-- dot_recall@10
-- dot_ndcg@10
-- dot_mrr@10
-- dot_map@100
 model-index:
 - name: SentenceTransformer based on sentence-transformers/all-mpnet-base-v2
   results:
@@ -77,95 +62,50 @@ model-index:
       type: unknown
     metrics:
     - type: cosine_accuracy@1
-      value: 0.6245583038869258
+      value: 0.3202195791399817
       name: Cosine Accuracy@1
     - type: cosine_accuracy@3
-      value: 0.8206713780918727
+      value: 0.454711802378774
       name: Cosine Accuracy@3
     - type: cosine_accuracy@5
-      value: 0.8754416961130742
+      value: 0.5224153705397987
       name: Cosine Accuracy@5
     - type: cosine_accuracy@10
-      value: 0.926678445229682
+      value: 0.6184812442817932
       name: Cosine Accuracy@10
     - type: cosine_precision@1
-      value: 0.6245583038869258
+      value: 0.3202195791399817
       name: Cosine Precision@1
     - type: cosine_precision@3
-      value: 0.2735571260306242
+      value: 0.15157060079292467
       name: Cosine Precision@3
     - type: cosine_precision@5
-      value: 0.17508833922261482
+      value: 0.10448307410795975
       name: Cosine Precision@5
     - type: cosine_precision@10
-      value: 0.0926678445229682
+      value: 0.061848124428179316
       name: Cosine Precision@10
     - type: cosine_recall@1
-      value: 0.6245583038869258
+      value: 0.3202195791399817
       name: Cosine Recall@1
     - type: cosine_recall@3
-      value: 0.8206713780918727
+      value: 0.454711802378774
       name: Cosine Recall@3
     - type: cosine_recall@5
-      value: 0.8754416961130742
+      value: 0.5224153705397987
       name: Cosine Recall@5
     - type: cosine_recall@10
-      value: 0.926678445229682
+      value: 0.6184812442817932
       name: Cosine Recall@10
     - type: cosine_ndcg@10
-      value: 0.7790196193570564
+      value: 0.45577270813945114
       name: Cosine Ndcg@10
     - type: cosine_mrr@10
-      value: 0.7312496494475299
+      value: 0.4052037496913979
       name: Cosine Mrr@10
     - type: cosine_map@100
-      value: 0.7347864977321262
+      value: 0.4178228611548902
       name: Cosine Map@100
-    - type: dot_accuracy@1
-      value: 0.6245583038869258
-      name: Dot Accuracy@1
-    - type: dot_accuracy@3
-      value: 0.8206713780918727
-      name: Dot Accuracy@3
-    - type: dot_accuracy@5
-      value: 0.8754416961130742
-      name: Dot Accuracy@5
-    - type: dot_accuracy@10
-      value: 0.926678445229682
-      name: Dot Accuracy@10
-    - type: dot_precision@1
-      value: 0.6245583038869258
-      name: Dot Precision@1
-    - type: dot_precision@3
-      value: 0.2735571260306242
-      name: Dot Precision@3
-    - type: dot_precision@5
-      value: 0.17508833922261482
-      name: Dot Precision@5
-    - type: dot_precision@10
-      value: 0.0926678445229682
-      name: Dot Precision@10
-    - type: dot_recall@1
-      value: 0.6245583038869258
-      name: Dot Recall@1
-    - type: dot_recall@3
-      value: 0.8206713780918727
-      name: Dot Recall@3
-    - type: dot_recall@5
-      value: 0.8754416961130742
-      name: Dot Recall@5
-    - type: dot_recall@10
-      value: 0.926678445229682
-      name: Dot Recall@10
-    - type: dot_ndcg@10
-      value: 0.7790196193570564
-      name: Dot Ndcg@10
-    - type: dot_mrr@10
-      value: 0.7312496494475299
-      name: Dot Mrr@10
-    - type: dot_map@100
-      value: 0.7347864977321262
-      name: Dot Map@100
 ---

 # SentenceTransformer based on sentence-transformers/all-mpnet-base-v2
@@ -178,7 +118,7 @@ This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [s
 - **Model Type:** Sentence Transformer
 - **Base model:** [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) <!-- at revision 9a3225965996d404b775526de6dbfe85d3368642 -->
 - **Maximum Sequence Length:** 384 tokens
-- **Output Dimensionality:** 768 tokens
+- **Output Dimensionality:** 768 dimensions
 - **Similarity Function:** Cosine Similarity
 <!-- - **Training Dataset:** Unknown -->
 <!-- - **Language:** Unknown -->
@@ -218,9 +158,9 @@ from sentence_transformers import SentenceTransformer
 model = SentenceTransformer("sentence_transformers_model_id")
 # Run inference
 sentences = [
-    'sr designer',
-    'product design',
-    'talent acquisition',
+    'const parceria de negocio ii',
+    'business development (bizdev)',
+    'hr generalist',
 ]
 embeddings = model.encode(sentences)
 print(embeddings.shape)
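> Note: the snippet above uses the card's placeholder model id. Since this commit also pins `similarity_fn_name` to `"cosine"` in `config_sentence_transformers.json`, scores can be computed directly with `model.similarity`. A minimal sketch, assuming only the placeholder id from the card:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence_transformers_model_id")  # placeholder id

# Embed one job title and two candidate category labels.
embeddings = model.encode([
    "const parceria de negocio ii",
    "business development (bizdev)",
    "hr generalist",
])

# Applies the configured similarity function (cosine here) and returns
# a [3, 3] score matrix; row 0 scores the title against all three texts.
scores = model.similarity(embeddings, embeddings)
print(scores[0])
```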
@@ -266,36 +206,21 @@ You can finetune this model on your own dataset.

 | Metric              | Value      |
 |:--------------------|:-----------|
-| cosine_accuracy@1   | 0.6246     |
-| cosine_accuracy@3   | 0.8207     |
-| cosine_accuracy@5   | 0.8754     |
-| cosine_accuracy@10  | 0.9267     |
-| cosine_precision@1  | 0.6246     |
-| cosine_precision@3  | 0.2736     |
-| cosine_precision@5  | 0.1751     |
-| cosine_precision@10 | 0.0927     |
-| cosine_recall@1     | 0.6246     |
-| cosine_recall@3     | 0.8207     |
-| cosine_recall@5     | 0.8754     |
-| cosine_recall@10    | 0.9267     |
-| cosine_ndcg@10      | 0.779      |
-| cosine_mrr@10       | 0.7312     |
-| **cosine_map@100**  | **0.7348** |
-| dot_accuracy@1      | 0.6246     |
-| dot_accuracy@3      | 0.8207     |
-| dot_accuracy@5      | 0.8754     |
-| dot_accuracy@10     | 0.9267     |
-| dot_precision@1     | 0.6246     |
-| dot_precision@3     | 0.2736     |
-| dot_precision@5     | 0.1751     |
-| dot_precision@10    | 0.0927     |
-| dot_recall@1        | 0.6246     |
-| dot_recall@3        | 0.8207     |
-| dot_recall@5        | 0.8754     |
-| dot_recall@10       | 0.9267     |
-| dot_ndcg@10         | 0.779      |
-| dot_mrr@10          | 0.7312     |
-| dot_map@100         | 0.7348     |
+| cosine_accuracy@1   | 0.3202     |
+| cosine_accuracy@3   | 0.4547     |
+| cosine_accuracy@5   | 0.5224     |
+| cosine_accuracy@10  | 0.6185     |
+| cosine_precision@1  | 0.3202     |
+| cosine_precision@3  | 0.1516     |
+| cosine_precision@5  | 0.1045     |
+| cosine_precision@10 | 0.0618     |
+| cosine_recall@1     | 0.3202     |
+| cosine_recall@3     | 0.4547     |
+| cosine_recall@5     | 0.5224     |
+| cosine_recall@10    | 0.6185     |
+| **cosine_ndcg@10**  | **0.4558** |
+| cosine_mrr@10       | 0.4052     |
+| cosine_map@100      | 0.4178     |

 <!--
 ## Bias, Risks and Limitations
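> Note: the v2 scores above are markedly lower than the v1 values they replace; the commit also swaps in new training and evaluation data, so the two sets of numbers are not directly comparable. For readers unfamiliar with the metric names: accuracy@k checks whether the correct label lands in the top k results, and MRR@10 averages the reciprocal rank of the first hit. A self-contained illustration of the definitions (not the card's actual evaluator):

```python
def accuracy_at_k(ranks: list[int], k: int) -> float:
    """Fraction of queries whose correct label appears within the top k.
    `ranks` holds the 1-based rank of the correct label for each query."""
    return sum(r <= k for r in ranks) / len(ranks)

def mrr_at_k(ranks: list[int], k: int = 10) -> float:
    """Mean reciprocal rank, counting only hits within the top k."""
    return sum(1 / r if r <= k else 0.0 for r in ranks) / len(ranks)

# Toy example: correct label ranked 1st, 3rd, and 12th for three queries.
ranks = [1, 3, 12]
print(accuracy_at_k(ranks, 10))  # 2 of 3 queries hit within top 10 -> 0.667
print(mrr_at_k(ranks))           # (1 + 1/3 + 0) / 3 = 0.444
```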
@@ -316,19 +241,19 @@ You can finetune this model on your own dataset.
 #### Unnamed Dataset


-* Size: 5,005 training samples
+* Size: 4,372 training samples
 * Columns: <code>input</code> and <code>output</code>
 * Approximate statistics based on the first 1000 samples:
-  |         | input                                                                             | output                                                                            |
-  |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
-  | type    | string                                                                            | string                                                                            |
-  | details | <ul><li>min: 3 tokens</li><li>mean: 8.83 tokens</li><li>max: 21 tokens</li></ul>  | <ul><li>min: 3 tokens</li><li>mean: 7.21 tokens</li><li>max: 18 tokens</li></ul>  |
+  |         | input                                                                              | output                                                                            |
+  |:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
+  | type    | string                                                                             | string                                                                            |
+  | details | <ul><li>min: 3 tokens</li><li>mean: 10.55 tokens</li><li>max: 141 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 5.03 tokens</li><li>max: 12 tokens</li></ul>  |
 * Samples:
-  | input                                        | output                                                  |
-  |:---------------------------------------------|:--------------------------------------------------------|
-  | <code>fresador mecanico ii</code>            | <code>não encontrado (adicione nas observações)</code>  |
-  | <code>analista de sistemas ui ux iii</code>  | <code>product design</code>                             |
-  | <code>devops</code>                          | <code>devops engineering</code>                         |
+  | input                                                    | output                              |
+  |:---------------------------------------------------------|:------------------------------------|
+  | <code>analista de desenvolvimento organizacional</code>  | <code>learning & development</code> |
+  | <code>software engineer sr</code>                        | <code>software engineering</code>   |
+  | <code>gerente de grupo de produtos i</code>              | <code>product management</code>     |
 * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
   ```json
   {
@@ -342,19 +267,19 @@ You can finetune this model on your own dataset.
 #### Unnamed Dataset


-* Size: 1,132 evaluation samples
+* Size: 1,093 evaluation samples
 * Columns: <code>input</code> and <code>output</code>
 * Approximate statistics based on the first 1000 samples:
-  |         | input                                                                             | output                                                                            |
-  |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
-  | type    | string                                                                            | string                                                                            |
-  | details | <ul><li>min: 3 tokens</li><li>mean: 8.76 tokens</li><li>max: 20 tokens</li></ul>  | <ul><li>min: 3 tokens</li><li>mean: 7.08 tokens</li><li>max: 18 tokens</li></ul>  |
+  |         | input                                                                             | output                                                                            |
+  |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
+  | type    | string                                                                            | string                                                                            |
+  | details | <ul><li>min: 3 tokens</li><li>mean: 9.91 tokens</li><li>max: 122 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 4.97 tokens</li><li>max: 12 tokens</li></ul>  |
 * Samples:
-  | input                                     | output                                                  |
-  |:------------------------------------------|:--------------------------------------------------------|
-  | <code>produtor (a) de video pleno</code>  | <code>não encontrado (adicione nas observações)</code>  |
-  | <code>ai staff software engineer</code>   | <code>software engineering</code>                       |
-  | <code>montador digital i</code>           | <code>não encontrado (adicione nas observações)</code>  |
+  | input                                           | output                              |
+  |:------------------------------------------------|:------------------------------------|
+  | <code>analista de student experience ii</code>  | <code>customer support</code>       |
+  | <code>legal support</code>                      | <code>legal support</code>          |
+  | <code>analista de dho</code>                    | <code>learning & development</code> |
 * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
   ```json
   {
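> Note: both datasets feed `MultipleNegativesRankingLoss`, which treats each (input, output) row as a positive pair and every other output in the batch as a negative; that is also why the hyperparameters below switch to the `no_duplicates` batch sampler, which keeps repeated outputs out of a single batch so they cannot act as false negatives. A minimal training sketch under the sentence-transformers 3.x API; the column names match the card, while the sample rows and `output_dir` are illustrative:

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
    losses,
)
from sentence_transformers.training_args import BatchSamplers

model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")

# (input, output) pairs in the card's column layout; rows are illustrative.
train_dataset = Dataset.from_dict({
    "input": ["software engineer sr", "analista de dho"],
    "output": ["software engineering", "learning & development"],
})

# In-batch negatives: every other output in the batch is a negative.
loss = losses.MultipleNegativesRankingLoss(model)

args = SentenceTransformerTrainingArguments(
    output_dir="out",                           # illustrative path
    num_train_epochs=3,
    warmup_ratio=0.1,
    per_device_train_batch_size=2,              # tiny, to fit the toy data
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # `no_duplicates` from the card
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()
```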
@@ -368,6 +293,8 @@ You can finetune this model on your own dataset.

 - `eval_strategy`: steps
 - `warmup_ratio`: 0.1
+- `load_best_model_at_end`: True
+- `batch_sampler`: no_duplicates

 #### All Hyperparameters
 <details><summary>Click to expand</summary>
@@ -389,7 +316,7 @@ You can finetune this model on your own dataset.
 - `adam_beta2`: 0.999
 - `adam_epsilon`: 1e-08
 - `max_grad_norm`: 1.0
-- `num_train_epochs`: 3.0
+- `num_train_epochs`: 3
 - `max_steps`: -1
 - `lr_scheduler_type`: linear
 - `lr_scheduler_kwargs`: {}
@@ -429,7 +356,7 @@ You can finetune this model on your own dataset.
 - `disable_tqdm`: False
 - `remove_unused_columns`: True
 - `label_names`: None
-- `load_best_model_at_end`: False
+- `load_best_model_at_end`: True
 - `ignore_data_skip`: False
 - `fsdp`: []
 - `fsdp_min_num_params`: 0
@@ -459,6 +386,7 @@ You can finetune this model on your own dataset.
 - `gradient_checkpointing`: False
 - `gradient_checkpointing_kwargs`: None
 - `include_inputs_for_metrics`: False
+- `include_for_metrics`: []
 - `eval_do_concat_batches`: True
 - `fp16_backend`: auto
 - `push_to_hub_model_id`: None
@@ -482,35 +410,26 @@ You can finetune this model on your own dataset.
 - `eval_on_start`: False
 - `use_liger_kernel`: False
 - `eval_use_gather_object`: False
-- `batch_sampler`: batch_sampler
+- `average_tokens_across_devices`: False
+- `prompts`: None
+- `batch_sampler`: no_duplicates
 - `multi_dataset_batch_sampler`: proportional

 </details>

 ### Training Logs
-| Epoch  | Step | Training Loss | loss   | cosine_map@100 |
-|:------:|:----:|:-------------:|:------:|:--------------:|
-| 0      | 0    | -             | -      | 0.3578         |
-| 0.3195 | 200  | -             | 0.9975 | 0.5035         |
-| 0.6390 | 400  | -             | 0.8471 | 0.5845         |
-| 0.7987 | 500  | 1.0355        | -      | -              |
-| 0.9585 | 600  | -             | 0.7569 | 0.6157         |
-| 1.2780 | 800  | -             | 0.7542 | 0.6565         |
-| 1.5974 | 1000 | 0.648         | 0.6835 | 0.6786         |
-| 1.9169 | 1200 | -             | 0.6569 | 0.6851         |
-| 2.2364 | 1400 | -             | 0.6480 | 0.7167         |
-| 2.3962 | 1500 | 0.5253        | -      | -              |
-| 2.5559 | 1600 | -             | 0.6506 | 0.7110         |
-| 2.8754 | 1800 | -             | 0.6391 | 0.7348         |
+| Epoch | Step | cosine_ndcg@10 |
+|:-----:|:----:|:--------------:|
+| 0     | 0    | 0.4558         |


 ### Framework Versions
-- Python: 3.11.6
-- Sentence Transformers: 3.1.1
-- Transformers: 4.45.2
-- PyTorch: 2.5.1+cu124
+- Python: 3.11.0
+- Sentence Transformers: 3.3.1
+- Transformers: 4.46.3
+- PyTorch: 2.2.2
 - Accelerate: 1.1.1
-- Datasets: 2.14.4
+- Datasets: 3.1.0
 - Tokenizers: 0.20.3

 ## Citation
config.json CHANGED
@@ -19,6 +19,6 @@
   "pad_token_id": 1,
   "relative_attention_num_buckets": 32,
   "torch_dtype": "float32",
-  "transformers_version": "4.45.2",
+  "transformers_version": "4.46.3",
   "vocab_size": 30527
 }
config_sentence_transformers.json CHANGED
@@ -1,10 +1,10 @@
 {
   "__version__": {
-    "sentence_transformers": "3.1.1",
-    "transformers": "4.45.2",
-    "pytorch": "2.5.1+cu124"
+    "sentence_transformers": "3.3.1",
+    "transformers": "4.46.3",
+    "pytorch": "2.2.2"
   },
   "prompts": {},
   "default_prompt_name": null,
-  "similarity_fn_name": null
+  "similarity_fn_name": "cosine"
 }
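> Note: recording `similarity_fn_name` as `"cosine"` (previously `null`) makes the intended scoring function explicit in the saved config, so a reloaded model advertises it directly. A small check, again assuming the placeholder id from the card:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence_transformers_model_id")  # placeholder id

print(model.similarity_fn_name)  # "cosine", read from the updated config

a = model.encode(["product analyst ii"])
b = model.encode(["product management", "compliance"])
print(model.similarity(a, b))    # 1x2 cosine-similarity matrix
```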
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:b5855a55cd3835eec991b1c6b1d902581ed783c5a6ac097472f3296a3e642cc6
+oid sha256:b12db7f02b40be2f96f0917beaaf9462baea0bc46b6ca85a26613d5db4d792d4
 size 437967672
training_args.bin ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d1dab884e2c5d7c8d23955392573b1b67fdafe15fd6f1a52d4dbe0eaf6ab1baf
+size 5560
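> Note: the lines above are only the git-lfs pointer; the file itself is the torch-serialized `TrainingArguments` object that the trainer saves alongside checkpoints. A quick way to inspect a downloaded copy (path is illustrative):

```python
import torch

# training_args.bin is a pickled Python object, not a tensor file,
# so weights_only must be False.
args = torch.load("training_args.bin", weights_only=False)
print(args)
```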