universalml committed
Commit a39b864
1 Parent(s): 6a1372f

Update README.md

Files changed (1)
  1. README.md +1 -394
README.md CHANGED
@@ -12,13 +12,6 @@ tags:
  - dataset_size:45199
  - loss:MultipleNegativesRankingLoss
  widget:
- - source_sentence: प्रधानमन्त्री नरेन्द्र मोदी सरकारका असफलताहरू के के हुन्?
-   sentences:
-   - >-
-     पूर्वोत्तर राज्यहरूका मुख्य समस्याहरू के के हुन् र तिनीहरूको केन्द्रीय
-     सरकारसँग असन्तोष के हो?
-   - पूर्णांक के हो?
-   - नरेन्द्र मोदी सरकारले कुन क्षेत्रमा असफल भएको छ?
  - source_sentence: >-
      मैले विचार गर्नुपर्ने कलेजहरू के के हुन्, विचार गर्नुपर्ने कारकहरू: केएमसी
      म्यानिपल वा केएमसी मंगोलमा?
@@ -52,30 +45,12 @@ license: apache-2.0
 
  ### Model Description
  - **Model Type:** Sentence Transformer
- - **Base model:** [intfloat/multilingual-e5-large-instruct](https://huggingface.co/intfloat/multilingual-e5-large-instruct) <!-- at revision baa7be480a7de1539afce709c8f13f833a510e0a -->
  - **Maximum Sequence Length:** 512 tokens
  - **Output Dimensionality:** 1024 tokens
  - **Similarity Function:** Cosine Similarity
  - **Training Dataset:**
    - universalml0/nepali_embedding_dataset
- <!-- - **Language:** Unknown -->
- <!-- - **License:** Unknown -->
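The key specs above are easy to sanity-check; a minimal sketch, assuming the checkpoint id `universalml/Nepali_Embedding_Model` from the usage snippet below:

```python
from sentence_transformers import SentenceTransformer

# Checkpoint id assumed from the Usage section; adjust if it differs.
model = SentenceTransformer("universalml/Nepali_Embedding_Model")
print(model.max_seq_length)                      # expected: 512
print(model.get_sentence_embedding_dimension())  # expected: 1024
```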
-
- ### Model Sources
 
- - **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
-
- ### Full Model Architecture
-
- ```
- SentenceTransformer(
-   (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
-   (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
-   (2): Normalize()
- )
- ```
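For readers who want the equivalent without the `sentence-transformers` wrapper, here is a sketch of what the three modules above do with plain `transformers` (checkpoint id assumed from the Usage section; the mean pooling and L2 normalization mirror the `Pooling` and `Normalize` modules):

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

# Checkpoint id assumed from the Usage section below.
tokenizer = AutoTokenizer.from_pretrained("universalml/Nepali_Embedding_Model")
model = AutoModel.from_pretrained("universalml/Nepali_Embedding_Model")

batch = tokenizer(["म कसरी बिस्तारै तौल घटाउन सक्छु?"],
                  padding=True, truncation=True, max_length=512,
                  return_tensors="pt")
with torch.no_grad():
    hidden = model(**batch).last_hidden_state       # (batch, seq_len, 1024)

# (1) Pooling: attention-mask-aware mean over token embeddings
mask = batch["attention_mask"].unsqueeze(-1).float()
pooled = (hidden * mask).sum(dim=1) / mask.sum(dim=1)

# (2) Normalize: L2-normalize to unit length
embeddings = F.normalize(pooled, p=2, dim=1)        # (batch, 1024)
```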
 
 ## Usage
 
@@ -92,7 +67,7 @@ Then you can load this model and run inference.
  from sentence_transformers import SentenceTransformer
 
  # Download from the 🤗 Hub
- model = SentenceTransformer("universalml0/finetuned_embedding_model_e5-large-multilingual-large")
+ model = SentenceTransformer("universalml/Nepali_Embedding_Model")
  # Run inference
  sentences = [
      'म कसरी बिस्तारै तौल घटाउन सक्छु?',
@@ -107,372 +82,4 @@ print(embeddings.shape)
  similarities = model.similarity(embeddings, embeddings)
  print(similarities.shape)
  # [3, 3]
  ```
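Since the final `Normalize()` module makes every embedding unit-length, the cosine similarities above reduce to plain dot products; a quick check reusing the variables from the snippet (sketch):

```python
import numpy as np

# Embeddings are L2-normalized, so cosine similarity equals the dot product.
sims = np.asarray(model.similarity(embeddings, embeddings))
np.testing.assert_allclose(sims, embeddings @ embeddings.T, atol=1e-5)
```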
-
- <!--
- ### Direct Usage (Transformers)
-
- <details><summary>Click to see the direct usage in Transformers</summary>
-
- </details>
- -->
-
- <!--
- ### Downstream Usage (Sentence Transformers)
-
- You can finetune this model on your own dataset.
-
- <details><summary>Click to expand</summary>
-
- </details>
- -->
-
- <!--
- ### Out-of-Scope Use
-
- *List how the model may foreseeably be misused and address what users ought not to do with the model.*
- -->
-
- <!--
- ## Bias, Risks and Limitations
-
- *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
- -->
-
- <!--
- ### Recommendations
-
- *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
- -->
-
- ## Training Details
-
- ### Training Dataset
-
- #### universalml0/nepali_embedding_dataset
-
- * Dataset: universalml0/nepali_embedding_dataset
- * Size: 45,199 training samples
- * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
- * Approximate statistics based on the first 1000 samples:
-   | | anchor | positive | negative |
-   |:--------|:---------|:---------|:---------|
-   | type | string | string | string |
-   | details | <ul><li>min: 7 tokens</li><li>mean: 17.53 tokens</li><li>max: 486 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 17.68 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 18.9 tokens</li><li>max: 156 tokens</li></ul> |
- * Samples:
-   | anchor | positive | negative |
-   |:--------|:---------|:---------|
-   | <code>भारतीय सरकारले ५०० र १००० रुपयाको नोटमाथि प्रतिबन्ध लगाउनुको कारण के थियो?</code> | <code>भारतीय सरकारले ५०० र १००० को नोटलाई निष्क्रिय पारेको छ तर तिनीहरूलाई ५०० र २००० को नोटहरूसँग प्रतिस्थापन गरेको छ। के यो विरोधाभासी छैन?</code> | <code>भारतीय सरकारले किन चाहेको भए सीमित मात्रामा नोटहरू मुद्रण गर्न र बजेट घाटा क्लियर गर्न सक्दैन? विशेष गरी, किन कुनै पनि देशले यो गर्न सक्दैन?</code> |
-   | <code>भारतीय हुनुको अनुभूति कस्तो हुन्छ?</code> | <code>भारतीय हुनुको अनुभूति कस्तो हुन्छ?</code> | <code>भारतीय महिला हुनुको अनुभव कस्तो हुन्छ?</code> |
-   | <code>के कुनै व्यक्तिले edWisor मार्फत कुनै नौकरी पाएको छ?</code> | <code>एडवाइजर वैध छ र के कसैले यस मार्फत कुनै नौकरी पाएको छ?</code> | <code>एलिटमसको माध्यमबाट कसैले काम पाएको छ?</code> |
- * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
-   ```json
-   {
-       "scale": 20.0,
-       "similarity_fct": "cos_sim"
-   }
-   ```
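A minimal sketch of how such a run can be wired up with `sentence-transformers` 3.x; the base model, dataset id, and loss parameters are taken from this card, while the trainer setup itself is an assumption rather than the exact training script:

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import MultipleNegativesRankingLoss

# Base model and dataset id as listed above; columns: anchor, positive, negative.
model = SentenceTransformer("intfloat/multilingual-e5-large-instruct")
train_dataset = load_dataset("universalml0/nepali_embedding_dataset", split="train")

# scale=20.0 with the default cosine similarity, matching the parameters above.
loss = MultipleNegativesRankingLoss(model, scale=20.0)

trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()
```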
-
- ### Training Hyperparameters
- #### Non-Default Hyperparameters
-
- - `per_device_train_batch_size`: 4
- - `learning_rate`: 1e-06
- - `num_train_epochs`: 1
- - `warmup_ratio`: 0.3
- - `bf16`: True
- - `batch_sampler`: no_duplicates
- #### All Hyperparameters
- <details><summary>Click to expand</summary>
-
- - `overwrite_output_dir`: False
- - `do_predict`: False
- - `eval_strategy`: no
- - `prediction_loss_only`: True
- - `per_device_train_batch_size`: 4
- - `per_device_eval_batch_size`: 8
- - `per_gpu_train_batch_size`: None
- - `per_gpu_eval_batch_size`: None
- - `gradient_accumulation_steps`: 1
- - `eval_accumulation_steps`: None
- - `torch_empty_cache_steps`: None
- - `learning_rate`: 1e-06
- - `weight_decay`: 0.0
- - `adam_beta1`: 0.9
- - `adam_beta2`: 0.999
- - `adam_epsilon`: 1e-08
- - `max_grad_norm`: 1.0
- - `num_train_epochs`: 1
- - `max_steps`: -1
- - `lr_scheduler_type`: linear
- - `lr_scheduler_kwargs`: {}
- - `warmup_ratio`: 0.3
- - `warmup_steps`: 0
- - `log_level`: passive
- - `log_level_replica`: warning
- - `log_on_each_node`: True
- - `logging_nan_inf_filter`: True
- - `save_safetensors`: True
- - `save_on_each_node`: False
- - `save_only_model`: False
- - `restore_callback_states_from_checkpoint`: False
- - `no_cuda`: False
- - `use_cpu`: False
- - `use_mps_device`: False
- - `seed`: 42
- - `data_seed`: None
- - `jit_mode_eval`: False
- - `use_ipex`: False
- - `bf16`: True
- - `fp16`: False
- - `fp16_opt_level`: O1
- - `half_precision_backend`: auto
- - `bf16_full_eval`: False
- - `fp16_full_eval`: False
- - `tf32`: None
- - `local_rank`: 0
- - `ddp_backend`: None
- - `tpu_num_cores`: None
- - `tpu_metrics_debug`: False
- - `debug`: []
- - `dataloader_drop_last`: False
- - `dataloader_num_workers`: 0
- - `dataloader_prefetch_factor`: None
- - `past_index`: -1
- - `disable_tqdm`: False
- - `remove_unused_columns`: True
- - `label_names`: None
- - `load_best_model_at_end`: False
- - `ignore_data_skip`: False
- - `fsdp`: []
- - `fsdp_min_num_params`: 0
- - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- - `fsdp_transformer_layer_cls_to_wrap`: None
- - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- - `deepspeed`: None
- - `label_smoothing_factor`: 0.0
- - `optim`: adamw_torch
- - `optim_args`: None
- - `adafactor`: False
- - `group_by_length`: False
- - `length_column_name`: length
- - `ddp_find_unused_parameters`: None
- - `ddp_bucket_cap_mb`: None
- - `ddp_broadcast_buffers`: False
- - `dataloader_pin_memory`: True
- - `dataloader_persistent_workers`: False
- - `skip_memory_metrics`: True
- - `use_legacy_prediction_loop`: False
- - `push_to_hub`: False
- - `resume_from_checkpoint`: None
- - `hub_model_id`: None
- - `hub_strategy`: every_save
- - `hub_private_repo`: False
- - `hub_always_push`: False
- - `gradient_checkpointing`: False
- - `gradient_checkpointing_kwargs`: None
- - `include_inputs_for_metrics`: False
- - `eval_do_concat_batches`: True
- - `fp16_backend`: auto
- - `push_to_hub_model_id`: None
- - `push_to_hub_organization`: None
- - `mp_parameters`: 
- - `auto_find_batch_size`: False
- - `full_determinism`: False
- - `torchdynamo`: None
- - `ray_scope`: last
- - `ddp_timeout`: 1800
- - `torch_compile`: False
- - `torch_compile_backend`: None
- - `torch_compile_mode`: None
- - `dispatch_batches`: None
- - `split_batches`: None
- - `include_tokens_per_second`: False
- - `include_num_input_tokens_seen`: False
- - `neftune_noise_alpha`: None
- - `optim_target_modules`: None
- - `batch_eval_metrics`: False
- - `eval_on_start`: False
- - `eval_use_gather_object`: False
- - `batch_sampler`: no_duplicates
- - `multi_dataset_batch_sampler`: proportional
-
- </details>
-
- ### Training Logs
- <details><summary>Click to expand</summary>
-
- | Epoch | Step | Training Loss |
- |:------:|:-----:|:-------------:|
- | 0.0088 | 100 | 0.8671 |
- | 0.0177 | 200 | 0.8234 |
- | 0.0265 | 300 | 0.8223 |
- | 0.0354 | 400 | 0.7423 |
- | 0.0442 | 500 | 0.6605 |
- | 0.0531 | 600 | 0.5558 |
- | 0.0619 | 700 | 0.4076 |
- | 0.0708 | 800 | 0.3617 |
- | 0.0796 | 900 | 0.3087 |
- | 0.0885 | 1000 | 0.2747 |
- | 0.0973 | 1100 | 0.2409 |
- | 0.1062 | 1200 | 0.229 |
- | 0.1150 | 1300 | 0.209 |
- | 0.1239 | 1400 | 0.2556 |
- | 0.1327 | 1500 | 0.2536 |
- | 0.1416 | 1600 | 0.2092 |
- | 0.1504 | 1700 | 0.2464 |
- | 0.1593 | 1800 | 0.1727 |
- | 0.1681 | 1900 | 0.281 |
- | 0.1770 | 2000 | 0.2289 |
- | 0.1858 | 2100 | 0.2065 |
- | 0.1947 | 2200 | 0.1751 |
- | 0.2035 | 2300 | 0.231 |
- | 0.2124 | 2400 | 0.2127 |
- | 0.2212 | 2500 | 0.1908 |
- | 0.2301 | 2600 | 0.2131 |
- | 0.2389 | 2700 | 0.1704 |
- | 0.2478 | 2800 | 0.1923 |
- | 0.2566 | 2900 | 0.1635 |
- | 0.2655 | 3000 | 0.2061 |
- | 0.2743 | 3100 | 0.1843 |
- | 0.2832 | 3200 | 0.1443 |
- | 0.2920 | 3300 | 0.1513 |
- | 0.3009 | 3400 | 0.1879 |
- | 0.3097 | 3500 | 0.2372 |
- | 0.3186 | 3600 | 0.1542 |
- | 0.3274 | 3700 | 0.2523 |
- | 0.3363 | 3800 | 0.2055 |
- | 0.3451 | 3900 | 0.1474 |
- | 0.3540 | 4000 | 0.1647 |
- | 0.3628 | 4100 | 0.1615 |
- | 0.3717 | 4200 | 0.1271 |
- | 0.3805 | 4300 | 0.1451 |
- | 0.3894 | 4400 | 0.1887 |
- | 0.3982 | 4500 | 0.1334 |
- | 0.4071 | 4600 | 0.1962 |
- | 0.4159 | 4700 | 0.1695 |
- | 0.4248 | 4800 | 0.1561 |
- | 0.4336 | 4900 | 0.1146 |
- | 0.4425 | 5000 | 0.1381 |
- | 0.4513 | 5100 | 0.1452 |
- | 0.4602 | 5200 | 0.2388 |
- | 0.4690 | 5300 | 0.1951 |
- | 0.4779 | 5400 | 0.1142 |
- | 0.4867 | 5500 | 0.182 |
- | 0.4956 | 5600 | 0.1968 |
- | 0.5044 | 5700 | 0.1744 |
- | 0.5133 | 5800 | 0.1868 |
- | 0.5221 | 5900 | 0.1452 |
- | 0.5310 | 6000 | 0.1345 |
- | 0.5398 | 6100 | 0.1318 |
- | 0.5487 | 6200 | 0.218 |
- | 0.5575 | 6300 | 0.2118 |
- | 0.5664 | 6400 | 0.1972 |
- | 0.5752 | 6500 | 0.0935 |
- | 0.5841 | 6600 | 0.1991 |
- | 0.5929 | 6700 | 0.1252 |
- | 0.6018 | 6800 | 0.1128 |
- | 0.6106 | 6900 | 0.1585 |
- | 0.6195 | 7000 | 0.2293 |
- | 0.6283 | 7100 | 0.2104 |
- | 0.6372 | 7200 | 0.1416 |
- | 0.6460 | 7300 | 0.2004 |
- | 0.6549 | 7400 | 0.1446 |
- | 0.6637 | 7500 | 0.1171 |
- | 0.6726 | 7600 | 0.1386 |
- | 0.6814 | 7700 | 0.1291 |
- | 0.6903 | 7800 | 0.1546 |
- | 0.6991 | 7900 | 0.1484 |
- | 0.7080 | 8000 | 0.129 |
- | 0.7168 | 8100 | 0.1873 |
- | 0.7257 | 8200 | 0.1333 |
- | 0.7345 | 8300 | 0.1713 |
- | 0.7434 | 8400 | 0.1016 |
- | 0.7522 | 8500 | 0.1519 |
- | 0.7611 | 8600 | 0.1851 |
- | 0.7699 | 8700 | 0.144 |
- | 0.7788 | 8800 | 0.1488 |
- | 0.7876 | 8900 | 0.1568 |
- | 0.7965 | 9000 | 0.1672 |
- | 0.8053 | 9100 | 0.1236 |
- | 0.8142 | 9200 | 0.0973 |
- | 0.8230 | 9300 | 0.1491 |
- | 0.8319 | 9400 | 0.2251 |
- | 0.8407 | 9500 | 0.1433 |
- | 0.8496 | 9600 | 0.2634 |
- | 0.8584 | 9700 | 0.1723 |
- | 0.8673 | 9800 | 0.2373 |
- | 0.8761 | 9900 | 0.1065 |
- | 0.8850 | 10000 | 0.1578 |
- | 0.8938 | 10100 | 0.1127 |
- | 0.9027 | 10200 | 0.1632 |
- | 0.9115 | 10300 | 0.19 |
- | 0.9204 | 10400 | 0.0958 |
- | 0.9292 | 10500 | 0.1029 |
- | 0.9381 | 10600 | 0.1183 |
- | 0.9469 | 10700 | 0.1779 |
- | 0.9558 | 10800 | 0.1571 |
- | 0.9646 | 10900 | 0.1666 |
- | 0.9735 | 11000 | 0.1405 |
- | 0.9823 | 11100 | 0.147 |
- | 0.9912 | 11200 | 0.1428 |
- | 1.0    | 11300 | 0.1724 |
-
- </details>
-
- ### Framework Versions
- - Python: 3.9.5
- - Sentence Transformers: 3.0.1
- - Transformers: 4.44.2
- - PyTorch: 2.3.0+cu121
- - Accelerate: 0.33.0
- - Datasets: 2.21.0
- - Tokenizers: 0.19.1
-
- ## Citation
-
- ### BibTeX
-
- #### Sentence Transformers
- ```bibtex
- @inproceedings{reimers-2019-sentence-bert,
-     title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
-     author = "Reimers, Nils and Gurevych, Iryna",
-     booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
-     month = "11",
-     year = "2019",
-     publisher = "Association for Computational Linguistics",
-     url = "https://arxiv.org/abs/1908.10084",
- }
- ```
-
- #### MultipleNegativesRankingLoss
- ```bibtex
- @misc{henderson2017efficient,
-     title={Efficient Natural Language Response Suggestion for Smart Reply},
-     author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
-     year={2017},
-     eprint={1705.00652},
-     archivePrefix={arXiv},
-     primaryClass={cs.CL}
- }
- ```
-
- <!--
- ## Glossary
-
- *Clearly define terms in order to be accessible across audiences.*
- -->
-
- <!--
- ## Model Card Authors
-
- *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
- -->
-
- <!--
- ## Model Card Contact
-
- *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
- -->