---
base_model: sentence-transformers/all-MiniLM-L6-v2
language:
- en
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:1830648
- loss:AnglELoss
widget:
- source_sentence: crunchy chips
  sentences:
  - big chips spiced gouda
  - purse
  - macaroni
- source_sentence: genuine leather luggage
  sentences:
  - janatte luggage
  - bomb chemise
  - purse
- source_sentence: head covers Rashguard
  sentences:
  - Double Shaded Blue Clutch
  - Rashguard
  - bathing costume
- source_sentence: hand Made Sweatpants
  sentences:
  - acid cleanser
  - reflective weave sweatpants
  - rashguard
- source_sentence: siamy wrap
  sentences:
  - siamy
  - hair revival
  - backpack
---

# all-MiniLM-L6-v5-pair_score

This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description

- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2)
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
- **Language:** en
- **License:** apache-2.0

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.

```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
    'siamy wrap',
    'siamy',
    'hair revival',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
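The widget examples in the metadata suggest query-to-product matching. As a minimal ranking sketch building on the snippet above (the query and candidate strings are illustrative, and `sentence_transformers_model_id` is again a placeholder for this model's Hub id):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence_transformers_model_id")

# Illustrative query and candidate product names
query = "genuine leather luggage"
candidates = ["janatte luggage", "bomb chemise", "purse"]

# Because the model ends in a Normalize() module, cosine similarity
# over these embeddings reduces to a dot product of unit vectors.
query_emb = model.encode([query])
cand_embs = model.encode(candidates)
scores = model.similarity(query_emb, cand_embs)  # shape [1, 3]

# Rank candidates by descending similarity to the query
for candidate, score in sorted(
    zip(candidates, scores[0].tolist()), key=lambda pair: -pair[1]
):
    print(f"{score:.4f}  {candidate}")
```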
## Training Details

### Training Hyperparameters

#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `learning_rate`: 2e-05
- `num_train_epochs`: 2
- `warmup_ratio`: 0.1
- `fp16`: True

#### All Hyperparameters

<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 2
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`: 
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional

</details>
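The card lists `loss:AnglELoss` and the hyperparameters above but not the training script itself. The following is a minimal sketch of how such a run could look with the Sentence Transformers v3 trainer; the dataset rows, column names, and scores are illustrative placeholders, and the evaluation split implied by `eval_strategy: steps` is omitted:

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import AnglELoss

# Base model being finetuned
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Illustrative pair-score rows; the real dataset has 1,830,648 pairs.
# AnglELoss expects two texts per row plus a float similarity score.
train_dataset = Dataset.from_dict({
    "sentence1": ["crunchy chips", "genuine leather luggage"],
    "sentence2": ["big chips spiced gouda", "purse"],
    "score": [0.9, 0.1],  # made-up labels for illustration
})

loss = AnglELoss(model)

# Mirrors the non-default hyperparameters reported above
args = SentenceTransformerTrainingArguments(
    output_dir="all-MiniLM-L6-v5-pair_score",
    num_train_epochs=2,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    learning_rate=2e-5,
    warmup_ratio=0.1,
    fp16=True,  # requires a CUDA GPU; drop for CPU-only runs
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()
```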
### Training Logs
<details><summary>Click to expand</summary>

| Epoch  | Step  | Training Loss | loss   |
|:------:|:-----:|:-------------:|:------:|
| 0.0070 | 100   | 16.865        | -      |
| 0.0140 | 200   | 16.1556       | -      |
| 0.0210 | 300   | 14.8008       | -      |
| 0.0280 | 400   | 12.4025       | -      |
| 0.0350 | 500   | 9.7465        | -      |
| 0.0420 | 600   | 8.448         | -      |
| 0.0489 | 700   | 8.1951        | -      |
| 0.0559 | 800   | 8.1093        | -      |
| 0.0629 | 900   | 8.0567        | -      |
| 0.0699 | 1000  | 8.0401        | -      |
| 0.0769 | 1100  | 7.9491        | -      |
| 0.0839 | 1200  | 7.9494        | -      |
| 0.0909 | 1300  | 7.9386        | -      |
| 0.0979 | 1400  | 7.9033        | -      |
| 0.1049 | 1500  | 7.9055        | -      |
| 0.1119 | 1600  | 7.9203        | -      |
| 0.1189 | 1700  | 7.8381        | -      |
| 0.1259 | 1800  | 7.8679        | -      |
| 0.1328 | 1900  | 7.8686        | -      |
| 0.1398 | 2000  | 7.8252        | -      |
| 0.1468 | 2100  | 7.856         | -      |
| 0.1538 | 2200  | 7.8301        | -      |
| 0.1608 | 2300  | 7.8595        | -      |
| 0.1678 | 2400  | 7.8138        | -      |
| 0.1748 | 2500  | 7.812         | -      |
| 0.1818 | 2600  | 7.8261        | -      |
| 0.1888 | 2700  | 7.7988        | -      |
| 0.1958 | 2800  | 7.7965        | -      |
| 0.2028 | 2900  | 7.783         | -      |
| 0.2098 | 3000  | 7.7752        | -      |
| 0.2168 | 3100  | 7.7715        | -      |
| 0.2237 | 3200  | 7.7903        | -      |
| 0.2307 | 3300  | 7.7656        | -      |
| 0.2377 | 3400  | 7.749         | -      |
| 0.2447 | 3500  | 7.7662        | -      |
| 0.2517 | 3600  | 7.7492        | -      |
| 0.2587 | 3700  | 7.737         | -      |
| 0.2657 | 3800  | 7.7232        | -      |
| 0.2727 | 3900  | 7.7616        | -      |
| 0.2797 | 4000  | 7.7391        | -      |
| 0.2867 | 4100  | 7.7552        | -      |
| 0.2937 | 4200  | 7.7273        | -      |
| 0.3007 | 4300  | 7.7216        | -      |
| 0.3076 | 4400  | 7.7371        | -      |
| 0.3146 | 4500  | 7.7426        | -      |
| 0.3216 | 4600  | 7.7406        | -      |
| 0.3286 | 4700  | 7.712         | -      |
| 0.3356 | 4800  | 7.7466        | -      |
| 0.3426 | 4900  | 7.7058        | -      |
| 0.3496 | 5000  | 7.7139        | 7.6896 |
| 0.3566 | 5100  | 7.7457        | -      |
| 0.3636 | 5200  | 7.7172        | -      |
| 0.3706 | 5300  | 7.739         | -      |
| 0.3776 | 5400  | 7.7259        | -      |
| 0.3846 | 5500  | 7.6977        | -      |
| 0.3916 | 5600  | 7.7237        | -      |
| 0.3985 | 5700  | 7.7118        | -      |
| 0.4055 | 5800  | 7.7099        | -      |
| 0.4125 | 5900  | 7.7142        | -      |
| 0.4195 | 6000  | 7.6885        | -      |
| 0.4265 | 6100  | 7.6799        | -      |
| 0.4335 | 6200  | 7.7039        | -      |
| 0.4405 | 6300  | 7.6825        | -      |
| 0.4475 | 6400  | 7.6846        | -      |
| 0.4545 | 6500  | 7.7078        | -      |
| 0.4615 | 6600  | 7.6945        | -      |
| 0.4685 | 6700  | 7.7017        | -      |
| 0.4755 | 6800  | 7.6781        | -      |
| 0.4825 | 6900  | 7.6885        | -      |
| 0.4894 | 7000  | 7.7426        | -      |
| 0.4964 | 7100  | 7.6809        | -      |
| 0.5034 | 7200  | 7.6977        | -      |
| 0.5104 | 7300  | 7.6964        | -      |
| 0.5174 | 7400  | 7.6834        | -      |
| 0.5244 | 7500  | 7.6593        | -      |
| 0.5314 | 7600  | 7.6745        | -      |
| 0.5384 | 7700  | 7.6587        | -      |
| 0.5454 | 7800  | 7.6389        | -      |
| 0.5524 | 7900  | 7.6298        | -      |
| 0.5594 | 8000  | 7.6693        | -      |
| 0.5664 | 8100  | 7.6454        | -      |
| 0.5733 | 8200  | 7.6491        | -      |
| 0.5803 | 8300  | 7.661         | -      |
| 0.5873 | 8400  | 7.6525        | -      |
| 0.5943 | 8500  | 7.6669        | -      |
| 0.6013 | 8600  | 7.6379        | -      |
| 0.6083 | 8700  | 7.6706        | -      |
| 0.6153 | 8800  | 7.6487        | -      |
| 0.6223 | 8900  | 7.6607        | -      |
| 0.6293 | 9000  | 7.6334        | -      |
| 0.6363 | 9100  | 7.6891        | -      |
| 0.6433 | 9200  | 7.734         | -      |
| 0.6503 | 9300  | 7.6283        | -      |
| 0.6573 | 9400  | 7.6461        | -      |
| 0.6642 | 9500  | 7.623         | -      |
| 0.6712 | 9600  | 7.6251        | -      |
| 0.6782 | 9700  | 7.6663        | -      |
| 0.6852 | 9800  | 7.6376        | -      |
| 0.6922 | 9900  | 7.6834        | -      |
| 0.6992 | 10000 | 7.6851        | 7.6099 |
| 0.7062 | 10100 | 7.6034        | -      |
| 0.7132 | 10200 | 7.6512        | -      |
| 0.7202 | 10300 | 7.6413        | -      |
| 0.7272 | 10400 | 7.6083        | -      |
| 0.7342 | 10500 | 7.6475        | -      |
| 0.7412 | 10600 | 7.61          | -      |
| 0.7481 | 10700 | 7.6404        | -      |
| 0.7551 | 10800 | 7.6308        | -      |
| 0.7621 | 10900 | 7.638         | -      |
| 0.7691 | 11000 | 7.5954        | -      |
| 0.7761 | 11100 | 7.6037        | -      |
| 0.7831 | 11200 | 7.6405        | -      |
| 0.7901 | 11300 | 7.6396        | -      |
| 0.7971 | 11400 | 7.5898        | -      |
| 0.8041 | 11500 | 7.644         | -      |
| 0.8111 | 11600 | 7.639         | -      |
| 0.8181 | 11700 | 7.6146        | -      |
| 0.8251 | 11800 | 7.6076        | -      |
| 0.8321 | 11900 | 7.5997        | -      |
| 0.8390 | 12000 | 7.6196        | -      |
| 0.8460 | 12100 | 7.6139        | -      |
| 0.8530 | 12200 | 7.6335        | -      |
| 0.8600 | 12300 | 7.6057        | -      |
| 0.8670 | 12400 | 7.5759        | -      |
| 0.8740 | 12500 | 7.6044        | -      |
| 0.8810 | 12600 | 7.589         | -      |
| 0.8880 | 12700 | 7.5871        | -      |
| 0.8950 | 12800 | 7.6161        | -      |
| 0.9020 | 12900 | 7.5797        | -      |
| 0.9090 | 13000 | 7.6202        | -      |
| 0.9160 | 13100 | 7.6116        | -      |
| 0.9229 | 13200 | 7.6253        | -      |
| 0.9299 | 13300 | 7.5891        | -      |
| 0.9369 | 13400 | 7.5856        | -      |
| 0.9439 | 13500 | 7.5824        | -      |
| 0.9509 | 13600 | 7.6288        | -      |
| 0.9579 | 13700 | 7.5653        | -      |
| 0.9649 | 13800 | 7.6073        | -      |
| 0.9719 | 13900 | 7.5958        | -      |
| 0.9789 | 14000 | 7.599         | -      |
| 0.9859 | 14100 | 7.5982        | -      |
| 0.9929 | 14200 | 7.5634        | -      |
| 0.9999 | 14300 | 7.5923        | -      |
| 1.0069 | 14400 | 7.6072        | -      |
| 1.0138 | 14500 | 7.5589        | -      |
| 1.0208 | 14600 | 7.6           | -      |
| 1.0278 | 14700 | 7.5464        | -      |
| 1.0348 | 14800 | 7.5824        | -      |
| 1.0418 | 14900 | 7.5528        | -      |
| 1.0488 | 15000 | 7.568         | 7.5618 |
| 1.0558 | 15100 | 7.559         | -      |
| 1.0628 | 15200 | 7.5555        | -      |
| 1.0698 | 15300 | 7.552         | -      |
| 1.0768 | 15400 | 7.5851        | -      |
| 1.0838 | 15500 | 7.5256        | -      |
| 1.0908 | 15600 | 7.5683        | -      |
| 1.0977 | 15700 | 7.5909        | -      |
| 1.1047 | 15800 | 7.5655        | -      |
| 1.1117 | 15900 | 7.5476        | -      |
| 1.1187 | 16000 | 7.5721        | -      |
| 1.1257 | 16100 | 7.5593        | -      |
| 1.1327 | 16200 | 7.5783        | -      |
| 1.1397 | 16300 | 7.5905        | -      |
| 1.1467 | 16400 | 7.542         | -      |
| 1.1537 | 16500 | 7.5794        | -      |
| 1.1607 | 16600 | 7.5669        | -      |
| 1.1677 | 16700 | 7.5738        | -      |
| 1.1747 | 16800 | 7.5431        | -      |
| 1.1817 | 16900 | 7.5401        | -      |
| 1.1886 | 17000 | 7.5629        | -      |
| 1.1956 | 17100 | 7.5534        | -      |
| 1.2026 | 17200 | 7.571         | -      |
| 1.2096 | 17300 | 7.5387        | -      |
| 1.2166 | 17400 | 7.5596        | -      |
| 1.2236 | 17500 | 7.5427        | -      |

</details>
### Framework Versions

- Python: 3.8.10
- Sentence Transformers: 3.1.1
- Transformers: 4.45.2
- PyTorch: 2.4.1+cu118
- Accelerate: 1.0.1
- Datasets: 3.0.1
- Tokenizers: 0.20.3

## Citation

### BibTeX

#### Sentence Transformers

```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### AnglELoss

```bibtex
@misc{li2023angleoptimized,
    title={AnglE-optimized Text Embeddings},
    author={Xianming Li and Jing Li},
    year={2023},
    eprint={2309.12871},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```