---
language:
- en
license: apache-2.0
library_name: sentence-transformers
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:100K<n<1M
---

# SentenceTransformer based on microsoft/mpnet-base

This is a [sentence-transformers](https://sbert.net) model trained on the multi_nli, snli and stsb datasets. It maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description

- **Model Type:** Sentence Transformer
- **Base model:** [microsoft/mpnet-base](https://huggingface.co/microsoft/mpnet-base)
- **Maximum Sequence Length:** 384 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Dot Product
- **Training Datasets:**
    - [multi_nli](https://huggingface.co/datasets/nyu-mll/multi_nli)
    - [snli](https://huggingface.co/datasets/stanfordnlp/snli)
    - [stsb](https://huggingface.co/datasets/mteb/stsbenchmark-sts)
- **Language:** en
- **License:** apache-2.0

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.

```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("tomaarsen/mpnet-base-allnli")
# Run inference
sentences = [
    "Rouen is the ancient center of Normandy's thriving textile industry, and the place of Joan of Arc's martyrdom ' a national symbol of resistance to tyranny.",
    "Joan of Arc sacrificed her life at Rouen, which became an enduring symbol of opposition to tyranny.",
    "The islands are part of France now instead of just colonies.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```

## Evaluation

### Metrics

#### Semantic Similarity

* Dataset: `sts-dev`
* Evaluated with [EmbeddingSimilarityEvaluator](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)

| Metric             | Value      |
|:-------------------|:-----------|
| pearson_cosine     | 0.8344     |
| spearman_cosine    | 0.8295     |
| pearson_manhattan  | 0.8317     |
| spearman_manhattan | 0.8332     |
| pearson_euclidean  | 0.8273     |
| spearman_euclidean | 0.8295     |
| pearson_dot        | 0.8344     |
| **spearman_dot**   | **0.8295** |
| pearson_max        | 0.8344     |
| spearman_max       | 0.8332     |

#### Semantic Similarity

* Dataset: `sts-test`
* Evaluated with [EmbeddingSimilarityEvaluator](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)

| Metric              | Value      |
|:--------------------|:-----------|
| pearson_cosine      | 0.7776     |
| **spearman_cosine** | **0.7643** |
| pearson_manhattan   | 0.7788     |
| spearman_manhattan  | 0.7659     |
| pearson_euclidean   | 0.7763     |
| spearman_euclidean  | 0.7643     |
| pearson_dot         | 0.7776     |
| spearman_dot        | 0.7643     |
| pearson_max         | 0.7788     |
| spearman_max        | 0.7659     |
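Score tables like the ones above can be regenerated with the evaluator linked in this section. The snippet below is a minimal sketch, not the card's original evaluation script: the `validation` split and the `sentence1`/`sentence2`/`score` column names are assumptions about the stsb dataset described under Training Details.

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("tomaarsen/mpnet-base-allnli")

# Assumption: STS Benchmark validation split with sentence1/sentence2/score columns.
stsb_dev = load_dataset("mteb/stsbenchmark-sts", split="validation")

dev_evaluator = EmbeddingSimilarityEvaluator(
    sentences1=stsb_dev["sentence1"],
    sentences2=stsb_dev["sentence2"],
    scores=stsb_dev["score"],  # Pearson/Spearman are insensitive to the score scale
    name="sts-dev",
)
results = dev_evaluator(model)
print(results["sts-dev_spearman_cosine"])
```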
## Training Details

### Training Datasets

#### multi_nli

* Dataset: [multi_nli](https://huggingface.co/datasets/nyu-mll/multi_nli) at [da70db2](https://huggingface.co/datasets/nyu-mll/multi_nli/tree/da70db2af9d09693783c3320c4249840212ee221)
* Size: 392,702 training samples
* Columns: premise, hypothesis, and label
* Approximate statistics based on the first 1000 samples:
  |         | premise | hypothesis | label |
  |:--------|:--------|:-----------|:------|
  | type    | string  | string     | int   |
  | details |         |            |       |
* Samples:
  | premise | hypothesis | label |
  |:--------|:-----------|:------|
  | Conceptually cream skimming has two basic dimensions - product and geography. | Product and geography are what make cream skimming work. | 1 |
  | you know during the season and i guess at at your level uh you lose them to the next level if if they decide to recall the the parent team the Braves decide to call to recall a guy from triple A then a double A guy goes up to replace him and a single A guy goes up to replace him | You lose the things to the following level if the people recall. | 0 |
  | One of our number will carry out your instructions minutely. | A member of my team will execute your orders with immense precision. | 0 |
* Loss: [SoftmaxLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#softmaxloss)

#### snli

* Dataset: [snli](https://huggingface.co/datasets/stanfordnlp/snli) at [cdb5c3d](https://huggingface.co/datasets/stanfordnlp/snli/tree/cdb5c3d5eed6ead6e5a341c8e56e669bb666725b)
* Size: 549,367 training samples
* Columns: snli_premise, hypothesis, and label
* Approximate statistics based on the first 1000 samples:
  |         | snli_premise | hypothesis | label |
  |:--------|:-------------|:-----------|:------|
  | type    | string       | string     | int   |
  | details |              |            |       |
* Samples:
  | snli_premise | hypothesis | label |
  |:-------------|:-----------|:------|
  | A person on a horse jumps over a broken down airplane. | A person is training his horse for a competition. | 1 |
  | A person on a horse jumps over a broken down airplane. | A person is at a diner, ordering an omelette. | 2 |
  | A person on a horse jumps over a broken down airplane. | A person is outdoors, on a horse. | 0 |
* Loss: [SoftmaxLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#softmaxloss)
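Both NLI datasets are trained with SoftmaxLoss, which classifies each (premise, hypothesis) pair into the three NLI labels from the concatenation of the two sentence embeddings and their element-wise difference. A minimal setup sketch, assuming `microsoft/mpnet-base` as the starting checkpoint per the model description above:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import SoftmaxLoss

model = SentenceTransformer("microsoft/mpnet-base")

# Classifier head over (u, v, |u - v|) with the 3 NLI labels used above:
# 0 = entailment, 1 = neutral, 2 = contradiction.
train_loss = SoftmaxLoss(
    model=model,
    sentence_embedding_dimension=model.get_sentence_embedding_dimension(),  # 768
    num_labels=3,
)
```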
#### stsb

* Dataset: [stsb](https://huggingface.co/datasets/mteb/stsbenchmark-sts) at [8913289](https://huggingface.co/datasets/mteb/stsbenchmark-sts/tree/8913289635987208e6e7c72789e4be2fe94b6abd)
* Size: 5,749 training samples
* Columns: sentence1, sentence2, and label
* Approximate statistics based on the first 1000 samples:
  |         | sentence1 | sentence2 | label |
  |:--------|:----------|:----------|:------|
  | type    | string    | string    | float |
  | details |           |           |       |
* Samples:
  | sentence1 | sentence2 | label |
  |:----------|:----------|:------|
  | A plane is taking off. | An air plane is taking off. | 1.0 |
  | A man is playing a large flute. | A man is playing a flute. | 0.76 |
  | A man is spreading shreded cheese on a pizza. | A man is spreading shredded cheese on an uncooked pizza. | 0.76 |
* Loss: [CosineSimilarityLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
  ```json
  {
      "loss_fct": "torch.nn.modules.loss.MSELoss"
  }
  ```
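Illustratively, this loss regresses the cosine similarity of the two sentence embeddings against the gold score using the `MSELoss` listed above. A minimal sketch with toy pairs taken from the samples table (the base checkpoint is an assumption, not part of this card):

```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import CosineSimilarityLoss

model = SentenceTransformer("microsoft/mpnet-base")

# Pairs in the stsb format: two sentences plus a similarity label in [0, 1].
train_dataset = Dataset.from_dict({
    "sentence1": ["A plane is taking off.", "A man is playing a large flute."],
    "sentence2": ["An air plane is taking off.", "A man is playing a flute."],
    "label": [1.0, 0.76],
})

# cos_sim(u, v) is compared to the label; loss_fct defaults to torch.nn.MSELoss().
train_loss = CosineSimilarityLoss(model=model)
```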
### Evaluation Datasets

#### multi_nli

* Dataset: [multi_nli](https://huggingface.co/datasets/nyu-mll/multi_nli) at [da70db2](https://huggingface.co/datasets/nyu-mll/multi_nli/tree/da70db2af9d09693783c3320c4249840212ee221)
* Size: 100 evaluation samples
* Columns: premise, hypothesis, and label
* Approximate statistics based on the first 100 samples:
  |         | premise | hypothesis | label |
  |:--------|:--------|:-----------|:------|
  | type    | string  | string     | int   |
  | details |         |            |       |
* Samples:
  | premise | hypothesis | label |
  |:--------|:-----------|:------|
  | The new rights are nice enough | Everyone really likes the newest benefits | 1 |
  | This site includes a list of all award winners and a searchable database of Government Executive articles. | The Government Executive articles housed on the website are not able to be searched. | 2 |
  | uh i don't know i i have mixed emotions about him uh sometimes i like him but at the same times i love to see somebody beat him | I like him for the most part, but would still enjoy seeing someone beat him. | 0 |
* Loss: [SoftmaxLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#softmaxloss)

#### snli

* Dataset: [snli](https://huggingface.co/datasets/stanfordnlp/snli) at [cdb5c3d](https://huggingface.co/datasets/stanfordnlp/snli/tree/cdb5c3d5eed6ead6e5a341c8e56e669bb666725b)
* Size: 9,842 evaluation samples
* Columns: snli_premise, hypothesis, and label
* Approximate statistics based on the first 1000 samples:
  |         | snli_premise | hypothesis | label |
  |:--------|:-------------|:-----------|:------|
  | type    | string       | string     | int   |
  | details |              |            |       |
* Samples:
  | snli_premise | hypothesis | label |
  |:-------------|:-----------|:------|
  | Two women are embracing while holding to go packages. | The sisters are hugging goodbye while holding to go packages after just eating lunch. | 1 |
  | Two women are embracing while holding to go packages. | Two woman are holding packages. | 0 |
  | Two women are embracing while holding to go packages. | The men are fighting outside a deli. | 2 |
* Loss: [SoftmaxLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#softmaxloss)

#### stsb

* Dataset: [stsb](https://huggingface.co/datasets/mteb/stsbenchmark-sts) at [8913289](https://huggingface.co/datasets/mteb/stsbenchmark-sts/tree/8913289635987208e6e7c72789e4be2fe94b6abd)
* Size: 1,500 evaluation samples
* Columns: sentence1, sentence2, and label
* Approximate statistics based on the first 1000 samples:
  |         | sentence1 | sentence2 | label |
  |:--------|:----------|:----------|:------|
  | type    | string    | string    | float |
  | details |           |           |       |
* Samples:
  | sentence1 | sentence2 | label |
  |:----------|:----------|:------|
  | A man with a hard hat is dancing. | A man wearing a hard hat is dancing. | 1.0 |
  | A young child is riding a horse. | A child is riding a horse. | 0.95 |
  | A man is feeding a mouse to a snake. | The man is feeding a mouse to the snake. | 1.0 |
* Loss: [CosineSimilarityLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
  ```json
  {
      "loss_fct": "torch.nn.modules.loss.MSELoss"
  }
  ```

### Training Hyperparameters

#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `learning_rate`: 2e-05
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `seed`: 33
- `bf16`: True
- `load_best_model_at_end`: True
- `push_to_hub`: True
- `hub_model_id`: tomaarsen/mpnet-base-allnli
- `hub_private_repo`: True
- `multi_dataset_batch_sampler`: round_robin
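Spelled out in code, the non-default hyperparameters above correspond roughly to the following training arguments. This is a sketch assuming the `SentenceTransformerTrainingArguments` API from the library version listed under Framework Versions; the `output_dir` is a placeholder, not taken from this card.

```python
from sentence_transformers.training_args import (
    MultiDatasetBatchSamplers,
    SentenceTransformerTrainingArguments,
)

args = SentenceTransformerTrainingArguments(
    output_dir="models/mpnet-base-allnli",  # placeholder path
    eval_strategy="steps",
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    learning_rate=2e-5,
    num_train_epochs=1,
    warmup_ratio=0.1,
    seed=33,
    bf16=True,
    load_best_model_at_end=True,
    push_to_hub=True,
    hub_model_id="tomaarsen/mpnet-base-allnli",
    hub_private_repo=True,
    multi_dataset_batch_sampler=MultiDatasetBatchSamplers.ROUND_ROBIN,
)
```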
#### All Hyperparameters

<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 33
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: True
- `resume_from_checkpoint`: None
- `hub_model_id`: tomaarsen/mpnet-base-allnli
- `hub_strategy`: every_save
- `hub_private_repo`: True
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`: 
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin

</details>
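Tying the pieces together, a multi-dataset run with the `round_robin` batch sampler can be sketched as below. The dataset preprocessing (column selection, dropping SNLI's unlabeled pairs, score normalization) consists of assumptions about the source datasets rather than steps documented in this card, and `args` refers to the training-arguments sketch above.

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import CosineSimilarityLoss, SoftmaxLoss

model = SentenceTransformer("microsoft/mpnet-base")

# Assumed preprocessing: keep only the text/label columns, drop SNLI's
# unlabeled (-1) pairs, and scale stsb's 0-5 scores into [0, 1].
train_datasets = {
    "multi_nli": load_dataset("nyu-mll/multi_nli", split="train")
        .select_columns(["premise", "hypothesis", "label"]),
    "snli": load_dataset("stanfordnlp/snli", split="train")
        .filter(lambda example: example["label"] != -1),
    "stsb": load_dataset("mteb/stsbenchmark-sts", split="train")
        .map(lambda example: {"label": example["score"] / 5.0})
        .select_columns(["sentence1", "sentence2", "label"]),
}

# One loss per dataset: SoftmaxLoss for the NLI pairs, CosineSimilarityLoss for stsb.
nli_loss = SoftmaxLoss(model, model.get_sentence_embedding_dimension(), num_labels=3)
losses = {
    "multi_nli": nli_loss,
    "snli": nli_loss,
    "stsb": CosineSimilarityLoss(model),
}

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,  # SentenceTransformerTrainingArguments from the sketch above
    train_dataset=train_datasets,
    loss=losses,
)
trainer.train()
```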
### Training Logs

| Epoch      | Step    | Training Loss | multi nli loss | snli loss  | stsb loss  | sts-dev_spearman_dot | sts-test_spearman_cosine |
|:----------:|:-------:|:-------------:|:--------------:|:----------:|:----------:|:--------------------:|:------------------------:|
| 0.0370     | 10      | 0.8347        | -              | -          | -          | -                    | -                        |
| 0.0741     | 20      | 0.8269        | -              | -          | -          | -                    | -                        |
| 0.1111     | 30      | 0.7036        | 1.0978         | 1.0984     | 0.0830     | 0.6636               | -                        |
| 0.1481     | 40      | 0.7889        | -              | -          | -          | -                    | -                        |
| 0.1852     | 50      | 0.7948        | -              | -          | -          | -                    | -                        |
| 0.2222     | 60      | 0.688         | 1.0976         | 1.0961     | 0.0679     | 0.7124               | -                        |
| 0.2593     | 70      | 0.7911        | -              | -          | -          | -                    | -                        |
| 0.2963     | 80      | 0.7847        | -              | -          | -          | -                    | -                        |
| 0.3333     | 90      | 0.6801        | 1.0950         | 1.0942     | 0.0522     | 0.7810               | -                        |
| 0.3704     | 100     | 0.7837        | -              | -          | -          | -                    | -                        |
| 0.4074     | 110     | 0.7803        | -              | -          | -          | -                    | -                        |
| 0.4444     | 120     | 0.6756        | 1.0978         | 1.0929     | 0.0441     | 0.8157               | -                        |
| 0.4815     | 130     | 0.7829        | -              | -          | -          | -                    | -                        |
| 0.5185     | 140     | 0.7789        | -              | -          | -          | -                    | -                        |
| 0.5556     | 150     | 0.6756        | 1.0954         | 1.0911     | 0.0433     | 0.8215               | -                        |
| 0.5926     | 160     | 0.7802        | -              | -          | -          | -                    | -                        |
| 0.6296     | 170     | 0.7751        | -              | -          | -          | -                    | -                        |
| 0.6667     | 180     | 0.6679        | 1.0934         | 1.0885     | 0.0401     | 0.8235               | -                        |
| 0.7037     | 190     | 0.7755        | -              | -          | -          | -                    | -                        |
| 0.7407     | 200     | 0.775         | -              | -          | -          | -                    | -                        |
| **0.7778** | **210** | **0.6694**    | **1.0919**     | **1.0859** | **0.0377** | **0.8295**           | **-**                    |
| 0.8148     | 220     | 0.7733        | -              | -          | -          | -                    | -                        |
| 0.8519     | 230     | 0.772         | -              | -          | -          | -                    | -                        |
| 0.8889     | 240     | 0.6656        | 1.0891         | 1.0838     | 0.0365     | 0.8292               | -                        |
| 0.9259     | 250     | 0.7726        | -              | -          | -          | -                    | -                        |
| 0.9630     | 260     | 0.7731        | -              | -          | -          | -                    | -                        |
| 1.0        | 270     | 0.6674        | 1.0888         | 1.0833     | 0.0372     | 0.8295               | 0.7643                   |

* The bold row denotes the saved checkpoint.

### Environmental Impact

Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon).

- **Energy Consumed**: 0.018 kWh
- **Carbon Emitted**: 0.007 kg of CO2
- **Hours Used**: 0.068 hours

### Training Hardware

- **On Cloud**: No
- **GPU Model**: 1 x NVIDIA GeForce RTX 3090
- **CPU Model**: 13th Gen Intel(R) Core(TM) i7-13700K
- **RAM Size**: 31.78 GB

### Framework Versions

- Python: 3.11.6
- Sentence Transformers: 3.1.0.dev0
- Transformers: 4.41.2
- PyTorch: 2.3.0+cu121
- Accelerate: 0.30.1
- Datasets: 2.19.1
- Tokenizers: 0.19.1

## Citation

### BibTeX

#### Sentence Transformers and SoftmaxLoss

```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```