|
--- |
|
base_model: sentence-transformers/all-MiniLM-L12-v2 |
|
library_name: sentence-transformers |
|
metrics: |
|
- pearson_cosine |
|
- spearman_cosine |
|
- pearson_manhattan |
|
- spearman_manhattan |
|
- pearson_euclidean |
|
- spearman_euclidean |
|
- pearson_dot |
|
- spearman_dot |
|
- pearson_max |
|
- spearman_max |
|
pipeline_tag: sentence-similarity |
|
tags: |
|
- sentence-transformers |
|
- sentence-similarity |
|
- feature-extraction |
|
- generated_from_trainer |
|
- dataset_size:100000 |
|
- loss:CosineSimilarityLoss |
|
widget: |
|
- source_sentence: The church has granite statues of Jesus and the Apostles adorning |
|
its porch . |
|
sentences: |
|
- There were no statues in the church . |
|
- L' Afrique du sud et le reste de l' Afrique sont les mêmes . |
|
- Tours on foot are a great way to see LA . |
|
- source_sentence: Au Centre du réseau routier de la région , Alicante est également |
|
une base logique pour les automobilistes et pour les liaisons ferroviaires et |
|
ferroviaires . |
|
sentences: |
|
- Alicante est fréquentée par les automobilistes et les touristes . |
|
- Les examinateurs ont passé sept mois à étudier leurs conclusions . |
|
- Ferries to the island depart from the central station every 2 hours . |
|
- source_sentence: Scheduled to reopen in 2002 or 2003 , the Malibu site will house |
|
only the Getty holdings in Greek and Roman antiquities , some of which date as |
|
far back as 3000 b.c. |
|
sentences: |
|
- C' est impossible d' avoir des billets pour les enregistrements télévisés . |
|
- The Getty holdings were taken hold of thanks to the researchers ' effort . |
|
- After the first of may ends the peak season for ferries . |
|
- source_sentence: Une nouvelle recherche relie ces bactéries parodontale aux maladies |
|
cardiaques , au diabète , aux bébés à faible poids de naissance , et à d' autres |
|
saletés que vous attendez des bactéries qui se déchaînent dans le sang . |
|
sentences: |
|
- Le prix des actions de Caterpillar a baissé en 1991 quand ils ont fait grève . |
|
- Ils agissent comme chaque année est la même . |
|
- La recherche indique qu' il n' y a pas de lien entre les bactéries parodontale |
|
et les maladies cardiaques ou le diabète . |
|
- source_sentence: L' ancien n' est pas une classification juridique qui entraîne |
|
une perte automatique de ces droits . |
|
sentences: |
|
- Some degree of uncertainty is inherent in free-market systems . |
|
- Les villes grecques d' Anatolie ont été exclues de l' appartenance à la Confédération |
|
Delian . |
|
- Ils voulaient plaider pour les personnes âgées . |
|
model-index: |
|
- name: SentenceTransformer based on sentence-transformers/all-MiniLM-L12-v2 |
|
results: |
|
- task: |
|
type: semantic-similarity |
|
name: Semantic Similarity |
|
dataset: |
|
name: snli dev |
|
type: snli-dev |
|
metrics: |
|
- type: pearson_cosine |
|
value: 0.35421287329686374 |
|
name: Pearson Cosine |
|
- type: spearman_cosine |
|
value: 0.3592670991851331 |
|
name: Spearman Cosine |
|
- type: pearson_manhattan |
|
value: 0.34936411192844985 |
|
name: Pearson Manhattan |
|
- type: spearman_manhattan |
|
value: 0.3583327923327215 |
|
name: Spearman Manhattan |
|
- type: pearson_euclidean |
|
value: 0.34982920048695176 |
|
name: Pearson Euclidean |
|
- type: spearman_euclidean |
|
value: 0.35926709915022625 |
|
name: Spearman Euclidean |
|
- type: pearson_dot |
|
value: 0.3542128787197555 |
|
name: Pearson Dot |
|
- type: spearman_dot |
|
value: 0.35926727022169175 |
|
name: Spearman Dot |
|
- type: pearson_max |
|
value: 0.3542128787197555 |
|
name: Pearson Max |
|
- type: spearman_max |
|
value: 0.35926727022169175 |
|
name: Spearman Max |
|
--- |
|
|
|
# SentenceTransformer based on sentence-transformers/all-MiniLM-L12-v2 |
|
|
|
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L12-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. |
|
|
|
## Model Details |
|
|
|
### Model Description |
|
- **Model Type:** Sentence Transformer |
|
- **Base model:** [sentence-transformers/all-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L12-v2) <!-- at revision 30ce63ae64e71b9199b3d2eae9de99f64a26eedc --> |
|
- **Maximum Sequence Length:** 128 tokens |
|
- **Output Dimensionality:** 384 dimensions
|
- **Similarity Function:** Cosine Similarity |
|
<!-- - **Training Dataset:** Unknown --> |
|
<!-- - **Language:** Unknown --> |
|
<!-- - **License:** Unknown --> |
|
|
|
### Model Sources |
|
|
|
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net) |
|
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) |
|
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) |
|
|
|
### Full Model Architecture |
|
|
|
``` |
|
SentenceTransformer( |
|
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel |
|
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) |
|
(2): Normalize() |
|
) |
|
``` |
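
The sequence length and embedding dimensionality listed above can be checked programmatically. A minimal sketch:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Nessrine9/finetuned-snli-MiniLM-L12-v2-100k-en-fr")

# Verify the properties reported in the Model Description
print(model.get_max_seq_length())                # 128
print(model.get_sentence_embedding_dimension())  # 384
```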
|
|
|
## Usage |
|
|
|
### Direct Usage (Sentence Transformers) |
|
|
|
First install the Sentence Transformers library: |
|
|
|
```bash |
|
pip install -U sentence-transformers |
|
``` |
|
|
|
Then you can load this model and run inference. |
|
```python |
|
from sentence_transformers import SentenceTransformer |
|
|
|
# Download from the 🤗 Hub |
|
model = SentenceTransformer("Nessrine9/finetuned-snli-MiniLM-L12-v2-100k-en-fr") |
|
# Run inference |
|
sentences = [ |
|
"L' ancien n' est pas une classification juridique qui entraîne une perte automatique de ces droits .", |
|
'Ils voulaient plaider pour les personnes âgées .', |
|
"Les villes grecques d' Anatolie ont été exclues de l' appartenance à la Confédération Delian .", |
|
] |
|
embeddings = model.encode(sentences) |
|
print(embeddings.shape) |
|
# [3, 384] |
|
|
|
# Get the similarity scores for the embeddings |
|
similarities = model.similarity(embeddings, embeddings) |
|
print(similarities.shape) |
|
# [3, 3] |
|
``` |
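
Because the architecture ends with a `Normalize()` module, the embeddings are unit-length, so dot-product similarity coincides with cosine similarity (which is why the `pearson_dot` and `pearson_cosine` values in the Evaluation section agree to several decimals). A quick check, reusing `embeddings` from the snippet above:

```python
import numpy as np

# The final Normalize() module L2-normalizes the embeddings,
# so every row should have norm ~1.0
print(np.linalg.norm(embeddings, axis=1))  # ~[1. 1. 1.]
```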
|
|
|
<!-- |
|
### Direct Usage (Transformers) |
|
|
|
<details><summary>Click to see the direct usage in Transformers</summary> |
|
|
|
</details> |
|
--> |
|
|
|
<!-- |
|
### Downstream Usage (Sentence Transformers) |
|
|
|
You can finetune this model on your own dataset. |
|
|
|
<details><summary>Click to expand</summary> |
|
|
|
</details> |
|
--> |
|
|
|
<!-- |
|
### Out-of-Scope Use |
|
|
|
*List how the model may foreseeably be misused and address what users ought not to do with the model.* |
|
--> |
|
|
|
## Evaluation |
|
|
|
### Metrics |
|
|
|
#### Semantic Similarity |
|
* Dataset: `snli-dev` |
|
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) |
|
|
|
| Metric | Value | |
|
|:-------------------|:-----------| |
|
| pearson_cosine | 0.3542 | |
|
| spearman_cosine | 0.3593 | |
|
| pearson_manhattan | 0.3494 | |
|
| spearman_manhattan | 0.3583 | |
|
| pearson_euclidean | 0.3498 | |
|
| spearman_euclidean | 0.3593 | |
|
| pearson_dot | 0.3542 | |
|
| spearman_dot | 0.3593 | |
|
| pearson_max | 0.3542 | |
|
| **spearman_max** | **0.3593** | |
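
To run the same evaluation on your own pairs, you can instantiate the evaluator directly. A minimal sketch; the sentence pairs and gold scores below are illustrative placeholders, not the actual `snli-dev` split:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("Nessrine9/finetuned-snli-MiniLM-L12-v2-100k-en-fr")

# Placeholder pairs with gold similarity scores in [0, 1]
evaluator = EmbeddingSimilarityEvaluator(
    sentences1=["A man is eating food .", "Un homme joue de la guitare ."],
    sentences2=["Someone is having a meal .", "Une femme lit un livre ."],
    scores=[1.0, 0.0],
    name="snli-dev",
)
results = evaluator(model)
print(results)  # Pearson/Spearman correlations per similarity function
```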
|
|
|
<!-- |
|
## Bias, Risks and Limitations |
|
|
|
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* |
|
--> |
|
|
|
<!-- |
|
### Recommendations |
|
|
|
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* |
|
--> |
|
|
|
## Training Details |
|
|
|
### Training Dataset |
|
|
|
#### Unnamed Dataset |
|
|
|
|
|
* Size: 100,000 training samples |
|
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code> |
|
* Approximate statistics based on the first 1000 samples: |
|
| | sentence_0 | sentence_1 | label | |
|
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:--------------------------------------------------------------| |
|
| type | string | string | float | |
|
| details | <ul><li>min: 5 tokens</li><li>mean: 34.31 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 18.24 tokens</li><li>max: 51 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.5</li><li>max: 1.0</li></ul> | |
|
* Samples: |
|
| sentence_0 | sentence_1 | label | |
|
|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------|:-----------------| |
|
| <code>We 're off ! "</code> | <code>We 're not headed off .</code> | <code>1.0</code> | |
|
| <code>Il y en a eu un ici récemment qui me vient à l' esprit que c' est à propos d' une femme que c' est ridicule je veux dire que c' est presque euh ce serait drôle si ce n' était pas si triste je veux dire cette femme cette femme est sortie et a engagé quelqu' un à</code> | <code>Cette femme a engagé quelqu' un récemment pour le faire et s' est fait prendre immédiatement .</code> | <code>0.5</code> | |
|
| <code>Gentilello a précisé qu' il n' avait pas critiqué le processus d' examen par les pairs , mais que les panels qui examinent les interventions en matière d' alcool dans l' eds devraient inclure des représentants de la médecine d' urgence .</code> | <code>Gentilello S' est ensuite battu avec un psychiatre sur le parking .</code> | <code>0.5</code> | |
|
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters: |
|
```json |
|
{ |
|
"loss_fct": "torch.nn.modules.loss.MSELoss" |
|
} |
|
``` |
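
For reference, this corresponds to constructing the loss as below; a minimal sketch assuming `model` is the SentenceTransformer being finetuned:

```python
from sentence_transformers import SentenceTransformer, losses

model = SentenceTransformer("sentence-transformers/all-MiniLM-L12-v2")

# CosineSimilarityLoss regresses the cosine similarity of each
# sentence pair onto its float label, using MSELoss by default
loss = losses.CosineSimilarityLoss(model)
```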
|
|
|
### Training Hyperparameters |
|
#### Non-Default Hyperparameters |
|
|
|
- `eval_strategy`: steps |
|
- `per_device_train_batch_size`: 16 |
|
- `per_device_eval_batch_size`: 16 |
|
- `num_train_epochs`: 4 |
|
- `fp16`: True |
|
- `multi_dataset_batch_sampler`: round_robin |
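
These non-default values map directly onto `SentenceTransformerTrainingArguments`. A minimal sketch of the corresponding trainer setup; the single-row dataset is a placeholder standing in for the 100,000-pair training set described above, and is reused as the eval split only to keep the example self-contained:

```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer, losses
from sentence_transformers.training_args import SentenceTransformerTrainingArguments

model = SentenceTransformer("sentence-transformers/all-MiniLM-L12-v2")

# Placeholder dataset with the sentence_0 / sentence_1 / label columns
train_dataset = Dataset.from_dict({
    "sentence_0": ["We 're off ! \""],
    "sentence_1": ["We 're not headed off ."],
    "label": [1.0],
})

args = SentenceTransformerTrainingArguments(
    output_dir="output",
    eval_strategy="steps",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=4,
    fp16=True,
    multi_dataset_batch_sampler="round_robin",
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=train_dataset,  # placeholder; use a held-out split in practice
    loss=losses.CosineSimilarityLoss(model),
)
trainer.train()
```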
|
|
|
#### All Hyperparameters |
|
<details><summary>Click to expand</summary> |
|
|
|
- `overwrite_output_dir`: False |
|
- `do_predict`: False |
|
- `eval_strategy`: steps |
|
- `prediction_loss_only`: True |
|
- `per_device_train_batch_size`: 16 |
|
- `per_device_eval_batch_size`: 16 |
|
- `per_gpu_train_batch_size`: None |
|
- `per_gpu_eval_batch_size`: None |
|
- `gradient_accumulation_steps`: 1 |
|
- `eval_accumulation_steps`: None |
|
- `torch_empty_cache_steps`: None |
|
- `learning_rate`: 5e-05 |
|
- `weight_decay`: 0.0 |
|
- `adam_beta1`: 0.9 |
|
- `adam_beta2`: 0.999 |
|
- `adam_epsilon`: 1e-08 |
|
- `max_grad_norm`: 1 |
|
- `num_train_epochs`: 4 |
|
- `max_steps`: -1 |
|
- `lr_scheduler_type`: linear |
|
- `lr_scheduler_kwargs`: {} |
|
- `warmup_ratio`: 0.0 |
|
- `warmup_steps`: 0 |
|
- `log_level`: passive |
|
- `log_level_replica`: warning |
|
- `log_on_each_node`: True |
|
- `logging_nan_inf_filter`: True |
|
- `save_safetensors`: True |
|
- `save_on_each_node`: False |
|
- `save_only_model`: False |
|
- `restore_callback_states_from_checkpoint`: False |
|
- `no_cuda`: False |
|
- `use_cpu`: False |
|
- `use_mps_device`: False |
|
- `seed`: 42 |
|
- `data_seed`: None |
|
- `jit_mode_eval`: False |
|
- `use_ipex`: False |
|
- `bf16`: False |
|
- `fp16`: True |
|
- `fp16_opt_level`: O1 |
|
- `half_precision_backend`: auto |
|
- `bf16_full_eval`: False |
|
- `fp16_full_eval`: False |
|
- `tf32`: None |
|
- `local_rank`: 0 |
|
- `ddp_backend`: None |
|
- `tpu_num_cores`: None |
|
- `tpu_metrics_debug`: False |
|
- `debug`: [] |
|
- `dataloader_drop_last`: False |
|
- `dataloader_num_workers`: 0 |
|
- `dataloader_prefetch_factor`: None |
|
- `past_index`: -1 |
|
- `disable_tqdm`: False |
|
- `remove_unused_columns`: True |
|
- `label_names`: None |
|
- `load_best_model_at_end`: False |
|
- `ignore_data_skip`: False |
|
- `fsdp`: [] |
|
- `fsdp_min_num_params`: 0 |
|
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} |
|
- `fsdp_transformer_layer_cls_to_wrap`: None |
|
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} |
|
- `deepspeed`: None |
|
- `label_smoothing_factor`: 0.0 |
|
- `optim`: adamw_torch |
|
- `optim_args`: None |
|
- `adafactor`: False |
|
- `group_by_length`: False |
|
- `length_column_name`: length |
|
- `ddp_find_unused_parameters`: None |
|
- `ddp_bucket_cap_mb`: None |
|
- `ddp_broadcast_buffers`: False |
|
- `dataloader_pin_memory`: True |
|
- `dataloader_persistent_workers`: False |
|
- `skip_memory_metrics`: True |
|
- `use_legacy_prediction_loop`: False |
|
- `push_to_hub`: False |
|
- `resume_from_checkpoint`: None |
|
- `hub_model_id`: None |
|
- `hub_strategy`: every_save |
|
- `hub_private_repo`: False |
|
- `hub_always_push`: False |
|
- `gradient_checkpointing`: False |
|
- `gradient_checkpointing_kwargs`: None |
|
- `include_inputs_for_metrics`: False |
|
- `eval_do_concat_batches`: True |
|
- `fp16_backend`: auto |
|
- `push_to_hub_model_id`: None |
|
- `push_to_hub_organization`: None |
|
- `mp_parameters`: |
|
- `auto_find_batch_size`: False |
|
- `full_determinism`: False |
|
- `torchdynamo`: None |
|
- `ray_scope`: last |
|
- `ddp_timeout`: 1800 |
|
- `torch_compile`: False |
|
- `torch_compile_backend`: None |
|
- `torch_compile_mode`: None |
|
- `dispatch_batches`: None |
|
- `split_batches`: None |
|
- `include_tokens_per_second`: False |
|
- `include_num_input_tokens_seen`: False |
|
- `neftune_noise_alpha`: None |
|
- `optim_target_modules`: None |
|
- `batch_eval_metrics`: False |
|
- `eval_on_start`: False |
|
- `eval_use_gather_object`: False |
|
- `batch_sampler`: batch_sampler |
|
- `multi_dataset_batch_sampler`: round_robin |
|
|
|
</details> |
|
|
|
### Training Logs |
|
| Epoch | Step | Training Loss | snli-dev_spearman_max | |
|
|:------:|:-----:|:-------------:|:---------------------:| |
|
| 0.08 | 500 | 0.1948 | 0.0484 | |
|
| 0.16 | 1000 | 0.1752 | 0.1177 | |
|
| 0.24 | 1500 | 0.1727 | 0.1136 | |
|
| 0.32 | 2000 | 0.1668 | 0.2050 | |
|
| 0.4 | 2500 | 0.1673 | 0.2227 | |
|
| 0.48 | 3000 | 0.1651 | 0.1760 | |
|
| 0.56 | 3500 | 0.1619 | 0.2195 | |
|
| 0.64 | 4000 | 0.1625 | 0.2308 | |
|
| 0.72 | 4500 | 0.1563 | 0.2405 | |
|
| 0.8 | 5000 | 0.1598 | 0.2773 | |
|
| 0.88 | 5500 | 0.1589 | 0.2359 | |
|
| 0.96 | 6000 | 0.1587 | 0.2084 | |
|
| 1.0 | 6250 | - | 0.2615 | |
|
| 1.04 | 6500 | 0.1580 | 0.2958 |
|
| 1.12 | 7000 | 0.1557 | 0.2887 | |
|
| 1.2 | 7500 | 0.1544 | 0.2960 | |
|
| 1.28 | 8000 | 0.1535 | 0.2977 | |
|
| 1.36 | 8500 | 0.1559 | 0.2546 |
|
| 1.44 | 9000 | 0.1518 | 0.3201 | |
|
| 1.52 | 9500 | 0.1551 | 0.2894 | |
|
| 1.6 | 10000 | 0.1490 | 0.2981 |
|
| 1.68 | 10500 | 0.1520 | 0.3140 |
|
| 1.76 | 11000 | 0.1484 | 0.3056 | |
|
| 1.84 | 11500 | 0.1497 | 0.3051 |
|
| 1.92 | 12000 | 0.1522 | 0.2893 | |
|
| 2.0 | 12500 | 0.1503 | 0.2944 | |
|
| 2.08 | 13000 | 0.1496 | 0.3039 | |
|
| 2.16 | 13500 | 0.1462 | 0.3314 | |
|
| 2.24 | 14000 | 0.1505 | 0.2470 | |
|
| 2.32 | 14500 | 0.1457 | 0.3081 | |
|
| 2.4 | 15000 | 0.1478 | 0.3204 | |
|
| 2.48 | 15500 | 0.1464 | 0.3248 | |
|
| 2.56 | 16000 | 0.1442 | 0.3360 | |
|
| 2.64 | 16500 | 0.1437 | 0.3418 | |
|
| 2.72 | 17000 | 0.1416 | 0.3496 |
|
| 2.8 | 17500 | 0.1434 | 0.3283 | |
|
| 2.88 | 18000 | 0.1460 | 0.3246 |
|
| 2.96 | 18500 | 0.1448 | 0.3352 | |
|
| 3.0 | 18750 | - | 0.3248 | |
|
| 3.04 | 19000 | 0.1445 | 0.3394 | |
|
| 3.12 | 19500 | 0.1423 | 0.3430 | |
|
| 3.2 | 20000 | 0.1415 | 0.3410 | |
|
| 3.28 | 20500 | 0.1411 | 0.3367 |
|
| 3.36 | 21000 | 0.1445 | 0.3497 | |
|
| 3.44 | 21500 | 0.1383 | 0.3640 | |
|
| 3.52 | 22000 | 0.1408 | 0.3497 | |
|
| 3.6 | 22500 | 0.1374 | 0.3452 | |
|
| 3.68 | 23000 | 0.1401 | 0.3519 | |
|
| 3.76 | 23500 | 0.1370 | 0.3582 |
|
| 3.84 | 24000 | 0.1393 | 0.3610 | |
|
| 3.92 | 24500 | 0.1408 | 0.3575 | |
|
| 4.0 | 25000 | 0.1388 | 0.3593 | |
|
|
|
|
|
### Framework Versions |
|
- Python: 3.10.12 |
|
- Sentence Transformers: 3.2.1 |
|
- Transformers: 4.44.2 |
|
- PyTorch: 2.5.0+cu121 |
|
- Accelerate: 0.34.2 |
|
- Datasets: 3.0.2 |
|
- Tokenizers: 0.19.1 |
|
|
|
## Citation |
|
|
|
### BibTeX |
|
|
|
#### Sentence Transformers |
|
```bibtex |
|
@inproceedings{reimers-2019-sentence-bert, |
|
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", |
|
author = "Reimers, Nils and Gurevych, Iryna", |
|
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", |
|
month = "11", |
|
year = "2019", |
|
publisher = "Association for Computational Linguistics", |
|
url = "https://arxiv.org/abs/1908.10084", |
|
} |
|
``` |
|
|
|
<!-- |
|
## Glossary |
|
|
|
*Clearly define terms in order to be accessible across audiences.* |
|
--> |
|
|
|
<!-- |
|
## Model Card Authors |
|
|
|
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* |
|
--> |
|
|
|
<!-- |
|
## Model Card Contact |
|
|
|
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* |
|
--> |