SentenceTransformer based on sentence-transformers/all-MiniLM-L12-v2
This is a sentence-transformers model finetuned from sentence-transformers/all-MiniLM-L12-v2. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: sentence-transformers/all-MiniLM-L12-v2
- Maximum Sequence Length: 128 tokens
- Output Dimensionality: 384 dimensions
- Similarity Function: Cosine Similarity
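These properties can be checked directly from the loaded model. A minimal sanity check (using the same model ID as the Usage section below):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Trelis/all-MiniLM-L12-v2-ft-triplets-10Qs")
print(model.max_seq_length)                      # 128 -- longer inputs are truncated
print(model.get_sentence_embedding_dimension())  # 384
```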
Model Sources
- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sentence Transformers on Hugging Face
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
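The same three-stage pipeline can be assembled by hand with the `sentence_transformers.models` API, which makes the flow explicit: a BERT backbone truncated at 128 tokens, mean pooling over the 384-dimensional token embeddings, and L2 normalization (so cosine similarity reduces to a dot product). A minimal sketch; note that this reconstructs the architecture from the base checkpoint rather than loading this model's fine-tuned weights:

```python
from sentence_transformers import SentenceTransformer, models

# (0) Transformer: BertModel backbone, truncating inputs at 128 tokens
transformer = models.Transformer(
    "sentence-transformers/all-MiniLM-L12-v2", max_seq_length=128
)
# (1) Pooling: mean over the 384-dim token embeddings
pooling = models.Pooling(
    transformer.get_word_embedding_dimension(), pooling_mode="mean"
)
# (2) Normalize: unit-length vectors, so cosine similarity == dot product
normalize = models.Normalize()

model = SentenceTransformer(modules=[transformer, pooling, normalize])
```

To use the fine-tuned weights, load the Hub model directly as shown under Usage.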
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Trelis/all-MiniLM-L12-v2-ft-triplets-10Qs")
# Run inference
sentences = [
'What is the ruling if the referee causes obstruction on either an attacking or defending player, including when the ball makes contact with the referee?',
'fringement occurs in the In-Goal Area. \n16.4\tPlayers in the Defending Team may not obstruct or interfere with an attacking \nplayer.\nRuling = A Penalty to the non-offending Team at the point of the Infringement or on the \nseven (7) metre line if the Infringement occurs in the In-Goal Area. \n16.5\tShould a supporting, attacking player cause an apparent and involuntary or \naccidental Obstruction and the player in Possession ceases movement to allow \na Touch to be made, the Touch is to count.\n16.6\tIf the Referee causes Obstruction on either an attacking player or a defending \nplayer including when the ball makes contact with the Referee, play should \npause and recommence with a Rollball at the Mark where the interference \noccurred and the Touch count remains unchanged.\n17\u2002 Interchange \n17.1\tPlayers may Interchange at any time. \n17.2\tThere is no limit to the number of times a player may Interchange.\n17.3\tInterchange players must remain in their Interchange Area for the duration of \nthe match.\n17.4\tInterchanges may only occur after the player leaving the Field of Play has \nentered the Interchange Area. \n17.5\tPlayers leaving or entering the Field of Play shall not hinder or obstruct play.\nRuling = A Penalty to the non-offending Team at the point of the Infringement.\n17.6\tPlayers entering the Field of Play must take up an Onside position before \nbecoming involved in play.\nFIT Playing Rules - 5th Edition\n14\nCOPYRIGHT © Touch Football Australia 2020\nRuling = A Penalty to the non-offending Team at the point of the Infringement.\n17.7\tWhen an intercept has occurred or a line break made, players are not permitted \nto Interchange until the next Touch has been made or ball becomes Dead.\nRuling A = If a player enters the Field of Play and prevents the scoring of a Try, a Penalty Try \nwill be awarded and the offending player sent to the Sin Bin.\nRuling B = If a player enters the Field of Play but does not impede the scoring of a Try the \noffending player will be sent to the Sin Bin.\n17.8\tFollowing a Try, players may Interchange at will, without having to wait for',
' Player\nThe player who replaces another player during Interchange. There is \na maximum of eight (8) substitute players in any Team and except \nwhen interchanging, in the Sin Bin, dismissed or on the Field of Play, \nthey must remain in the Substitution Box.\nTap and Tap Penalty\nThe method of commencing the match, recommencing the match \nafter Half Time and after a Try has been scored. The Tap is also the \nmethod of recommencing play when a Penalty is awarded. The Tap \nis taken by placing the ball on the ground at or behind the Mark, \nreleasing both hands from the ball, tapping the ball gently with either \nfoot or touching the foot on the ball. The ball must not roll or move \nmore than one (1) metre in any direction and must be retrieved \ncleanly, without touching the ground again. The player may face any \ndirection and use either foot. Provided it is at the Mark, the ball does \nnot have to be lifted from the ground prior to a Tap being taken.\nTeam\nA group of players constituting one (1) side in a competition match.\nTFA\nTouch Football Australia Limited\nTouch\nAny contact between the player in Possession and a defending \nplayer. A Touch includes contact on the ball, hair or clothing and may \nbe made by a defending player or by the player in Possession.\nTouch Count\nThe progressive number of Touches that each Team has before a \nChange of Possession, from zero (0) to six (6).\nTry\nThe result of any attacking player, except the Half, placing the ball on \nor over the Team’s Attacking Try Line before being Touched.\nTry Lines\nThe lines separating the In-Goal Areas from the Field of Play. See \nAppendix 1.\nVoluntary Rollball\nThe player in Possession performs a Rollball before a Touch is made \nwith a defending player.\nWing\nThe player outside the Link player.\nWinner\nThe Team that scores the most Tries during the match.\nFIT Playing Rules - 5th Edition\n4\nCOPYRIGHT © Touch Football Australia 2020\n Rules of Play \n Mode of Play \nThe object of the game of Touch is for each Team to score Tries and to prevent the \nopposition from scoring. The ball may be passed, knocked or handed between players \nof the Attacking Team who may in turn run',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
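Because the first sentence is a question and the other two are rule-book passages, the same embeddings can also be used for retrieval: score the query against each passage and rank by cosine similarity. A short continuation of the snippet above:

```python
# Treat the first sentence as the query and the other two as candidate passages
query_embedding = embeddings[0:1]    # shape [1, 384]
passage_embeddings = embeddings[1:]  # shape [2, 384]

scores = model.similarity(query_embedding, passage_embeddings)
print(scores.shape)
# [1, 2] -- one cosine-similarity score per passage; rank passages by score
```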
Training Details
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: steps
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- learning_rate: 0.0001
- num_train_epochs: 5
- lr_scheduler_type: cosine
- warmup_ratio: 0.3
All Hyperparameters
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: steps
- prediction_loss_only: True
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- learning_rate: 0.0001
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 5
- max_steps: -1
- lr_scheduler_type: cosine
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.3
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: False
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: proportional
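For reference, a fine-tuning run with the non-default hyperparameters above would look roughly like the sketch below. This is not the exact training script: the triplet dataset is a placeholder (the real data used question/passage triplets like those in the Usage example), and the output path is illustrative.

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import TripletLoss

model = SentenceTransformer("sentence-transformers/all-MiniLM-L12-v2")

# Placeholder (anchor, positive, negative) triplets
train_dataset = Dataset.from_dict({
    "anchor":   ["What is the ruling if the referee causes obstruction?"],
    "positive": ["16.6 If the Referee causes Obstruction ... play should pause ..."],
    "negative": ["Tap and Tap Penalty: the method of commencing the match ..."],
})
eval_dataset = train_dataset  # placeholder; the real run used a held-out split

# Mirror the non-default hyperparameters listed above
args = SentenceTransformerTrainingArguments(
    output_dir="all-MiniLM-L12-v2-ft-triplets",  # illustrative path
    eval_strategy="steps",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    learning_rate=1e-4,
    num_train_epochs=5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.3,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=TripletLoss(model),  # matches the TripletLoss citation below
)
trainer.train()
```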
Training Logs
Epoch | Step | Training Loss | Validation Loss |
---|---|---|---|
0.1667 | 2 | 4.8893 | - |
0.3333 | 4 | 4.9073 | - |
0.5 | 6 | 4.8582 | - |
0.6667 | 8 | 4.8634 | 4.8319 |
0.8333 | 10 | 4.81 | - |
1.0 | 12 | 4.8214 | - |
1.1667 | 14 | 4.6917 | - |
1.3333 | 16 | 4.571 | 4.6944 |
1.5 | 18 | 4.5726 | - |
1.6667 | 20 | 4.6054 | - |
1.8333 | 22 | 4.4568 | - |
2.0 | 24 | 4.5025 | 4.5390 |
2.1667 | 26 | 4.3231 | - |
2.3333 | 28 | 4.1362 | - |
2.5 | 30 | 4.3427 | - |
2.6667 | 32 | 4.2574 | 4.4695 |
2.8333 | 34 | 4.3008 | - |
3.0 | 36 | 4.1244 | - |
3.1667 | 38 | 4.0408 | - |
3.3333 | 40 | 4.1497 | 4.3349 |
3.5 | 42 | 4.0795 | - |
3.6667 | 44 | 3.8948 | - |
3.8333 | 46 | 4.1476 | - |
4.0 | 48 | 4.0925 | 4.2929 |
4.1667 | 50 | 3.7692 | - |
4.3333 | 52 | 4.058 | - |
4.5 | 54 | 3.8418 | - |
4.6667 | 56 | 4.049 | 4.3185 |
4.8333 | 58 | 4.184 | - |
5.0 | 60 | 4.0321 | - |
Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.1.1+cu121
- Accelerate: 0.31.0
- Datasets: 2.17.1
- Tokenizers: 0.19.1
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
TripletLoss
@misc{hermans2017defense,
title={In Defense of the Triplet Loss for Person Re-Identification},
author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
year={2017},
eprint={1703.07737},
archivePrefix={arXiv},
primaryClass={cs.CV}
}