SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2
This is a sentence-transformers model finetuned from sentence-transformers/all-MiniLM-L6-v2. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
It has been finetuned on a range of Q&A pairs drawn from UK government policy documents.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: sentence-transformers/all-MiniLM-L6-v2
- Maximum Sequence Length: 256 tokens
- Output Dimensionality: 384 dimensions
- Similarity Function: Cosine Similarity
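These properties can also be read back from the loaded model. A minimal sketch using standard Sentence Transformers accessors (not specific to this card):

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("AndreasThinks/all-MiniLM-L6-v2_policy_doc_finetune")

# Values reported in the Model Description above
print(model.max_seq_length)                      # 256
print(model.get_sentence_embedding_dimension())  # 384
print(model.similarity_fn_name)                  # "cosine"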
Model Sources
- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sentence Transformers on Hugging Face
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
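The three modules above correspond to encoding with the BERT backbone, attention-masked mean pooling, and L2 normalisation. Below is a hedged sketch of the same pipeline written with plain transformers and torch; it assumes the repository also exposes standard transformers-format weights, as Sentence Transformers checkpoints normally do.

import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

repo = "AndreasThinks/all-MiniLM-L6-v2_policy_doc_finetune"
tokenizer = AutoTokenizer.from_pretrained(repo)
backbone = AutoModel.from_pretrained(repo)  # the BertModel listed in module (0)

sentences = ["How is the government using AI to combat fraud?"]

# (0) Transformer: tokenise and encode, truncating to max_seq_length=256
encoded = tokenizer(sentences, padding=True, truncation=True, max_length=256, return_tensors="pt")
with torch.no_grad():
    token_embeddings = backbone(**encoded).last_hidden_state  # [batch, seq_len, 384]

# (1) Pooling: attention-mask-weighted mean over token embeddings
mask = encoded["attention_mask"].unsqueeze(-1).float()
embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)

# (2) Normalize: L2-normalise so that dot product equals cosine similarity
embeddings = F.normalize(embeddings, p=2, dim=1)
print(embeddings.shape)  # torch.Size([1, 384])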
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("AndreasThinks/all-MiniLM-L6-v2_policy_doc_finetune")
# Run inference
sentences = [
'How much funding has the government committed to expand the Public Sector Fraud Authority to deploy AI in combating fraud?',
'2) Embracing the opportunities presented by making greater use of cutting-edge technology, such as AI, across the public sector. The government is:\nMore than doubling the size of i.AI, the AI incubator team, ensuring that the UK government has the in-house expertise consisting of the most talented technology professionals in the UK, who can apply their skills and expertise to appropriately seize the benefits of AI across the public sector and Civil Service.\nCommitting £34 million to expand the Public Sector Fraud Authority by deploying AI to help combat fraud across the public sector, making it easier to spot, stop and catch fraudsters thereby saving £100 million for the public purse.\nCommitting £17 million to accelerate DWP’s digital transformation, replacing paper-based processes with simplified online services, such as a new system for the Child Maintenance Service.\nCommitting £14 million for public sector research and innovation infrastructure. This includes funding to develop the next generation of health and security technologies, unlocking productivity improvements in the public and private sector alike.\n3) Strengthening preventative action to reduce demand on public services. The government is:\nCommitting an initial £105 million towards a wave of 15 new special free schools to create over 2,000 additional places for children with special educational needs and disabilities (SEND) across England. This will help more children receive a world-class education and builds on the significant levels of capital funding for SEND invested at the 2021 Spending Review. The locations of these special free schools will be announced by May 2024.\nConfirming the location of 20 Alternative Provision (AP) free schools, which will create over 1,600 additional AP places across England as part of the Spending Review 2021 commitment to invest £2.6 billion capital in high needs provision. This will support early intervention, helping improve outcomes for children requiring alternative provision, and helping them to fulfil their potential.',
'We will help build the UKDev (UK International Development) approach and brand by leveraging the UK’s comparative advantage within both the public and private sectors. We will build first and foremost on existing successful partnerships, through which we share UK models and expertise to support digital transformation in partner countries. For example, through our collaboration with the British Standards Institution (BSI) we will expand our collaboration to build the capacity of partner countries in Africa and South-East Asia (including through ASEAN) on digital standards, working with local private sector and national standards-setting bodies.\nWe will strengthen our delivery of peer learning activities in collaboration with Ofcom, exchanging experiences and sharing the UK models on spectrum management, local networks and other technical areas with telecoms regulators in partner countries, building on the positive peer-learning experience with Kenya and South Africa.\nWe will collaborate with Government Digital Service (GDS) to share know-how with partner countries on digitalisation in the public sector, building on our advisory role in GovStack[footnote 56]. We will leverage the UK experience of DPI for public or regulated services (health, transport, banking, land registries) based on the significant demand for this expertise from developing countries and riding the momentum on DPI generated by the G20 India presidency of 2023.\n 6.4 Enhancing FCDO’s digital development capability\nThe UK government will also enhance its own digital development capability to keep up with the pace of technological change, to be forward-looking and anticipate emergent benefits and risks of digital transformation. We will invest in new research on digital technologies and on their inclusive business models to build the global evidence base, share lessons learned and improve knowledge management through our portfolio of digital development and technology programmes, including the FCDO’s new Technology Centre for Expertise (Tech CoE), which will complement and support our programming portfolio.\nSince all sectors within international development are underpinned by digital technologies, we will ensure that digital development skills are mainstreamed across the FCDO. We will raise awareness and upgrade staff knowledge through new training opportunities on best practice in the complex and evolving area of digital development, through partnering with existing FCDO capability initiatives, ie the International Academy’s Development Faculty, the Cyber Network and the International Technology curriculum.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
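Because the model was finetuned on Q&A pairs, a natural follow-on is retrieval-style usage: embed a question and a set of candidate passages, then rank the passages. A minimal sketch using util.semantic_search; the passages below are illustrative placeholders, not taken from the training data.

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("AndreasThinks/all-MiniLM-L6-v2_policy_doc_finetune")

query = "How is the government using AI to combat fraud?"
passages = [
    "Committing £34 million to expand the Public Sector Fraud Authority by deploying AI to help combat fraud across the public sector.",
    "Confirming the location of 20 Alternative Provision free schools, creating over 1,600 additional places across England.",
]

query_embedding = model.encode(query, convert_to_tensor=True)
passage_embeddings = model.encode(passages, convert_to_tensor=True)

# Each hit is a dict with the passage index ('corpus_id') and its cosine score
hits = util.semantic_search(query_embedding, passage_embeddings, top_k=2)[0]
for hit in hits:
    print(f"{hit['score']:.3f}  {passages[hit['corpus_id']]}")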
Evaluation
Metrics
Semantic Similarity
- Dataset: sts-dev
- Evaluated with: EmbeddingSimilarityEvaluator
Metric | Value |
---|---|
pearson_cosine | 0.8601 |
spearman_cosine | 0.8582 |
pearson_manhattan | 0.8605 |
spearman_manhattan | 0.8572 |
pearson_euclidean | 0.8616 |
spearman_euclidean | 0.8582 |
pearson_dot | 0.8601 |
spearman_dot | 0.8582 |
pearson_max | 0.8616 |
spearman_max | 0.8582 |
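These numbers come from an EmbeddingSimilarityEvaluator run on the sts-dev split. A minimal sketch of how such an evaluation is wired up; the sentence pairs and gold scores below are hypothetical stand-ins, not the actual sts-dev data.

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("AndreasThinks/all-MiniLM-L6-v2_policy_doc_finetune")

# Hypothetical dev pairs with gold similarity scores in [0, 1]
evaluator = EmbeddingSimilarityEvaluator(
    sentences1=[
        "How much funding expands the Public Sector Fraud Authority?",
        "What new special free schools are being created?",
    ],
    sentences2=[
        "£34 million is committed to deploy AI against fraud across the public sector.",
        "An initial £105 million funds a wave of 15 new special free schools.",
    ],
    scores=[0.9, 0.8],
    name="sts-dev",
)
results = evaluator(model)
print(results)  # dict of Pearson/Spearman scores keyed by metric name, e.g. 'sts-dev_spearman_cosine'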
Training Details
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: steps
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- learning_rate: 2e-05
- num_train_epochs: 2
- warmup_ratio: 0.1
- use_mps_device: True
- batch_sampler: no_duplicates
All Hyperparameters
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: steps
- prediction_loss_only: True
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- learning_rate: 2e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 2
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.1
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: True
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: False
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- batch_sampler: no_duplicates
- multi_dataset_batch_sampler: proportional
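As an illustration of how the hyperparameters above fit together, the sketch below maps the non-default values onto the Sentence Transformers 3.x training API. The two-row dataset is a placeholder; the actual run used Q&A pairs drawn from UK government policy documents, and that data preparation is not part of this card.

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
    losses,
)
from sentence_transformers.training_args import BatchSamplers

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Placeholder (question, answer-passage) pairs standing in for the policy-document data
train_dataset = Dataset.from_dict({
    "anchor": [
        "How much funding expands the Public Sector Fraud Authority?",
        "What digital transformation funding does DWP receive?",
    ],
    "positive": [
        "Committing £34 million to expand the Public Sector Fraud Authority by deploying AI.",
        "Committing £17 million to accelerate DWP's digital transformation.",
    ],
})
eval_dataset = train_dataset  # placeholder; the real run used a held-out split plus the sts-dev evaluator

args = SentenceTransformerTrainingArguments(
    output_dir="all-MiniLM-L6-v2_policy_doc_finetune",
    eval_strategy="steps",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    learning_rate=2e-5,
    num_train_epochs=2,
    warmup_ratio=0.1,
    use_mps_device=True,  # set False on non-Apple hardware
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=losses.MultipleNegativesRankingLoss(model),  # in-batch negatives; see the citation below
)
trainer.train()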
Training Logs
Epoch | Step | Training Loss | Validation Loss | sts-dev_spearman_cosine |
---|---|---|---|---|
0.0562 | 100 | 0.3598 | 0.8263 | 0.8672 |
0.1124 | 200 | 0.1983 | 0.7948 | 0.8666 |
0.1686 | 300 | 0.2021 | 0.7623 | 0.8666 |
0.2248 | 400 | 0.1844 | 0.7510 | 0.8657 |
0.2811 | 500 | 0.1704 | 0.7575 | 0.8629 |
0.3373 | 600 | 0.1643 | 0.7348 | 0.8641 |
0.3935 | 700 | 0.1808 | 0.7293 | 0.8604 |
0.4497 | 800 | 0.1494 | 0.7232 | 0.8636 |
0.5059 | 900 | 0.1563 | 0.7161 | 0.8634 |
0.5621 | 1000 | 0.1345 | 0.7115 | 0.8643 |
0.6183 | 1100 | 0.1344 | 0.7142 | 0.8617 |
0.6745 | 1200 | 0.1584 | 0.7106 | 0.8622 |
0.7307 | 1300 | 0.1488 | 0.7130 | 0.8592 |
0.7870 | 1400 | 0.1391 | 0.7034 | 0.8635 |
0.8432 | 1500 | 0.1433 | 0.7140 | 0.8614 |
0.8994 | 1600 | 0.1393 | 0.7067 | 0.8612 |
0.9556 | 1700 | 0.1644 | 0.6950 | 0.8628 |
1.0118 | 1800 | 0.1399 | 0.7072 | 0.8594 |
1.0680 | 1900 | 0.12 | 0.7093 | 0.8594 |
1.1242 | 2000 | 0.0904 | 0.7040 | 0.8587 |
1.1804 | 2100 | 0.082 | 0.6962 | 0.8585 |
1.2366 | 2200 | 0.0715 | 0.6985 | 0.8593 |
1.2929 | 2300 | 0.0624 | 0.7233 | 0.8562 |
1.3491 | 2400 | 0.0725 | 0.7064 | 0.8581 |
1.4053 | 2500 | 0.0665 | 0.7034 | 0.8570 |
1.4615 | 2600 | 0.0616 | 0.6940 | 0.8584 |
1.5177 | 2700 | 0.0703 | 0.6886 | 0.8599 |
1.5739 | 2800 | 0.0564 | 0.6860 | 0.8603 |
1.6301 | 2900 | 0.0603 | 0.6962 | 0.8590 |
1.6863 | 3000 | 0.0729 | 0.6906 | 0.8589 |
1.7426 | 3100 | 0.0753 | 0.6946 | 0.8579 |
1.7988 | 3200 | 0.0711 | 0.6909 | 0.8582 |
1.8550 | 3300 | 0.0743 | 0.6896 | 0.8583 |
1.9112 | 3400 | 0.0693 | 0.6902 | 0.8581 |
1.9674 | 3500 | 0.0845 | 0.6904 | 0.8582 |
Framework Versions
- Python: 3.10.13
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.3.1
- Accelerate: 0.31.0
- Datasets: 2.20.0
- Tokenizers: 0.19.1
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
MultipleNegativesRankingLoss
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}