
SentenceTransformer based on BAAI/bge-base-en-v1.5

This is a sentence-transformers model finetuned from BAAI/bge-base-en-v1.5. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: BAAI/bge-base-en-v1.5
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity

Model Sources

  • Documentation: Sentence Transformers Documentation (https://sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
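
For reference, the stack above amounts to a BERT encoder, CLS-token pooling, and L2 normalization. The following is a minimal sketch of the same computation using the transformers library directly (the query string is illustrative); model.encode() in the Usage section below remains the recommended path:

import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

model_id = "kr-manish/fine-tune-embedding-bge-base-HrPolicy_vfinal"
tokenizer = AutoTokenizer.from_pretrained(model_id)
encoder = AutoModel.from_pretrained(model_id)

batch = tokenizer(
    ["What is the dinner reimbursement policy?"],  # illustrative query
    padding=True, truncation=True, max_length=512, return_tensors="pt",
)
with torch.no_grad():
    hidden = encoder(**batch).last_hidden_state  # (batch, seq_len, 768)
cls = hidden[:, 0]                               # CLS-token pooling
embedding = F.normalize(cls, p=2, dim=1)         # unit-length 768-dim vectors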

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("kr-manish/fine-tune-embedding-bge-base-HrPolicy_vfinal")
# Run inference
sentences = [
    'Priya Softweb prioritizes the health and wellness of its employees. The company strongly prohibits chewing tobacco, gutka, gum, or smoking within the office premises. Penalties and harsh decisions from management await anyone found bringing such substances into the office. Furthermore, carrying food to the desk is not permitted. Employees are encouraged to use the terrace dining facility for lunch, snacks, and dinner. Priya Softweb also emphasizes cleanliness and orderliness in the workspace. Employees are responsible for maintaining their designated work areas, keeping them clean, organized, and free from unnecessary items. Spitting gutka, gum, or tobacco in the washrooms is strictly prohibited. These policies contribute to a healthier and more pleasant work environment for everyone.',
    'Priya Softweb prioritizes the health and wellness of its employees. The company strongly prohibits chewing tobacco, gutka, gum, or smoking within the office premises. Penalties and harsh decisions from management await anyone found bringing such substances into the office. Furthermore, carrying food to the desk is not permitted. Employees are encouraged to use the terrace dining facility for lunch, snacks, and dinner. Priya Softweb also emphasizes cleanliness and orderliness in the workspace. Employees are responsible for maintaining their designated work areas, keeping them clean, organized, and free from unnecessary items. Spitting gutka, gum, or tobacco in the washrooms is strictly prohibited. These policies contribute to a healthier and more pleasant work environment for everyone.',
    "The Performance Appraisal at Priya Softweb is solely based on the employee's performance evaluation. The evaluation score is compiled by the Team Leader/Project Manager, who also gives the final rating to the team member. Detailed recommendations are provided by the TL/PM, and increment or promotion is granted accordingly. This process ensures that performance is the primary factor driving salary revisions and promotions.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
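
Because this model was trained with MatryoshkaLoss (see Training Details), its embeddings are designed to remain useful when truncated to a smaller dimensionality. As a hedged example, the truncate_dim argument (supported by the Sentence Transformers 3.0.1 release listed under Framework Versions) trades a little quality for cheaper storage and faster search:

# Optional: truncate embeddings to a smaller Matryoshka dimension
model_256 = SentenceTransformer(
    "kr-manish/fine-tune-embedding-bge-base-HrPolicy_vfinal",
    truncate_dim=256,
)
embeddings_256 = model_256.encode(sentences)
print(embeddings_256.shape)
# [3, 256]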

Evaluation

Metrics

Information Retrieval

Five evaluators were run, one per Matryoshka dimension (768, 512, 256, 128, and 64; see the dim_* columns under Training Logs). All five reported identical scores on this evaluation set, so the table is shown once. The precision values decaying as 1/k (1.0, 0.3333, 0.2, 0.1) indicate exactly one relevant document per query, always retrieved at rank 1.

Metric Value
cosine_accuracy@1 1.0
cosine_accuracy@3 1.0
cosine_accuracy@5 1.0
cosine_accuracy@10 1.0
cosine_precision@1 1.0
cosine_precision@3 0.3333
cosine_precision@5 0.2
cosine_precision@10 0.1
cosine_recall@1 1.0
cosine_recall@3 1.0
cosine_recall@5 1.0
cosine_recall@10 1.0
cosine_ndcg@10 1.0
cosine_mrr@10 1.0
cosine_map@100 1.0
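
This table follows the output format of Sentence Transformers' InformationRetrievalEvaluator. Below is a minimal sketch of how such an evaluation can be run against this model (model is the instance loaded in the Usage section); the queries, corpus, and relevance judgments are illustrative placeholders, not the actual held-out set:

from sentence_transformers.evaluation import InformationRetrievalEvaluator

# Placeholder data: one query with a single relevant document.
queries = {"q1": "What are the Marketing team's working hours?"}
corpus = {"d1": "The general timings for the Marketing team vary: BD works "
                "from 1:00 PM to 10:00 PM or 3:00 PM to 12:00 AM, while "
                "BA/SEO works from 11:00 AM to 8:00 PM."}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs,
                                          name="hrpolicy-dev")
results = evaluator(model)  # dict of accuracy@k, precision@k, recall@k,
                            # ndcg@10, mrr@10, map@100 as shown above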

Training Details

Training Dataset

Unnamed Dataset

  • Size: 160 training samples
  • Columns: positive and anchor
  • Approximate statistics based on the first 1000 samples:
    positive: string · min: 16 tokens · mean: 90.76 tokens · max: 380 tokens
    anchor: string · min: 16 tokens · mean: 90.76 tokens · max: 380 tokens
  • Samples (the positive and anchor texts are identical in each pair):
    • The general timings for the Marketing team vary: BD works from 1:00 PM to 10:00 PM or 3:00 PM to 12:00 AM, while BA/SEO works from 11:00 AM to 8:00 PM.
    • Priya Softweb acknowledges the efforts of employees who work late hours. Employees working more than 11 hours on weekdays are eligible for reimbursement of up to Rs. 250/- for their dinner expenses. However, this reimbursement is subject to approval from their Department Head. This policy recognizes the extra effort put in by employees working extended hours and provides some financial compensation for their meals.
    • While Priya Softweb allows employees to keep their cell phones during work hours for emergency purposes, excessive personal mobile phone usage and lengthy calls within the office premises are strictly prohibited. Excessive use may result in disciplinary actions. This policy aims to strike a balance between allowing accessibility for emergencies and maintaining a productive work environment free from distractions.
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • gradient_accumulation_steps: 16
  • learning_rate: 3e-05
  • num_train_epochs: 15
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.1
  • fp16: True
  • load_best_model_at_end: True
  • optim: adamw_torch_fused
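
As a sketch, these values map onto SentenceTransformerTrainingArguments as follows; output_dir is a hypothetical path, and save_strategy="epoch" is an assumption (load_best_model_at_end requires the save and eval strategies to match):

from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="bge-base-hrpolicy",  # hypothetical output path
    eval_strategy="epoch",
    save_strategy="epoch",           # assumed: must match eval_strategy
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=3e-5,
    num_train_epochs=15,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    fp16=True,
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
)

These arguments would then be passed to SentenceTransformerTrainer together with the training dataset and the MatryoshkaLoss sketched above.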

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 16
  • eval_accumulation_steps: None
  • learning_rate: 3e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 15
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch   Step   Training Loss   dim_128   dim_256   dim_512   dim_64   dim_768
(dim_* columns report cosine_map@100)
  0        0        -            1.0       1.0       1.0      1.0      1.0
  1.0      1        -            1.0       1.0       1.0      1.0      1.0
  2.0      3        -            1.0       1.0       1.0      1.0      1.0
  3.0      4        -            1.0       1.0       1.0      1.0      1.0
  4.0      6        -            1.0       1.0       1.0      1.0      1.0
  5.0      8        -            1.0       1.0       1.0      1.0      1.0
  6.0      9        -            1.0       1.0       1.0      1.0      1.0
  6.4     10      0.0767          -         -         -        -        -
  7.0     11        -            1.0       1.0       1.0      1.0      1.0
  8.0     12        -            1.0       1.0       1.0      1.0      1.0
  9.0     13        -            1.0       1.0       1.0      1.0      1.0
 10.0     15        -            1.0       1.0       1.0      1.0      1.0
  • The saved checkpoint is the best model found during training (load_best_model_at_end: True).

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.0.1
  • Transformers: 4.41.2
  • PyTorch: 2.1.2+cu121
  • Accelerate: 0.32.1
  • Datasets: 2.19.1
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning}, 
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply}, 
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}