
SentenceTransformer based on BAAI/bge-large-en

This is a sentence-transformers model finetuned from BAAI/bge-large-en. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: BAAI/bge-large-en
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 1024 dimensions
  • Similarity Function: Cosine Similarity
  • Model Size: 335M parameters (F32)

Model Sources

  • Documentation: Sentence Transformers Documentation (https://sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
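
To make the three modules concrete, here is a minimal sketch (not from the original card) that reproduces the same pipeline with plain transformers: tokenize (lowercased, truncated to 512 tokens), run BertModel, take the [CLS] vector, then L2-normalize.

import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

# Equivalent of the Transformer module; AutoTokenizer picks up do_lower_case from the saved config
tokenizer = AutoTokenizer.from_pretrained("rnbokade/custom-bge")
model = AutoModel.from_pretrained("rnbokade/custom-bge")

batch = tokenizer(["example sentence"], padding=True, truncation=True,
                  max_length=512, return_tensors="pt")
with torch.no_grad():
    hidden = model(**batch).last_hidden_state  # (batch, seq_len, 1024)

cls = hidden[:, 0]                         # Pooling module: pooling_mode_cls_token=True
embeddings = F.normalize(cls, p=2, dim=1)  # Normalize module: unit-length vectors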

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("rnbokade/custom-bge")
# Run inference
sentences = [
    '3783 UC Davis (Northern Cal - Jon Sanguinetti)->Seq 18-P-3783',
    'dwg-3783s18 : 3783 Seq 18 - Drawings',
    'mat-3783s5 : 3783 Seq 5 - Material Order',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
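
Because the model L2-normalizes its outputs, cosine similarity reduces to a dot product, which makes ranking cheap. Continuing from the snippet above, a small retrieval sketch (the query and candidate strings are illustrative, patterned on the sample data later in this card):

# Rank candidate records for a query (illustrative strings)
query = "3783 UC Davis (Northern Cal - Jon Sanguinetti)->Seq 18-P-3783"
candidates = [
    "dwg-3783s18 : 3783 Seq 18 - Drawings",
    "mat-3783s5 : 3783 Seq 5 - Material Order",
]
q_emb = model.encode([query])
c_emb = model.encode(candidates)
scores = model.similarity(q_emb, c_emb)[0]  # cosine scores, shape (2,)
best = int(scores.argmax())
print(candidates[best], float(scores[best]))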

Evaluation

Metrics

Triplet

  • Evaluators: custom-bge-dev and custom-bge-test (names taken from the training logs below); both report the same final values, shown once here.

Metric              Value
cosine_accuracy     0.9838
dot_accuracy        0.0162
manhattan_accuracy  0.9838
euclidean_accuracy  0.9838
max_accuracy        0.9838

Training Details

Training Dataset

Unnamed Dataset

  • Size: 22,604 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:

              anchor              positive            negative
    type      string              string              string
    details   min: 22 tokens      min: 15 tokens      min: 6 tokens
              mean: 25.35 tokens  mean: 18.84 tokens  mean: 16.74 tokens
              max: 27 tokens      max: 24 tokens      max: 38 tokens
  • Samples (the three shown share the same anchor and positive; only the negative differs):
    anchor:    MOD 1- Metal Decking - Floor
               Stud Wall Panels
               Floor Sheathing (Megaboard) Layout of Dirtt Frame Centerlines
    positive:  EW1001-125 : Door Slabs / Frames / Hardware
    negatives: dwg-3783s16 : 3783 Seq 16 - Drawings
               mat-3783s16 : 3783 Seq 16 - Material Order
               dwg-3786s292 : 3786 Seq 292 - Drawings
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
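
As an illustration of how this dataset and loss fit together in the Sentence Transformers v3 trainer API (a sketch, not the original training script; the rows shown are stand-ins for the 22,604 real samples):

from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer("BAAI/bge-large-en")

# Columns must match the schema above: anchor / positive / negative
train_dataset = Dataset.from_dict({
    "anchor":   ["MOD 1- Metal Decking - Floor Stud Wall Panels"],  # illustrative row
    "positive": ["EW1001-125 : Door Slabs / Frames / Hardware"],
    "negative": ["dwg-3783s16 : 3783 Seq 16 - Drawings"],
})

# scale=20.0 with cosine similarity, matching the parameters listed above
loss = MultipleNegativesRankingLoss(model, scale=20.0)

trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()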
    

Evaluation Dataset

Unnamed Dataset

  • Size: 618 evaluation samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:

              anchor              positive            negative
    type      string              string              string
    details   min: 22 tokens      min: 13 tokens      min: 13 tokens
              mean: 33.18 tokens  mean: 17.48 tokens  mean: 17.48 tokens
              max: 45 tokens      max: 22 tokens      max: 22 tokens
  • Samples (the three shown share the same anchor and positive; only the negative differs):
    anchor:    23-0125 - Crispr mRNA Fume Hood Installations->Construction->QC Lab 1218 Fume Hood Install->Electrical - Fume Hood Power/Grounding Terminations - QC Lab
    positive:  26-0500-1.0 : Breakers (3P 20A)
    negatives: dwg-3786s17 : 3786 Seq 17 - Drawings
               mat-3786s17 : 3786 Seq 17 - Material Order
               09-9000-2.0 : Paint and Coatings
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • num_train_epochs: 1
  • warmup_ratio: 0.1
  • fp16: True
  • batch_sampler: no_duplicates
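
In Sentence Transformers v3, these map onto SentenceTransformerTrainingArguments roughly as follows (a sketch; output_dir is a placeholder, not taken from the card):

from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="output/custom-bge",             # placeholder path
    eval_strategy="steps",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=1,
    warmup_ratio=0.1,
    fp16=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # duplicate texts in a batch would act as false negatives for MNRL
)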

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 1
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch    Step   Training Loss  Validation Loss  custom-bge-dev_max_accuracy  custom-bge-test_max_accuracy
0        0      -              -                0.8463                       -
0.0708   100    0.5651         0.6065           0.9919                       -
0.1415   200    0.168          0.4217           0.9935                       -
0.2123   300    0.0499         0.6747           0.9951                       -
0.2831   400    0.2205         0.8112           0.9951                       -
0.3539   500    0.1167         0.7040           0.9903                       -
0.4246   600    0.0968         0.7364           0.9822                       -
0.4954   700    0.1704         0.5540           0.9968                       -
0.5662   800    0.1104         0.7266           0.9951                       -
0.6369   900    0.1698         1.1020           0.9725                       -
0.7077   1000   0.1077         0.9028           0.9790                       -
0.7785   1100   0.1667         0.8478           0.9757                       -
0.8493   1200   0.0707         0.7629           0.9887                       -
0.9200   1300   0.0299         0.8024           0.9871                       -
0.9908   1400   0.0005         0.8161           0.9838                       -
1.0      1413   -              -                -                            0.9838

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.0.1
  • Transformers: 4.42.4
  • PyTorch: 2.3.1+cu121
  • Accelerate: 0.32.1
  • Datasets: 2.21.0
  • Tokenizers: 0.19.1
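
To reproduce this environment, the versions above can be pinned (the PyTorch wheel depends on your CUDA setup, so it is left out here):

pip install sentence-transformers==3.0.1 transformers==4.42.4 accelerate==0.32.1 datasets==2.21.0 tokenizers==0.19.1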

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply}, 
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}