metadata
base_model: BAAI/bge-base-en-v1.5
datasets: []
language:
  - en
library_name: sentence-transformers
license: apache-2.0
metrics:
  - cosine_accuracy@1
  - cosine_accuracy@3
  - cosine_accuracy@5
  - cosine_accuracy@10
  - cosine_precision@1
  - cosine_precision@3
  - cosine_precision@5
  - cosine_precision@10
  - cosine_recall@1
  - cosine_recall@3
  - cosine_recall@5
  - cosine_recall@10
  - cosine_ndcg@10
  - cosine_mrr@10
  - cosine_map@100
pipeline_tag: sentence-similarity
tags:
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - generated_from_trainer
  - dataset_size:6300
  - loss:MatryoshkaLoss
  - loss:MultipleNegativesRankingLoss
widget:
  - source_sentence: >-
      Mergers and acquisitions, joint ventures and strategic investments
      complement our internal development and enhance our partnerships to align
      with Visa’s priorities.
    sentences:
      - >-
        How much did the unbilled accounts receivable amount to as of December
        30, 2023?
      - >-
        What was the main reason for Visa to engage in mergers and acquisitions,
        joint ventures, and strategic investments?
      - What is the mission of Intuit?
  - source_sentence: >-
      Garmin’s audio brands, Fusion and JL Audio, offer premium audio products
      and accessories, including head units, speakers, amplifiers, subwoofers,
      and other audio components. These products are designed specifically for
      the marine, powersports, aftermarket automotive, home, or RV environments,
      offering premium sound quality and supporting many connectivity options
      for integrating with MFDs, smartphones, and Garmin wearables.
    sentences:
      - >-
        What type of insurance policies cover some of the defense and settlement
        costs associated with litigation mentioned?
      - >-
        What types of audio products does Garmin's Fusion and JL Audio brands
        offer?
      - >-
        What should investors consider when comparing Adjusted EBITDA across
        different companies?
  - source_sentence: >-
      Medical device products that are marketed in the European Union must
      comply with the requirements of the Medical Device Regulation (the MDR),
      which came into effect in May 2021. The MDR provides for regulatory
      oversight with respect to the design, manufacture, clinical trials,
      labeling and adverse event reporting for medical devices.
    sentences:
      - >-
        What are the requirements for medical devices to be marketed in the
        European Union under the MDR?
      - >-
        By what percentage did the pre-tax earnings increase from 2021 to 2022
        in the manufacturing sector?
      - What were the cash and cash equivalents at the end of 2023?
  - source_sentence: >-
      In March 2023, the Board of Directors sanctioned a restructuring plan
      concentrated on investment prioritization towards significant growth
      prospects and the optimization of the company's real estate assets. This
      includes substantial organizational changes such as reductions in office
      space and workforce.
    sentences:
      - >-
        How many physicians are part of the domestic Office of the Chief Medical
        Officer at DaVita as of December 31, 2023?
      - >-
        What changes in expenses did Delta Air Lines' ancillary businesses and
        refinery segment encounter in 2023 compared to 2022?
      - >-
        What are the restructuring targets of the company's Board of Directors
        as of 2023?
  - source_sentence: >-
      The quality of GM dealerships and our relationship with our dealers are
      critical to our success, now, and as we transition to our all-electric
      future, given that they maintain the primary sales and service interface
      with the end consumer of our products. In addition to the terms of our
      contracts with our dealers, we are regulated by various country and state
      franchise laws and regulations that may supersede those contractual terms
      and impose specific regulatory
    sentences:
      - >-
        How does General[39 chars] Motors ensure quality in their dealership
        network?
      - How can the public access the company's financial and legal reports?
      - >-
        Is the outcome of the investigation into Tesla's waste segregation
        practices currently determinable?
model-index:
  - name: BGE base Financial Matryoshka
    results:
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: dim 768
          type: dim_768
        metrics:
          - type: cosine_accuracy@1
            value: 0.6785714285714286
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 0.8171428571428572
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 0.8671428571428571
            name: Cosine Accuracy@5
          - type: cosine_accuracy@10
            value: 0.91
            name: Cosine Accuracy@10
          - type: cosine_precision@1
            value: 0.6785714285714286
            name: Cosine Precision@1
          - type: cosine_precision@3
            value: 0.2723809523809524
            name: Cosine Precision@3
          - type: cosine_precision@5
            value: 0.1734285714285714
            name: Cosine Precision@5
          - type: cosine_precision@10
            value: 0.09099999999999998
            name: Cosine Precision@10
          - type: cosine_recall@1
            value: 0.6785714285714286
            name: Cosine Recall@1
          - type: cosine_recall@3
            value: 0.8171428571428572
            name: Cosine Recall@3
          - type: cosine_recall@5
            value: 0.8671428571428571
            name: Cosine Recall@5
          - type: cosine_recall@10
            value: 0.91
            name: Cosine Recall@10
          - type: cosine_ndcg@10
            value: 0.7949318413045188
            name: Cosine Ndcg@10
          - type: cosine_mrr@10
            value: 0.7579920634920636
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.761780829563342
            name: Cosine Map@100
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: dim 512
          type: dim_512
        metrics:
          - type: cosine_accuracy@1
            value: 0.6714285714285714
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 0.8171428571428572
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 0.8642857142857143
            name: Cosine Accuracy@5
          - type: cosine_accuracy@10
            value: 0.9028571428571428
            name: Cosine Accuracy@10
          - type: cosine_precision@1
            value: 0.6714285714285714
            name: Cosine Precision@1
          - type: cosine_precision@3
            value: 0.2723809523809524
            name: Cosine Precision@3
          - type: cosine_precision@5
            value: 0.17285714285714285
            name: Cosine Precision@5
          - type: cosine_precision@10
            value: 0.09028571428571427
            name: Cosine Precision@10
          - type: cosine_recall@1
            value: 0.6714285714285714
            name: Cosine Recall@1
          - type: cosine_recall@3
            value: 0.8171428571428572
            name: Cosine Recall@3
          - type: cosine_recall@5
            value: 0.8642857142857143
            name: Cosine Recall@5
          - type: cosine_recall@10
            value: 0.9028571428571428
            name: Cosine Recall@10
          - type: cosine_ndcg@10
            value: 0.7892232861723367
            name: Cosine Ndcg@10
          - type: cosine_mrr@10
            value: 0.7524767573696142
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.7566816338836445
            name: Cosine Map@100
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: dim 256
          type: dim_256
        metrics:
          - type: cosine_accuracy@1
            value: 0.6671428571428571
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 0.8142857142857143
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 0.8657142857142858
            name: Cosine Accuracy@5
          - type: cosine_accuracy@10
            value: 0.9028571428571428
            name: Cosine Accuracy@10
          - type: cosine_precision@1
            value: 0.6671428571428571
            name: Cosine Precision@1
          - type: cosine_precision@3
            value: 0.2714285714285714
            name: Cosine Precision@3
          - type: cosine_precision@5
            value: 0.17314285714285713
            name: Cosine Precision@5
          - type: cosine_precision@10
            value: 0.09028571428571427
            name: Cosine Precision@10
          - type: cosine_recall@1
            value: 0.6671428571428571
            name: Cosine Recall@1
          - type: cosine_recall@3
            value: 0.8142857142857143
            name: Cosine Recall@3
          - type: cosine_recall@5
            value: 0.8657142857142858
            name: Cosine Recall@5
          - type: cosine_recall@10
            value: 0.9028571428571428
            name: Cosine Recall@10
          - type: cosine_ndcg@10
            value: 0.786715703830093
            name: Cosine Ndcg@10
          - type: cosine_mrr@10
            value: 0.749225056689342
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.7532686203724872
            name: Cosine Map@100
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: dim 128
          type: dim_128
        metrics:
          - type: cosine_accuracy@1
            value: 0.6542857142857142
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 0.8071428571428572
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 0.8428571428571429
            name: Cosine Accuracy@5
          - type: cosine_accuracy@10
            value: 0.9
            name: Cosine Accuracy@10
          - type: cosine_precision@1
            value: 0.6542857142857142
            name: Cosine Precision@1
          - type: cosine_precision@3
            value: 0.26904761904761904
            name: Cosine Precision@3
          - type: cosine_precision@5
            value: 0.16857142857142854
            name: Cosine Precision@5
          - type: cosine_precision@10
            value: 0.09
            name: Cosine Precision@10
          - type: cosine_recall@1
            value: 0.6542857142857142
            name: Cosine Recall@1
          - type: cosine_recall@3
            value: 0.8071428571428572
            name: Cosine Recall@3
          - type: cosine_recall@5
            value: 0.8428571428571429
            name: Cosine Recall@5
          - type: cosine_recall@10
            value: 0.9
            name: Cosine Recall@10
          - type: cosine_ndcg@10
            value: 0.7763972670750712
            name: Cosine Ndcg@10
          - type: cosine_mrr@10
            value: 0.7369308390022671
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.7407041984815913
            name: Cosine Map@100
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: dim 64
          type: dim_64
        metrics:
          - type: cosine_accuracy@1
            value: 0.62
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 0.7671428571428571
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 0.8171428571428572
            name: Cosine Accuracy@5
          - type: cosine_accuracy@10
            value: 0.8785714285714286
            name: Cosine Accuracy@10
          - type: cosine_precision@1
            value: 0.62
            name: Cosine Precision@1
          - type: cosine_precision@3
            value: 0.2557142857142857
            name: Cosine Precision@3
          - type: cosine_precision@5
            value: 0.16342857142857142
            name: Cosine Precision@5
          - type: cosine_precision@10
            value: 0.08785714285714284
            name: Cosine Precision@10
          - type: cosine_recall@1
            value: 0.62
            name: Cosine Recall@1
          - type: cosine_recall@3
            value: 0.7671428571428571
            name: Cosine Recall@3
          - type: cosine_recall@5
            value: 0.8171428571428572
            name: Cosine Recall@5
          - type: cosine_recall@10
            value: 0.8785714285714286
            name: Cosine Recall@10
          - type: cosine_ndcg@10
            value: 0.7482796784963641
            name: Cosine Ndcg@10
          - type: cosine_mrr@10
            value: 0.7067517006802718
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.7110201251131743
            name: Cosine Map@100

BGE base Financial Matryoshka

This is a sentence-transformers model finetuned from BAAI/bge-base-en-v1.5. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: BAAI/bge-base-en-v1.5
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
  • Language: en
  • License: apache-2.0

Model Sources

  • Documentation: Sentence Transformers Documentation (https://sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers models on the Hugging Face Hub (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
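
The same Transformer → CLS-pooling → L2-normalize stack can also be assembled by hand from the sentence-transformers building blocks. The following is only an illustrative sketch; in practice the published checkpoint is loaded directly, as shown under Usage below.

from sentence_transformers import SentenceTransformer, models

# Rebuild the module stack listed above: BERT encoder, CLS-token pooling, L2 normalization
word_embedding = models.Transformer("BAAI/bge-base-en-v1.5", max_seq_length=512)
pooling = models.Pooling(word_embedding.get_word_embedding_dimension(), pooling_mode="cls")
model = SentenceTransformer(modules=[word_embedding, pooling, models.Normalize()])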

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("uhoffmann/bge-base-financial-matryoshka")
# Run inference
sentences = [
    'The quality of GM dealerships and our relationship with our dealers are critical to our success, now, and as we transition to our all-electric future, given that they maintain the primary sales and service interface with the end consumer of our products. In addition to the terms of our contracts with our dealers, we are regulated by various country and state franchise laws and regulations that may supersede those contractual terms and impose specific regulatory',
    'How does General[39 chars] Motors ensure quality in their dealership network?',
    "How can the public access the company's financial and legal reports?",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
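
Because this is a Matryoshka model, the embeddings can also be truncated to the smaller dimensionalities evaluated below (512, 256, 128 or 64) for cheaper storage and search. A minimal sketch using the truncate_dim argument of SentenceTransformer, which is supported by the sentence-transformers version listed under Framework Versions:

from sentence_transformers import SentenceTransformer

# Keep only the first 256 dimensions of every embedding
model = SentenceTransformer("uhoffmann/bge-base-financial-matryoshka", truncate_dim=256)

embeddings = model.encode([
    "What was the total equity of Visa Inc. as of September 30, 2023?",
    "The total equity balance of Visa Inc. as of September 30, 2023 was $38,733 million.",
])
print(embeddings.shape)
# (2, 256)

# Cosine similarity works on the truncated vectors as well
print(model.similarity(embeddings, embeddings))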

Evaluation

Metrics

Information Retrieval (dataset: dim_768)

Metric Value
cosine_accuracy@1 0.6786
cosine_accuracy@3 0.8171
cosine_accuracy@5 0.8671
cosine_accuracy@10 0.91
cosine_precision@1 0.6786
cosine_precision@3 0.2724
cosine_precision@5 0.1734
cosine_precision@10 0.091
cosine_recall@1 0.6786
cosine_recall@3 0.8171
cosine_recall@5 0.8671
cosine_recall@10 0.91
cosine_ndcg@10 0.7949
cosine_mrr@10 0.758
cosine_map@100 0.7618

Information Retrieval (dataset: dim_512)

Metric Value
cosine_accuracy@1 0.6714
cosine_accuracy@3 0.8171
cosine_accuracy@5 0.8643
cosine_accuracy@10 0.9029
cosine_precision@1 0.6714
cosine_precision@3 0.2724
cosine_precision@5 0.1729
cosine_precision@10 0.0903
cosine_recall@1 0.6714
cosine_recall@3 0.8171
cosine_recall@5 0.8643
cosine_recall@10 0.9029
cosine_ndcg@10 0.7892
cosine_mrr@10 0.7525
cosine_map@100 0.7567

Information Retrieval (dataset: dim_256)

Metric Value
cosine_accuracy@1 0.6671
cosine_accuracy@3 0.8143
cosine_accuracy@5 0.8657
cosine_accuracy@10 0.9029
cosine_precision@1 0.6671
cosine_precision@3 0.2714
cosine_precision@5 0.1731
cosine_precision@10 0.0903
cosine_recall@1 0.6671
cosine_recall@3 0.8143
cosine_recall@5 0.8657
cosine_recall@10 0.9029
cosine_ndcg@10 0.7867
cosine_mrr@10 0.7492
cosine_map@100 0.7533

Information Retrieval (dataset: dim_128)

Metric Value
cosine_accuracy@1 0.6543
cosine_accuracy@3 0.8071
cosine_accuracy@5 0.8429
cosine_accuracy@10 0.9
cosine_precision@1 0.6543
cosine_precision@3 0.269
cosine_precision@5 0.1686
cosine_precision@10 0.09
cosine_recall@1 0.6543
cosine_recall@3 0.8071
cosine_recall@5 0.8429
cosine_recall@10 0.9
cosine_ndcg@10 0.7764
cosine_mrr@10 0.7369
cosine_map@100 0.7407

Information Retrieval (dataset: dim_64)

Metric Value
cosine_accuracy@1 0.62
cosine_accuracy@3 0.7671
cosine_accuracy@5 0.8171
cosine_accuracy@10 0.8786
cosine_precision@1 0.62
cosine_precision@3 0.2557
cosine_precision@5 0.1634
cosine_precision@10 0.0879
cosine_recall@1 0.62
cosine_recall@3 0.7671
cosine_recall@5 0.8171
cosine_recall@10 0.8786
cosine_ndcg@10 0.7483
cosine_mrr@10 0.7068
cosine_map@100 0.711
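
The per-dimension figures above are the kind of metrics produced by sentence-transformers' InformationRetrievalEvaluator run once per Matryoshka truncation dimension. A minimal sketch with a toy query and corpus (the actual evaluation split is not included in this card):

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator, SequentialEvaluator

model = SentenceTransformer("uhoffmann/bge-base-financial-matryoshka")

# Toy evaluation data; the real evaluation queries and corpus are not part of this card
queries = {"q1": "What was Walmart Inc.'s total revenue in the fiscal year ended January 31, 2023?"}
corpus = {"d1": "Walmart Inc. reported total revenues of $611,289 million for the fiscal year ended January 31, 2023."}
relevant_docs = {"q1": {"d1"}}

# One evaluator per Matryoshka dimension, matching the tables above
evaluators = [
    InformationRetrievalEvaluator(
        queries=queries,
        corpus=corpus,
        relevant_docs=relevant_docs,
        truncate_dim=dim,
        name=f"dim_{dim}",
    )
    for dim in (768, 512, 256, 128, 64)
]
results = SequentialEvaluator(evaluators)(model)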

Training Details

Training Dataset

Unnamed Dataset

  • Size: 6,300 training samples
  • Columns: positive and anchor
  • Approximate statistics based on the first 1000 samples:
    • positive: string; min: 2 tokens, mean: 44.88 tokens, max: 272 tokens
    • anchor: string; min: 2 tokens, mean: 20.58 tokens, max: 45 tokens
  • Samples:
    • positive: Walmart Inc. reported total revenues of $611,289 million for the fiscal year ended January 31, 2023.
      anchor: What was Walmart Inc.'s total revenue in the fiscal year ended January 31, 2023?
    • positive: The total equity balance of Visa Inc. as of September 30, 2023 was $38,733 million.
      anchor: What was the total equity of Visa Inc. as of September 30, 2023?
    • positive: Nike incorporates new technologies in its product design by using market intelligence and research, which helps its design teams identify opportunities to leverage these technologies in existing categories to respond to consumer preferences.
      anchor: How does Nike incorporate new technologies in its product design?
  • Loss: MatryoshkaLoss with these parameters (a construction sketch follows the parameter block):
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
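
A hedged sketch of how a loss with exactly these parameters is constructed in sentence-transformers (the actual training script is not part of this card):

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("BAAI/bge-base-en-v1.5")

# MultipleNegativesRankingLoss over (anchor, positive) pairs, applied at every
# Matryoshka dimension; matryoshka_weights defaults to 1 per dimension, as above
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(model, inner_loss, matryoshka_dims=[768, 512, 256, 128, 64])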
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • gradient_accumulation_steps: 16
  • learning_rate: 2e-05
  • num_train_epochs: 4
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.1
  • bf16: True
  • tf32: True
  • load_best_model_at_end: True
  • optim: adamw_torch_fused
  • batch_sampler: no_duplicates
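
A minimal sketch of how these non-default values map onto SentenceTransformerTrainingArguments, assuming a bf16/tf32-capable GPU as the settings imply. The output_dir is a placeholder, and save_strategy="epoch" is added here (it is not listed above) because load_best_model_at_end requires the evaluation and save strategies to match:

from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="bge-base-financial-matryoshka",  # placeholder output directory
    eval_strategy="epoch",
    save_strategy="epoch",  # not in the card; required for load_best_model_at_end with epoch evaluation
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    num_train_epochs=4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    tf32=True,
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)

Together with the training dataset and the MatryoshkaLoss described above, these arguments would be passed to a SentenceTransformerTrainer.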

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 16
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 4
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: True
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • eval_use_gather_object: False
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch Step Training Loss dim_128_cosine_map@100 dim_256_cosine_map@100 dim_512_cosine_map@100 dim_64_cosine_map@100 dim_768_cosine_map@100
0.8122 10 1.5521 - - - - -
0.9746 12 - 0.7178 0.7352 0.7404 0.6833 0.7422
1.6244 20 0.6753 - - - - -
1.9492 24 - 0.7340 0.7452 0.7524 0.7057 0.7561
2.4365 30 0.4611 - - - - -
2.9239 36 - 0.7392 0.7509 0.7560 0.7103 0.7588
3.2487 40 0.3763 - - - - -
3.8985 48 - 0.7407 0.7533 0.7567 0.711 0.7618
  • The final row (epoch 3.8985, step 48) corresponds to the saved checkpoint; its map@100 values match the metrics reported above.

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.0.1
  • Transformers: 4.44.2
  • PyTorch: 2.4.0+cu121
  • Accelerate: 0.32.1
  • Datasets: 2.21.0
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning}, 
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply}, 
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}