
BGE base Financial Matryoshka

This is a sentence-transformers model finetuned from BAAI/bge-base-en-v1.5 on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: BAAI/bge-base-en-v1.5
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset:
    • json
  • Language: en
  • License: apache-2.0

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
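
Functionally, this stack tokenizes with lowercasing and 512-token truncation, takes the [CLS] token embedding as the sentence representation, and L2-normalizes it so that cosine similarity reduces to a dot product. Below is a minimal sketch of the same pipeline using transformers directly, for illustration only; loading via SentenceTransformer (shown in Usage) is the supported path.

import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("tmmazen/bge-base-financial-matryoshka")
model = AutoModel.from_pretrained("tmmazen/bge-base-financial-matryoshka")

inputs = tokenizer(["example sentence"], padding=True, truncation=True,
                   max_length=512, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# (1) Pooling: keep only the [CLS] token (pooling_mode_cls_token=True above)
cls_embedding = outputs.last_hidden_state[:, 0]
# (2) Normalize: L2-normalize so cosine similarity becomes a plain dot product
embedding = F.normalize(cls_embedding, p=2, dim=1)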

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("tmmazen/bge-base-financial-matryoshka")
# Run inference
sentences = [
    'its principal constituent is an alkaloid, tamarixin, along with traces of its aglycone, tamarixetin. the plant also contains a high level of tannin (ellagic and gallic) and quercetol (methylic ester).',
    'What are the chemical constituents of Tamarix gallica L.?',
    'How is Myrtus communis L. used in modern and traditional medicine?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
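
Because the model was trained with a Matryoshka objective (see Training Details), its embeddings can be truncated to 512, 256, 128, or 64 dimensions for cheaper storage and search, at the modest quality cost quantified under Evaluation. A minimal sketch, assuming the truncate_dim argument available in recent Sentence Transformers releases:

from sentence_transformers import SentenceTransformer

# Load so that encode() returns only the first 256 dimensions;
# 256 is one of the trained Matryoshka sizes (768, 512, 256, 128, 64)
model = SentenceTransformer("tmmazen/bge-base-financial-matryoshka", truncate_dim=256)

embeddings = model.encode([
    "What are the chemical constituents of Tamarix gallica L.?",
    "How is Myrtus communis L. used in modern and traditional medicine?",
])
print(embeddings.shape)
# (2, 256)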

Evaluation

Metrics

The five tables below report the same retrieval metrics at each Matryoshka dimensionality (768, 512, 256, 128, and 64); the dimension labels follow the dim_*_cosine_map@100 columns in the Training Logs.

Information Retrieval (dim_768)

Metric                Value
cosine_accuracy@1     0.1529
cosine_accuracy@3     0.2302
cosine_accuracy@5     0.2965
cosine_accuracy@10    0.3831
cosine_precision@1    0.1529
cosine_precision@3    0.0767
cosine_precision@5    0.0593
cosine_precision@10   0.0383
cosine_recall@1       0.1529
cosine_recall@3       0.2302
cosine_recall@5       0.2965
cosine_recall@10      0.3831
cosine_ndcg@10        0.2525
cosine_mrr@10         0.2125
cosine_map@100        0.2203

Information Retrieval (dim_512)

Metric                Value
cosine_accuracy@1     0.151
cosine_accuracy@3     0.2357
cosine_accuracy@5     0.2947
cosine_accuracy@10    0.3886
cosine_precision@1    0.151
cosine_precision@3    0.0786
cosine_precision@5    0.0589
cosine_precision@10   0.0389
cosine_recall@1       0.151
cosine_recall@3       0.2357
cosine_recall@5       0.2947
cosine_recall@10      0.3886
cosine_ndcg@10        0.2543
cosine_mrr@10         0.2132
cosine_map@100        0.2197

Information Retrieval (dim_256)

Metric                Value
cosine_accuracy@1     0.1436
cosine_accuracy@3     0.2247
cosine_accuracy@5     0.2855
cosine_accuracy@10    0.3573
cosine_precision@1    0.1436
cosine_precision@3    0.0749
cosine_precision@5    0.0571
cosine_precision@10   0.0357
cosine_recall@1       0.1436
cosine_recall@3       0.2247
cosine_recall@5       0.2855
cosine_recall@10      0.3573
cosine_ndcg@10        0.2393
cosine_mrr@10         0.2029
cosine_map@100        0.2112

Information Retrieval (dim_128)

Metric                Value
cosine_accuracy@1     0.1326
cosine_accuracy@3     0.2339
cosine_accuracy@5     0.2744
cosine_accuracy@10    0.3462
cosine_precision@1    0.1326
cosine_precision@3    0.078
cosine_precision@5    0.0549
cosine_precision@10   0.0346
cosine_recall@1       0.1326
cosine_recall@3       0.2339
cosine_recall@5       0.2744
cosine_recall@10      0.3462
cosine_ndcg@10        0.2299
cosine_mrr@10         0.194
cosine_map@100        0.2021

Information Retrieval (dim_64)

Metric                Value
cosine_accuracy@1     0.1142
cosine_accuracy@3     0.1878
cosine_accuracy@5     0.2431
cosine_accuracy@10    0.3204
cosine_precision@1    0.1142
cosine_precision@3    0.0626
cosine_precision@5    0.0486
cosine_precision@10   0.032
cosine_recall@1       0.1142
cosine_recall@3       0.1878
cosine_recall@5       0.2431
cosine_recall@10      0.3204
cosine_ndcg@10        0.2047
cosine_mrr@10         0.1691
cosine_map@100        0.1777
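
The metric names above match the output format of Sentence Transformers' InformationRetrievalEvaluator. A minimal sketch of running such an evaluation yourself; the toy queries/corpus/relevance data below are hypothetical, and the truncate_dim argument is assumed to be available as in recent releases:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("tmmazen/bge-base-financial-matryoshka")

# Hypothetical toy data: id -> text maps for queries and corpus,
# plus the set of relevant corpus ids per query id
queries = {"q1": "What are the chemical constituents of Tamarix gallica L.?"}
corpus = {"d1": "Its principal constituent is an alkaloid, tamarixin, "
                "along with traces of its aglycone, tamarixetin."}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(
    queries,
    corpus,
    relevant_docs,
    name="dim_256",
    truncate_dim=256,  # evaluate at one of the trained Matryoshka sizes
)
print(evaluator(model))  # dict of cosine_accuracy@k, ndcg@10, map@100, ...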

Training Details

Training Dataset

json

  • Dataset: json
  • Size: 4,887 training samples
  • Columns: positive and anchor
  • Approximate statistics based on the first 1000 samples:

                positive              anchor
    type        string                string
    min         3 tokens              9 tokens
    mean        102.23 tokens         20.63 tokens
    max         512 tokens            42 tokens
  • Samples:

    positive: The reported side effects of solidification were those with allergy to the plants of the asteraceae family (formerly the family of compounds: chamomile, dandelion, echinacea, armoise, etc.), to which the solidification belongs. Sometimes, solidification can cause heartburn.
    anchor:   What are the side effects and risks of overdose of the European Solidage plant?

    positive: rhumes crâniens, névralgie, problèmes respiratoires, sinusite
    anchor:   À quelles conditions ou fins Pulicaria incisa (Lam.) DC. est-il utilisé?

    positive: La mélisse (Melissa officinalis) est originaire d’Asie Mineure (Turquie et pourtour méditerranéen) où Théophraste et Hippocrate en vantaient déjà la capacité à calmer les maux de ventre. Elle doit son nom au mot grec « melissa » désignant l’abeille (la mélisse est aussi appelée « piment des abeilles »). Elle est traditionnellement utilisée pour ses propriétés apaisantes sur le système nerveux et le système digestif. Son usage a été popularisé par des préparations élaborées dans des monastères (l’Eau de Mélisse des Carmes, par exemple). Cultivée en régions tempérées, la mélisse est une plante de la famille des labiées, tout comme la menthe. Ses feuilles sont récoltées de juin à septembre, puis séchées. La poudre de mélisse est obtenue par broyage des feuilles, dont on peut aussi extraire l’huile essentielle, à usage externe. Des teintures et des extraits liquides sont également obtenus par extraction dans l’alcool. Les autres usages traditionnels de la mélisse : les décoctions de mélisse sont parfois utilisées en frictions pour soulager les migraines ou les rhumatismes, et en bains en cas de nervosité, d’agitation et de règles douloureuses.
    anchor:   Quelle est l'origine et quels sont les usages de la plante Mélisse?
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
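
In code, this configuration amounts to wrapping MultipleNegativesRankingLoss in MatryoshkaLoss, so that the in-batch-negatives ranking objective is applied to each embedding prefix with equal weight. A sketch using the Sentence Transformers loss API:

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("BAAI/bge-base-en-v1.5")

# Inner loss: ranks each anchor's positive against the other in-batch positives
inner_loss = MultipleNegativesRankingLoss(model)

# Outer loss: applies the inner loss to each truncated prefix of the embedding,
# so the first 64/128/256/512 dimensions remain useful on their own
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
)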
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • gradient_accumulation_steps: 16
  • learning_rate: 1e-05
  • num_train_epochs: 4
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.1
  • fp16: True
  • load_best_model_at_end: True
  • optim: adamw_torch_fused
  • batch_sampler: no_duplicates
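
As a sketch, these hyperparameters map onto the Sentence Transformers v3 training API roughly as follows. The dataset file, output directory, and train/eval split are hypothetical, and save_strategy="epoch" is assumed so that load_best_model_at_end can take effect:

from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

model = SentenceTransformer("BAAI/bge-base-en-v1.5")

# "pairs.json" is a hypothetical local file holding the (anchor, positive) pairs.
# Column order matters: the trainer feeds columns to the loss in order,
# and MultipleNegativesRankingLoss treats the first column as the anchor.
dataset = load_dataset("json", data_files="pairs.json", split="train")
splits = dataset.train_test_split(test_size=0.1, seed=42)  # hypothetical split

loss = MatryoshkaLoss(
    model,
    MultipleNegativesRankingLoss(model),
    matryoshka_dims=[768, 512, 256, 128, 64],
)

args = SentenceTransformerTrainingArguments(
    output_dir="bge-base-financial-matryoshka",  # hypothetical
    num_train_epochs=4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=1e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    fp16=True,
    eval_strategy="epoch",
    save_strategy="epoch",  # assumed: must match eval_strategy for load_best_model_at_end
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=splits["train"],
    eval_dataset=splits["test"],
    loss=loss,
)
trainer.train()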

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 16
  • eval_accumulation_steps: None
  • learning_rate: 1e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 4
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch    Step   Training Loss   dim_128   dim_256   dim_512   dim_64   dim_768
0.5229   10     7.9606          -         -         -         -        -
1.0458   20     4.6998          -         -         -         -        -
1.5686   30     0.3577          -         -         -         -        -
1.7778   34     -               0.1845    0.2027    0.2041    0.1558   0.2045
1.2680   40     2.4714          -         -         -         -        -
1.7908   50     4.4309          -         -         -         -        -
2.3137   60     0.7847          -         -         -         -        -
2.7843   69     -               0.2028    0.2114    0.2197    0.1779   0.2206
2.0131   70     0.1189          -         -         -         -        -
2.3268   76*    -               0.2021    0.2112    0.2197    0.1777   0.2203

The dim_* columns report cosine_map@100 at each Matryoshka dimension.
  • The row marked with * denotes the saved checkpoint; its map@100 values match the Evaluation section above.

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.1.0
  • Transformers: 4.41.2
  • PyTorch: 2.1.2+cu121
  • Accelerate: 0.33.0
  • Datasets: 2.19.1
  • Tokenizers: 0.19.1
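
To approximate this environment, the listed versions can be pinned (a sketch; the PyTorch index URL assumes a CUDA 12.1 build):

pip install sentence-transformers==3.1.0 transformers==4.41.2 accelerate==0.33.0 datasets==2.19.1 tokenizers==0.19.1
pip install torch==2.1.2 --index-url https://download.pytorch.org/whl/cu121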

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}