
SentenceTransformer based on Mihaiii/Venusaur

This is a sentence-transformers model finetuned from Mihaiii/Venusaur on the Mihaiii/qa-assistant-2 dataset. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: Mihaiii/Venusaur
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 384 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset: Mihaiii/qa-assistant-2
  • Language: en

Model Sources

  • Documentation: Sentence Transformers Documentation (https://www.sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
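
The pooling layer above produces sentence embeddings as the attention-masked mean of the token embeddings (pooling_mode_mean_tokens). Below is a minimal sketch of equivalent pooling with the transformers library, assuming the checkpoint loads as a plain BertModel; the model id mirrors the one used in the usage section:

import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("Mihaiii/test33")
model = AutoModel.from_pretrained("Mihaiii/test33")

inputs = tokenizer(
    ["What are the social structures of ants?"],
    padding=True, truncation=True, max_length=512, return_tensors="pt",
)
with torch.no_grad():
    token_embeddings = model(**inputs).last_hidden_state  # (batch, seq_len, 384)

# Mean pooling: average the token embeddings, ignoring padding positions.
mask = inputs["attention_mask"].unsqueeze(-1).float()
sentence_embedding = (token_embeddings * mask).sum(1) / mask.sum(1).clamp(min=1e-9)
print(sentence_embedding.shape)  # torch.Size([1, 384])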

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Mihaiii/test33")
# Run inference
sentences = [
    'What are the social structures of ants?',
    'The social hierarchy of ants is a complex system that ensures the survival and efficiency of the colony.',
    'In a parliamentary system, the executive branch derives its legitimacy from and is accountable to the legislature; the executive and legislative branches are thus interconnected.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
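
Since semantic search is listed among the intended uses, here is a minimal retrieval sketch with the library's util.semantic_search helper; the corpus sentences are illustrative placeholders:

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("Mihaiii/test33")

# Illustrative corpus; replace with your own documents.
corpus = [
    "The social hierarchy of ants is a complex system.",
    "Mars is the fourth planet from the Sun.",
]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(
    "How do ant colonies organize themselves?", convert_to_tensor=True
)

# Rank corpus entries by cosine similarity to the query.
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)
print(hits[0])  # e.g. [{'corpus_id': 0, 'score': ...}, {'corpus_id': 1, 'score': ...}]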

Evaluation

Metrics

Semantic Similarity

Metric              Value
pearson_cosine      0.8261
spearman_cosine     0.8277
pearson_manhattan   0.8200
spearman_manhattan  0.8226
pearson_euclidean   0.8215
spearman_euclidean  0.8237
pearson_dot         0.8037
spearman_dot        0.8082
pearson_max         0.8261
spearman_max        0.8277
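
These figures come from the library's embedding-similarity evaluation on the held-out split. Below is a minimal sketch of how such an evaluation can be reproduced with EmbeddingSimilarityEvaluator; the split name is an assumption, check the dataset card for the actual one:

from datasets import load_dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("Mihaiii/test33")
eval_ds = load_dataset("Mihaiii/qa-assistant-2", split="dev")  # assumed split name

evaluator = EmbeddingSimilarityEvaluator(
    sentences1=eval_ds["question"],
    sentences2=eval_ds["answer"],
    scores=eval_ds["score"],
    name="sts-dev",
)
print(evaluator(model))  # dict of Pearson/Spearman correlations per similarity function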

Training Details

Training Dataset

Mihaiii/qa-assistant-2

  • Dataset: Mihaiii/qa-assistant-2 at 9650e69
  • Size: 16,011 training samples
  • Columns: question, answer, and score
  • Approximate statistics based on the first 1000 samples:
    Column    Type    Min        Mean          Max
    question  string  6 tokens   12.73 tokens  27 tokens
    answer    string  10 tokens  22.42 tokens  65 tokens
    score     float   0.02       0.53          1.0
  • Samples:
    • question: Can you describe the process of robot path planning?
      answer: Robots can be programmed to perform a variety of tasks, from simple repetitive actions to complex decision-making processes.
      score: 0.28
    • question: Can humans live on Mars?
      answer: Mars is the fourth planet from the Sun and is often called the Red Planet due to its reddish appearance.
      score: 0.16
    • question: What are the key elements of composition in abstract art?
      answer: The history of abstract art dates back to the early 20th century, with pioneers like Wassily Kandinsky and Piet Mondrian.
      score: 0.36
  • Loss: CosineSimilarityLoss with these parameters:
    {
        "loss_fct": "torch.nn.modules.loss.MSELoss"
    }
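
For intuition, CosineSimilarityLoss with an MSELoss loss_fct regresses the cosine similarity of each (question, answer) embedding pair onto the gold score. A minimal sketch of the objective (not the library's internals):

import torch
import torch.nn.functional as F

def cosine_similarity_mse(emb_a, emb_b, gold_scores):
    # Predicted similarity in [-1, 1], compared to the gold score via MSE,
    # mirroring CosineSimilarityLoss(loss_fct=torch.nn.MSELoss()).
    pred = F.cosine_similarity(emb_a, emb_b, dim=1)
    return F.mse_loss(pred, gold_scores)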
    

Evaluation Dataset

Mihaiii/qa-assistant-2

  • Dataset: Mihaiii/qa-assistant-2 at 9650e69
  • Size: 3,879 evaluation samples
  • Columns: question, answer, and score
  • Approximate statistics based on the first 1000 samples:
    Column    Type    Min        Mean          Max
    question  string  7 tokens   12.71 tokens  31 tokens
    answer    string  10 tokens  22.63 tokens  51 tokens
    score     float   0.02       0.53          1.0
  • Samples:
    • question: What is the concept of social stratification?
      answer: The study of social stratification involves examining the inequalities and divisions within a society.
      score: 0.68
    • question: How does J.K. Rowling develop the character of Hermione Granger throughout the 'Harry Potter' series?
      answer: The 'Harry Potter' series consists of seven books, starting with 'Harry Potter and the Philosopher's Stone' and ending with 'Harry Potter and the Deathly Hallows'.
      score: 0.22
    • question: What is the parliamentary system and how does it function?
      answer: In a parliamentary system, the government can be dissolved by a vote of no confidence, which can lead to new elections.
      score: 0.68
  • Loss: CosineSimilarityLoss with these parameters:
    {
        "loss_fct": "torch.nn.modules.loss.MSELoss"
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • num_train_epochs: 4
  • warmup_ratio: 0.1
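
A minimal sketch assembling these non-default values with the sentence-transformers 3.x trainer API; the output directory and split names are illustrative assumptions:

from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import CosineSimilarityLoss

model = SentenceTransformer("Mihaiii/Venusaur")
dataset = load_dataset("Mihaiii/qa-assistant-2")

args = SentenceTransformerTrainingArguments(
    output_dir="venusaur-qa-assistant-2",  # illustrative path
    eval_strategy="steps",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=4,
    warmup_ratio=0.1,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["dev"],  # assumed split name
    loss=CosineSimilarityLoss(model),
)
trainer.train()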

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 4
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch  Step  Training Loss  Validation Loss  sts-dev_spearman_cosine
0.0999 100 0.0593 0.0540 0.5848
0.1998 200 0.05 0.0463 0.6618
0.2997 300 0.044 0.0418 0.7102
0.3996 400 0.0413 0.0385 0.7390
0.4995 500 0.0377 0.0349 0.7707
0.5994 600 0.034 0.0333 0.7770
0.6993 700 0.0344 0.0321 0.7879
0.7992 800 0.0324 0.0311 0.7927
0.8991 900 0.0334 0.0302 0.8005
0.9990 1000 0.0304 0.0305 0.8023
1.0989 1100 0.0261 0.0306 0.8072
1.1988 1200 0.0267 0.0292 0.8104
1.2987 1300 0.0244 0.0287 0.8110
1.3986 1400 0.0272 0.0294 0.8098
1.4985 1500 0.0241 0.0281 0.8135
1.5984 1600 0.0253 0.0282 0.8143
1.6983 1700 0.0245 0.0276 0.8169
1.7982 1800 0.025 0.0274 0.8182
1.8981 1900 0.0236 0.0273 0.8193
1.9980 2000 0.0236 0.0269 0.8218
2.0979 2100 0.0215 0.0278 0.8213
2.1978 2200 0.0216 0.0269 0.8226
2.2977 2300 0.0205 0.0276 0.8207
2.3976 2400 0.0181 0.0273 0.8202
2.4975 2500 0.0197 0.0267 0.8228
2.5974 2600 0.02 0.0267 0.8238
2.6973 2700 0.0203 0.0263 0.8258
2.7972 2800 0.0184 0.0263 0.8264
2.8971 2900 0.0201 0.0269 0.8243
2.9970 3000 0.0196 0.0263 0.8251
3.0969 3100 0.0168 0.0264 0.8250
3.1968 3200 0.0176 0.0263 0.8267
3.2967 3300 0.0168 0.0263 0.8270
3.3966 3400 0.017 0.0260 0.8277
3.4965 3500 0.0164 0.0261 0.8273
3.5964 3600 0.0172 0.0259 0.8280
3.6963 3700 0.0168 0.0260 0.8274
3.7962 3800 0.0176 0.0262 0.8279
3.8961 3900 0.0182 0.0261 0.8278
3.9960 4000 0.0174 0.0260 0.8277

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.0.1
  • Transformers: 4.41.2
  • PyTorch: 2.0.1+cu118
  • Accelerate: 0.31.0
  • Datasets: 2.20.0
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}