SentenceTransformer based on intfloat/e5-base-unsupervised

This is a sentence-transformers model finetuned from intfloat/e5-base-unsupervised. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: intfloat/e5-base-unsupervised
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
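
Because the pooling module above averages token embeddings, the sentence vector can also be reproduced with plain transformers. The following is a minimal sketch, assuming the checkpoint also loads with AutoModel (typical for Sentence Transformers repositories); the official usage path is shown in the Usage section below.

import torch
from transformers import AutoModel, AutoTokenizer

# Minimal sketch of what the Transformer + mean-pooling modules above compute.
# Assumes the repository loads with plain transformers; not the official usage path.
tokenizer = AutoTokenizer.from_pretrained("bobox/E5-base-unsupervised-TSDAE")
encoder = AutoModel.from_pretrained("bobox/E5-base-unsupervised-TSDAE")

batch = tokenizer(
    ["how to copy multiple cells in excel and paste?"],
    padding=True, truncation=True, max_length=512, return_tensors="pt",
)
with torch.no_grad():
    token_embeddings = encoder(**batch).last_hidden_state  # (batch, seq_len, 768)

# Mean pooling: average the token embeddings, ignoring padding via the attention mask
mask = batch["attention_mask"].unsqueeze(-1).float()
sentence_embedding = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
print(sentence_embedding.shape)  # torch.Size([1, 768])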

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("bobox/E5-base-unsupervised-TSDAE")
# Run inference
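# Note: the first sentence below is a deletion-noised (TSDAE-style) variant of the
# second; the dropped words are intentional, not a transcription error.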
sentences = [
    "should eat diarrhea should solid as soon able you're bottle your have, try to them as . at home until 48 last spreading others.",
    "how long should you wait to eat after having diarrhea? You should eat solid food as soon as you feel able to. If you're breastfeeding or bottle feeding your baby and they have diarrhoea, you should try to feed them as normal. Stay at home until at least 48 hours after the last episode of diarrhoea to prevent spreading any infection to others.",
    'how to copy multiple cells in excel and paste?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
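
The card lists paraphrase mining among the intended uses; a small sketch using sentence_transformers.util follows. The corpus below is made up purely for illustration.

from sentence_transformers import SentenceTransformer
from sentence_transformers.util import paraphrase_mining

model = SentenceTransformer("bobox/E5-base-unsupervised-TSDAE")

# Illustrative corpus; any list of strings works
corpus = [
    "How do I copy several cells in Excel and paste them?",
    "how to copy multiple cells in excel and paste?",
    "5 ml equals how many ounces?",
]

# Returns [score, i, j] triples sorted by cosine similarity, highest first
pairs = paraphrase_mining(model, corpus)
for score, i, j in pairs:
    print(f"{score:.3f}  {corpus[i]}  |  {corpus[j]}")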

Evaluation

Metrics

Semantic Similarity

Metric               Value
pearson_cosine       0.7707
spearman_cosine      0.7584
pearson_manhattan    0.759
spearman_manhattan   0.7475
pearson_euclidean    0.7605
spearman_euclidean   0.7489
pearson_dot          0.5774
spearman_dot         0.56
pearson_max          0.7707
spearman_max         0.7584
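
These figures are the kind produced by an EmbeddingSimilarityEvaluator on an STS-style test split (the training logs below report them as sts-test_spearman_cosine). A sketch of recomputing such metrics, assuming the sentence-transformers/stsb test split is a suitable stand-in for the evaluation data actually used:

from datasets import load_dataset
from sentence_transformers import SentenceTransformer, SimilarityFunction
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("bobox/E5-base-unsupervised-TSDAE")

# Assumption: the metrics above come from an STS-benchmark-style test split;
# sentence-transformers/stsb is used here purely for illustration.
stsb_test = load_dataset("sentence-transformers/stsb", split="test")

evaluator = EmbeddingSimilarityEvaluator(
    sentences1=stsb_test["sentence1"],
    sentences2=stsb_test["sentence2"],
    scores=stsb_test["score"],
    main_similarity=SimilarityFunction.COSINE,
    name="sts-test",
)
print(evaluator(model))  # dict with keys like "sts-test_spearman_cosine"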

Training Details

Training Dataset

Unnamed Dataset

  • Size: 300,000 training samples
  • Columns: sentence_0 and sentence_1
  • Approximate statistics based on the first 1000 samples:
    • sentence_0: string; min: 3 tokens, mean: 20.46 tokens, max: 69 tokens
    • sentence_1: string; min: 8 tokens, mean: 47.85 tokens, max: 132 tokens
  • Samples:
    • sentence_0: matter An unit of retains all subatomic neutrons Hydrogen (one one
      sentence_1: neutrons are particles of matter atoms? An atom is the smallest unit of matter that retains all of the chemical properties of an element. ... Most atoms contain all three of these types of subatomic particles—protons, electrons, and neutrons. Hydrogen (H) is an exception because it typically has one proton and one electron, but no neutrons.
    • sentence_0: equals how
      sentence_1: 5 ml equals how many ounces?
    • sentence_0: "A Country Boy School is poor is forced to its boy to school following official, ignoring mean a jail
      sentence_1: "A Country Boy Quits School" by Lao Hsiang is an endearing social satire. It is about a poor Chinese family which is forced to send its boy to school following an official proclamation, ignoring which would mean a jail term.
  • Loss: DenoisingAutoEncoderLoss
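
In each pair above, sentence_0 is a noised copy of sentence_1 with words deleted, and DenoisingAutoEncoderLoss (the TSDAE objective) trains the encoder to produce an embedding from which a tied decoder can reconstruct the original. A minimal sketch of how such pairs and the loss are typically set up; the exact noise function used to build this dataset is an assumption.

import random
from datasets import Dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import DenoisingAutoEncoderLoss

model = SentenceTransformer("intfloat/e5-base-unsupervised")

# TSDAE-style deletion noise: drop a fraction of the words. This only illustrates
# how (sentence_0, sentence_1) pairs like the samples above can be produced.
def delete_words(text: str, del_ratio: float = 0.6) -> str:
    words = text.split()
    kept = [w for w in words if random.random() > del_ratio]
    return " ".join(kept) if kept else words[0]

originals = ["5 ml equals how many ounces?"]
train_dataset = Dataset.from_dict({
    "sentence_0": [delete_words(s) for s in originals],  # noised input
    "sentence_1": originals,                             # reconstruction target
})

# The encoder embeds sentence_0; a decoder initialized from (and tied to) the same
# checkpoint must reconstruct sentence_1 from that single pooled embedding.
loss = DenoisingAutoEncoderLoss(
    model,
    decoder_name_or_path="intfloat/e5-base-unsupervised",
    tie_encoder_decoder=True,
)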

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 14
  • per_device_eval_batch_size: 14
  • num_train_epochs: 1
  • multi_dataset_batch_sampler: round_robin
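
A sketch of a training run wired up with these non-default values; the output directory and the tiny inline dataset are placeholders, and this is not necessarily the exact script used to produce the model.

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import DenoisingAutoEncoderLoss

model = SentenceTransformer("intfloat/e5-base-unsupervised")

# Placeholder dataset: the real run used ~300,000 (sentence_0, sentence_1) pairs.
train_dataset = Dataset.from_dict({
    "sentence_0": ["equals how"],                    # noised
    "sentence_1": ["5 ml equals how many ounces?"],  # original
})
loss = DenoisingAutoEncoderLoss(
    model, decoder_name_or_path="intfloat/e5-base-unsupervised", tie_encoder_decoder=True
)

args = SentenceTransformerTrainingArguments(
    output_dir="e5-base-unsupervised-tsdae",  # placeholder path
    num_train_epochs=1,
    per_device_train_batch_size=14,
    per_device_eval_batch_size=14,
    multi_dataset_batch_sampler="round_robin",
    # The original run also set eval_strategy="steps"; that additionally requires
    # an eval_dataset or evaluator, omitted here to keep the sketch minimal.
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()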

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 14
  • per_device_eval_batch_size: 14
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 1
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin

Training Logs

Epoch Step Training Loss sts-test_spearman_cosine
0 0 - 0.7211
0.0233 500 6.3144 -
0.0467 1000 5.3949 -
0.0500 1072 - 0.6820
0.0700 1500 5.0531 -
0.0933 2000 4.8547 -
0.1001 2144 - 0.7126
0.1167 2500 4.7058 -
0.1400 3000 4.5771 -
0.1501 3216 - 0.7290
0.1633 3500 4.4591 -
0.1867 4000 4.3502 -
0.2001 4288 - 0.7351
0.2100 4500 4.3071 -
0.2333 5000 4.2042 -
0.2501 5360 - 0.7464
0.2567 5500 4.1657 -
0.2800 6000 4.1111 -
0.3002 6432 - 0.7492
0.3033 6500 4.045 -
0.3267 7000 4.017 -
0.3500 7500 3.9651 -
0.3502 7504 - 0.7554
0.3733 8000 3.9199 -
0.3967 8500 3.8691 -
0.4002 8576 - 0.7517
0.4200 9000 3.8563 -
0.4433 9500 3.815 -
0.4502 9648 - 0.7540
0.4667 10000 3.7892 -
0.4900 10500 3.7543 -
0.5003 10720 - 0.7585
0.5133 11000 3.7391 -
0.5367 11500 3.7442 -
0.5503 11792 - 0.7587
0.5600 12000 3.7187 -
0.5833 12500 3.6855 -
0.6003 12864 - 0.7572
0.6067 13000 3.6751 -
0.6300 13500 3.6373 -
0.6503 13936 - 0.7574
0.6533 14000 3.6292 -
0.6767 14500 3.6277 -
0.7000 15000 3.6084 -
0.7004 15008 - 0.7575
0.7233 15500 3.6103 -
0.7467 16000 3.5953 -
0.7504 16080 - 0.7576
0.7700 16500 3.6232 -
0.7933 17000 3.5741 -
0.8004 17152 - 0.7583
0.8167 17500 3.5639 -
0.8400 18000 3.5667 -
0.8504 18224 - 0.7589
0.8633 18500 3.5598 -
0.8866 19000 3.5636 -
0.9005 19296 - 0.7584
0.9100 19500 3.5536 -
0.9333 20000 3.5529 -
0.9505 20368 - 0.7584
0.9566 20500 3.5485 -
0.9800 21000 3.5503 -
1.0 21429 - 0.7584

Framework Versions

  • Python: 3.10.13
  • Sentence Transformers: 3.0.1
  • Transformers: 4.41.2
  • PyTorch: 2.1.2
  • Accelerate: 0.31.0
  • Datasets: 2.19.2
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

DenoisingAutoEncoderLoss

@inproceedings{wang-2021-TSDAE,
    title = "TSDAE: Using Transformer-based Sequential Denoising Auto-Encoderfor Unsupervised Sentence Embedding Learning",
    author = "Wang, Kexin and Reimers, Nils and Gurevych, Iryna", 
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
    month = nov,
    year = "2021",
    address = "Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    pages = "671--688",
    url = "https://arxiv.org/abs/2104.06979",
}