
SentenceTransformer

This is a sentence-transformers model trained on the AbderrahmanSkiredj1/arabic_quora_duplicates_stsb_alue_holyquran_aranli_900k_anchor_positive_negative dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset:
    • AbderrahmanSkiredj1/arabic_quora_duplicates_stsb_alue_holyquran_aranli_900k_anchor_positive_negative

Model Sources

  • Documentation: https://www.sbert.net
  • Repository: https://github.com/UKPLab/sentence-transformers
  • Hugging Face: https://huggingface.co/models?library=sentence-transformers

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
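
The Pooling module above averages token embeddings (pooling_mode_mean_tokens) instead of taking the CLS token. As a rough sketch of what that step computes (illustrative code, not taken from this repository; the mean_pooling name is ours):

import torch

def mean_pooling(token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    """Masked mean over the sequence axis: (batch, seq, 768) -> (batch, 768)."""
    mask = attention_mask.unsqueeze(-1).float()    # (batch, seq, 1), 1 for real tokens
    summed = (token_embeddings * mask).sum(dim=1)  # sum embeddings of real tokens only
    counts = mask.sum(dim=1).clamp(min=1e-9)       # number of real tokens, guarded against zero
    return summed / counts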

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("AbderrahmanSkiredj1/Arabic_text_embedding_for_sts")
# Run inference
sentences = [
    'يتم إنتاج أمثلة جميلة من المينا، والسيراميك، والفخار في وفرة كبيرة، وغالبا ما تتبع موضوع سلتيكي.',  # "Beautiful examples of enamel, ceramics, and pottery are produced in great abundance, often following a Celtic theme."
    'يتم إنتاج عدد كبير من العناصر ذات المواضيع السلتية.',  # "A large number of Celtic-themed items are produced."
    'يتم إنتاج الفخار الصغير الذي له موضوع سلتيكي.',  # "Small pottery with a Celtic theme is produced."
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 768)

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])
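
A follow-on sketch: the same embeddings can back a small semantic-search index via the library's util.semantic_search helper (the corpus and query below are hypothetical examples):

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("AbderrahmanSkiredj1/Arabic_text_embedding_for_sts")
corpus = [
    'يتم إنتاج عدد كبير من العناصر ذات المواضيع السلتية.',  # "A large number of Celtic-themed items are produced."
    'الفريق يأكل البيتزا في مطعم',  # "The team is eating pizza at a restaurant."
]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)

query = 'ما الذي يُصنع بموضوع سلتيكي؟'  # hypothetical query: "What is made with a Celtic theme?"
query_embedding = model.encode(query, convert_to_tensor=True)
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=1)[0]
for hit in hits:
    print(corpus[hit['corpus_id']], hit['score'])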

Training Details

Training Dataset

AbderrahmanSkiredj1/arabic_quora_duplicates_stsb_alue_holyquran_aranli_900k_anchor_positive_negative

  • Dataset: AbderrahmanSkiredj1/arabic_quora_duplicates_stsb_alue_holyquran_aranli_900k_anchor_positive_negative
  • Size: 853,827 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
              anchor              positive            negative
    type      string              string              string
    details   min: 4 tokens       min: 4 tokens       min: 4 tokens
              mean: 14.54 tokens  mean: 10.62 tokens  mean: 10.32 tokens
              max: 91 tokens      max: 43 tokens      max: 35 tokens
  • Samples:
    Sample 1
      anchor: هل يمكنك أن تأكل نفس الشيء كل يوم وتحصل على كل التغذية التي تحتاجها؟ ("Can you eat the same thing every day and still get all the nutrition you need?")
      positive: هل الأكل نفس الشيء كل يوم صحي؟ ("Is eating the same thing every day healthy?")
      negative: ما هي القوة الخارقة التي تتمنى أن تملكها؟ ("What superpower do you wish you had?")
    Sample 2
      anchor: ثلاثة لاعبي كرة قدم، رقم 16 يرمي الكرة، رقم 71 يمنع الخصم الآخر. ("Three football players; number 16 throws the ball while number 71 blocks an opponent.")
      positive: لاعبي كرة القدم يرمون ويمنعون بعضهم البعض ("Football players throwing and blocking each other.")
      negative: الفريق يأكل البيتزا في مطعم ("The team is eating pizza at a restaurant.")
    Sample 3
      anchor: كيف تحسن مهاراتك في الكتابة؟ ("How do you improve your writing skills?")
      positive: كيف أستمر في تحسين كتابتي؟ ("How do I keep improving my writing?")
      negative: كيف يتم تحديد أرقام الضمان الاجتماعي؟ ("How are Social Security numbers assigned?")
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
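
This configuration wraps MultipleNegativesRankingLoss so the same in-batch ranking objective is applied at 768, 512, 256, 128, and 64 dimensions with equal weights. A minimal construction sketch, assuming the standard Sentence Transformers v3 loss API (the base-model name below is a placeholder, since the card does not state one):

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("some-org/some-arabic-base-model")  # placeholder base model
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
    n_dims_per_step=-1,  # -1 means every listed dimension is trained at each step
)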
    

Evaluation Dataset

AbderrahmanSkiredj1/arabic_quora_duplicates_stsb_alue_holyquran_aranli_900k_anchor_positive_negative

  • Dataset: AbderrahmanSkiredj1/arabic_quora_duplicates_stsb_alue_holyquran_aranli_900k_anchor_positive_negative
  • Size: 11,584 evaluation samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
              anchor              positive            negative
    type      string              string              string
    details   min: 3 tokens       min: 3 tokens       min: 3 tokens
              mean: 16.03 tokens  mean: 11.72 tokens  mean: 10.59 tokens
              max: 88 tokens      max: 221 tokens     max: 42 tokens
  • Samples:
    Sample 1
      anchor: ماذا سيحدث لو توقفت الأرض عن الدوران وتدور في نفس الوقت؟ ("What would happen if the Earth stopped spinning and orbiting at the same time?")
      positive: ماذا سيحدث إذا توقفت الأرض عن الدوران؟ ("What would happen if the Earth stopped spinning?")
      negative: ما هو أفضل هاتف ذكي تحت 15000؟ ("What is the best smartphone under 15,000?")
    Sample 2
      anchor: ثلاثة متفرجين بالغين وطفل واحد ينظرون إلى السماء بينما يقفون على الرصيف. ("Three adult onlookers and one child look at the sky while standing on the sidewalk.")
      positive: أربعة أشخاص ينظرون إلى السماء. ("Four people are looking at the sky.")
      negative: رجل وثلاثة أطفال يشاهدون بالونات الهيليوم تطفو أعلى في الهواء ("A man and three children watch helium balloons float higher into the air.")
    Sample 3
      anchor: ماذا تفعل الدول لمنع الحرب؟ ("What do countries do to prevent war?")
      positive: كيف يجب على الدول أن تمنع الحرب؟ ("How should countries prevent war?")
      negative: كيف يمكنني كسب المال من بدء مدونة؟ ("How can I make money by starting a blog?")
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
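
Since both training and evaluation use MatryoshkaLoss, the embeddings remain usable when truncated to any of the listed smaller dimensions. A sketch using the library's truncate_dim option (supported by the Sentence Transformers version listed under Framework Versions):

from sentence_transformers import SentenceTransformer

model = SentenceTransformer(
    "AbderrahmanSkiredj1/Arabic_text_embedding_for_sts",
    truncate_dim=256,  # any of the trained matryoshka_dims: 768, 512, 256, 128, 64
)
embeddings = model.encode(['جملة تجريبية'])  # "a test sentence"
print(embeddings.shape)
# (1, 256)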
    

Training Hyperparameters

Non-Default Hyperparameters

  • per_device_train_batch_size: 64
  • per_device_eval_batch_size: 64
  • learning_rate: 1e-06
  • num_train_epochs: 10
  • warmup_ratio: 0.1
  • fp16: True
  • batch_sampler: no_duplicates
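
These values plug directly into the trainer's argument object. A sketch, assuming the standard Sentence Transformers v3 training API rather than the author's actual script (output_dir is a placeholder):

from sentence_transformers.training_args import SentenceTransformerTrainingArguments, BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="output",  # placeholder path
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    learning_rate=1e-6,
    num_train_epochs=10,
    warmup_ratio=0.1,
    fp16=True,
    # no_duplicates keeps repeated texts out of a batch, which matters when
    # in-batch negatives are used (MultipleNegativesRankingLoss).
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)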

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • prediction_loss_only: True
  • per_device_train_batch_size: 64
  • per_device_eval_batch_size: 64
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • learning_rate: 1e-06
  • weight_decay: 0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 10
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch Step Training Loss
0.0120 40 3.1459
0.0240 80 3.2058
0.0360 120 3.0837
0.0480 160 3.1024
0.0600 200 3.015
0.0719 240 3.1311
0.0839 280 3.1101
0.0959 320 3.1288
0.1079 360 3.045
0.1199 400 3.0488
0.1319 440 3.1001
0.1439 480 3.2334
0.1559 520 3.0581
0.1679 560 2.9821
0.1799 600 3.1733
0.1918 640 3.0658
0.2038 680 3.0721
0.2158 720 3.1647
0.2278 760 3.0326
0.2398 800 3.1014
0.2518 840 2.9365
0.2638 880 3.0642
0.2758 920 2.9864
0.2878 960 3.0939
0.2998 1000 3.0676
0.3118 1040 2.9717
0.3237 1080 2.9908
0.3357 1120 2.9506
0.3477 1160 2.907
0.3597 1200 3.0451
0.3717 1240 3.0002
0.3837 1280 2.8842
0.3957 1320 3.0697
0.4077 1360 2.8967
0.4197 1400 3.0008
0.4317 1440 3.0027
0.4436 1480 2.9229
0.4556 1520 2.9539
0.4676 1560 2.9415
0.4796 1600 2.9401
0.4916 1640 2.8498
0.5036 1680 2.9646
0.5156 1720 2.9231
0.5276 1760 2.942
0.5396 1800 2.8521
0.5516 1840 2.8362
0.5635 1880 2.8497
0.5755 1920 2.8867
0.5875 1960 2.9148
0.5995 2000 2.9343
0.6115 2040 2.8537
0.6235 2080 2.7989
0.6355 2120 2.8508
0.6475 2160 2.916
0.6595 2200 2.926
0.6715 2240 2.752
0.6835 2280 2.7792
0.6954 2320 2.8381
0.7074 2360 2.7455
0.7194 2400 2.8953
0.7314 2440 2.8179
0.7434 2480 2.8471
0.7554 2520 2.7538
0.7674 2560 2.8271
0.7794 2600 2.8401
0.7914 2640 2.7402
0.8034 2680 2.6439

Framework Versions

  • Python: 3.10.14
  • Sentence Transformers: 3.0.1
  • Transformers: 4.39.3
  • PyTorch: 2.2.2+cu121
  • Accelerate: 0.29.1
  • Datasets: 2.18.0
  • Tokenizers: 0.15.2
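
To approximate this environment, the versions above can be pinned at install time (an assumption that exact pins are wanted; newer versions generally work too, and the cu121 PyTorch build may require the matching CUDA wheel index):

pip install sentence-transformers==3.0.1 transformers==4.39.3 torch==2.2.2 accelerate==0.29.1 datasets==2.18.0 tokenizers==0.15.2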

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning}, 
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply}, 
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}