---
base_model: colorfulscoop/sbert-base-ja
library_name: sentence-transformers
metrics:
  - cosine_accuracy
  - cosine_accuracy_threshold
  - cosine_f1
  - cosine_f1_threshold
  - cosine_precision
  - cosine_recall
  - cosine_ap
  - dot_accuracy
  - dot_accuracy_threshold
  - dot_f1
  - dot_f1_threshold
  - dot_precision
  - dot_recall
  - dot_ap
  - manhattan_accuracy
  - manhattan_accuracy_threshold
  - manhattan_f1
  - manhattan_f1_threshold
  - manhattan_precision
  - manhattan_recall
  - manhattan_ap
  - euclidean_accuracy
  - euclidean_accuracy_threshold
  - euclidean_f1
  - euclidean_f1_threshold
  - euclidean_precision
  - euclidean_recall
  - euclidean_ap
  - max_accuracy
  - max_accuracy_threshold
  - max_f1
  - max_f1_threshold
  - max_precision
  - max_recall
  - max_ap
pipeline_tag: sentence-similarity
tags:
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - generated_from_trainer
  - dataset_size:601
  - loss:CoSENTLoss
widget:
  - source_sentence: だれかが魔法で花をぬいぐるみに変えた
    sentences:
      - 誰かが魔法の呪文で花をぬいぐるみに変えた
      - 村長は誰?
      - どこ?
  - source_sentence: 暖炉にスカーフを置いた?
    sentences:
      - 魔法をかけられる人
      - ロウソク
      - 晩ご飯のとき
  - source_sentence: あほ
    sentences:
      - 調子はどう?
      - きらい
      - オッケー
  - source_sentence: 猫のぬいぐるみ
    sentences:
      - 赤い染みが皿にあった
      - 好きじゃないの?
      - ぬいぐるみ
  - source_sentence: リリアンはどんな呪文が使えるの?
    sentences:
      - あなたは魔法使い?
      - 姿かたちを変える魔法
      - どのくらいのサイズ?
model-index:
  - name: SentenceTransformer based on colorfulscoop/sbert-base-ja
    results:
      - task:
          type: binary-classification
          name: Binary Classification
        dataset:
          name: custom arc semantics data jp
          type: custom-arc-semantics-data-jp
        metrics:
          - type: cosine_accuracy
            value: 0.9090909090909091
            name: Cosine Accuracy
          - type: cosine_accuracy_threshold
            value: 0.4785935878753662
            name: Cosine Accuracy Threshold
          - type: cosine_f1
            value: 0.9341317365269461
            name: Cosine F1
          - type: cosine_f1_threshold
            value: 0.4785935878753662
            name: Cosine F1 Threshold
          - type: cosine_precision
            value: 0.9176470588235294
            name: Cosine Precision
          - type: cosine_recall
            value: 0.9512195121951219
            name: Cosine Recall
          - type: cosine_ap
            value: 0.9287829842425579
            name: Cosine Ap
          - type: dot_accuracy
            value: 0.9008264462809917
            name: Dot Accuracy
          - type: dot_accuracy_threshold
            value: 234.1079864501953
            name: Dot Accuracy Threshold
          - type: dot_f1
            value: 0.9302325581395349
            name: Dot F1
          - type: dot_f1_threshold
            value: 209.4735870361328
            name: Dot F1 Threshold
          - type: dot_precision
            value: 0.8888888888888888
            name: Dot Precision
          - type: dot_recall
            value: 0.975609756097561
            name: Dot Recall
          - type: dot_ap
            value: 0.9635932205663708
            name: Dot Ap
          - type: manhattan_accuracy
            value: 0.9008264462809917
            name: Manhattan Accuracy
          - type: manhattan_accuracy_threshold
            value: 558.378173828125
            name: Manhattan Accuracy Threshold
          - type: manhattan_f1
            value: 0.9302325581395349
            name: Manhattan F1
          - type: manhattan_f1_threshold
            value: 580.81640625
            name: Manhattan F1 Threshold
          - type: manhattan_precision
            value: 0.8888888888888888
            name: Manhattan Precision
          - type: manhattan_recall
            value: 0.975609756097561
            name: Manhattan Recall
          - type: manhattan_ap
            value: 0.92846470083454
            name: Manhattan Ap
          - type: euclidean_accuracy
            value: 0.9090909090909091
            name: Euclidean Accuracy
          - type: euclidean_accuracy_threshold
            value: 24.130870819091797
            name: Euclidean Accuracy Threshold
          - type: euclidean_f1
            value: 0.9341317365269461
            name: Euclidean F1
          - type: euclidean_f1_threshold
            value: 24.130870819091797
            name: Euclidean F1 Threshold
          - type: euclidean_precision
            value: 0.9176470588235294
            name: Euclidean Precision
          - type: euclidean_recall
            value: 0.9512195121951219
            name: Euclidean Recall
          - type: euclidean_ap
            value: 0.9287963056027329
            name: Euclidean Ap
          - type: max_accuracy
            value: 0.9090909090909091
            name: Max Accuracy
          - type: max_accuracy_threshold
            value: 558.378173828125
            name: Max Accuracy Threshold
          - type: max_f1
            value: 0.9341317365269461
            name: Max F1
          - type: max_f1_threshold
            value: 580.81640625
            name: Max F1 Threshold
          - type: max_precision
            value: 0.9176470588235294
            name: Max Precision
          - type: max_recall
            value: 0.975609756097561
            name: Max Recall
          - type: max_ap
            value: 0.9635932205663708
            name: Max Ap
---

SentenceTransformer based on colorfulscoop/sbert-base-ja

This is a sentence-transformers model finetuned from colorfulscoop/sbert-base-ja on the csv dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: colorfulscoop/sbert-base-ja
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset:
    • csv
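
These properties can be verified directly on a loaded model; a minimal check, using the model id from the Usage section below:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("LeoChiuu/sbert-base-ja-arc")
print(model.get_max_seq_length())                # 512
print(model.get_sentence_embedding_dimension())  # 768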

Model Sources

  • Hugging Face: https://huggingface.co/LeoChiuu/sbert-base-ja-arc

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
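
The Pooling module averages the BERT token embeddings (pooling_mode_mean_tokens: True) into a single 768-dimensional sentence vector. A minimal sketch of that operation, with a hypothetical mean_pool helper that masks out padding tokens:

import torch

def mean_pool(token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    # token_embeddings: (batch, seq_len, 768); attention_mask: (batch, seq_len)
    mask = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    summed = (token_embeddings * mask).sum(dim=1)  # sum over real tokens only
    counts = mask.sum(dim=1).clamp(min=1e-9)       # number of real tokens
    return summed / counts                         # (batch, 768)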

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("LeoChiuu/sbert-base-ja-arc")
# Run inference
sentences = [
    'リリアンはどんな呪文が使えるの?',
    '姿かたちを変える魔法',
    'どのくらいのサイズ?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 768)

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])
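
The similarity scores can also drive a binary "same meaning?" decision by thresholding. A sketch using the cosine_accuracy_threshold (~0.4786) reported in the Evaluation section below; the helper and hard-coded threshold are illustrative, not part of the library:

# Hypothetical paraphrase check built on the model loaded above
THRESHOLD = 0.4786  # cosine_accuracy_threshold from the evaluation below

def is_paraphrase(text1: str, text2: str) -> bool:
    emb = model.encode([text1, text2])
    score = model.similarity(emb[0], emb[1]).item()  # cosine similarity
    return score >= THRESHOLD

# Likely True for this near-paraphrase pair from the widget examples
print(is_paraphrase("だれかが魔法で花をぬいぐるみに変えた", "誰かが魔法の呪文で花をぬいぐるみに変えた"))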

Evaluation

Metrics

Binary Classification

  • Dataset: custom-arc-semantics-data-jp

| Metric                        | Value    |
|:------------------------------|---------:|
| cosine_accuracy               | 0.9091   |
| cosine_accuracy_threshold     | 0.4786   |
| cosine_f1                     | 0.9341   |
| cosine_f1_threshold           | 0.4786   |
| cosine_precision              | 0.9176   |
| cosine_recall                 | 0.9512   |
| cosine_ap                     | 0.9288   |
| dot_accuracy                  | 0.9008   |
| dot_accuracy_threshold        | 234.108  |
| dot_f1                        | 0.9302   |
| dot_f1_threshold              | 209.4736 |
| dot_precision                 | 0.8889   |
| dot_recall                    | 0.9756   |
| dot_ap                        | 0.9636   |
| manhattan_accuracy            | 0.9008   |
| manhattan_accuracy_threshold  | 558.3782 |
| manhattan_f1                  | 0.9302   |
| manhattan_f1_threshold        | 580.8164 |
| manhattan_precision           | 0.8889   |
| manhattan_recall              | 0.9756   |
| manhattan_ap                  | 0.9285   |
| euclidean_accuracy            | 0.9091   |
| euclidean_accuracy_threshold  | 24.1309  |
| euclidean_f1                  | 0.9341   |
| euclidean_f1_threshold        | 24.1309  |
| euclidean_precision           | 0.9176   |
| euclidean_recall              | 0.9512   |
| euclidean_ap                  | 0.9288   |
| max_accuracy                  | 0.9091   |
| max_accuracy_threshold        | 558.3782 |
| max_f1                        | 0.9341   |
| max_f1_threshold              | 580.8164 |
| max_precision                 | 0.9176   |
| max_recall                    | 0.9756   |
| max_ap                        | 0.9636   |
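
These values come from a pair-classification evaluation: each similarity/distance function is scored at its best threshold, and the max_* rows take the best function per metric. A sketch of how such numbers can be reproduced with the library's BinaryClassificationEvaluator; the pairs below are placeholders, not the actual evaluation split:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import BinaryClassificationEvaluator

model = SentenceTransformer("LeoChiuu/sbert-base-ja-arc")
evaluator = BinaryClassificationEvaluator(
    sentences1=["誰かが魔法を使った", "かわいいね"],  # placeholder pairs
    sentences2=["誰かが魔法をかけた", "ばか"],
    labels=[1, 0],  # 1 = same meaning
    name="custom-arc-semantics-data-jp",
)
results = evaluator(model)  # dict of accuracy/F1/AP per similarity function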

Training Details

Training Dataset

csv

  • Dataset: csv
  • Size: 601 training samples
  • Columns: text1, text2, and label
  • Approximate statistics based on the first 601 samples:
    |         | text1 | text2 | label |
    |:--------|:------|:------|:------|
    | type    | string | string | int |
    | details | min: 4 tokens, mean: 7.99 tokens, max: 15 tokens | min: 4 tokens, mean: 8.05 tokens, max: 14 tokens | 0: ~33.96%, 1: ~66.04% |
  • Samples:
    | text1 | text2 | label |
    |:------|:------|:------|
    | どっちがいいと思う? | どっちが欲しい? | 1 |
    | かわいいね | ばか | 0 |
    | 別のは選べないの? | なにが欲しい? | 0 |
  • Loss: CoSENTLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "pairwise_cos_sim"
    }
    

Evaluation Dataset

csv

  • Dataset: csv
  • Size: 601 evaluation samples
  • Columns: text1, text2, and label
  • Approximate statistics based on the first 601 samples:
    |         | text1 | text2 | label |
    |:--------|:------|:------|:------|
    | type    | string | string | int |
    | details | min: 4 tokens, mean: 8.26 tokens, max: 15 tokens | min: 4 tokens, mean: 7.94 tokens, max: 14 tokens | 0: ~32.23%, 1: ~67.77% |
  • Samples:
    text1 text2 label
    誰かが魔法を使った 誰かがが魔法をかけた 1
    これが花 ぬいぐるみが花 1
    夜ご飯を作る前 夜ご飯を食べる前 1
  • Loss: CoSENTLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "pairwise_cos_sim"
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • learning_rate: 2e-05
  • num_train_epochs: 13
  • warmup_ratio: 0.1
  • fp16: True
  • batch_sampler: no_duplicates
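
As a sketch, these settings map onto the Sentence Transformers v3 training API roughly as follows; the inline two-row dataset is a placeholder built from the sample rows above, not the real 601-pair csv split:

from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import CoSENTLoss
from sentence_transformers.training_args import BatchSamplers, SentenceTransformerTrainingArguments

model = SentenceTransformer("colorfulscoop/sbert-base-ja")
loss = CoSENTLoss(model)  # defaults: scale=20.0, pairwise_cos_sim (matches the loss parameters above)

train_dataset = Dataset.from_dict({  # placeholder rows
    "text1": ["どっちがいいと思う?", "かわいいね"],
    "text2": ["どっちが欲しい?", "ばか"],
    "label": [1, 0],
})

args = SentenceTransformerTrainingArguments(
    output_dir="sbert-base-ja-arc",
    eval_strategy="epoch",
    learning_rate=2e-5,
    num_train_epochs=13,
    warmup_ratio=0.1,
    fp16=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=train_dataset,  # placeholder; the card used a separate csv split
    loss=loss,
)
trainer.train()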

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 8
  • per_device_eval_batch_size: 8
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 13
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • eval_use_gather_object: False
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

| Epoch   | Step | Training Loss | Validation Loss | custom-arc-semantics-data-jp_max_ap |
|:--------|-----:|--------------:|----------------:|------------------------------------:|
| None    | 0    | -             | -               | 0.8596                               |
| 1.0167  | 61   | 2.775         | 2.0852          | 0.8927                               |
| 2.0167  | 122  | 1.213         | 1.7433          | 0.9291                               |
| 3.0167  | 183  | 0.5703        | 1.5724          | 0.9379                               |
| 4.0167  | 244  | 0.4603        | 1.6239          | 0.9432                               |
| 5.0167  | 305  | 0.3672        | 1.6444          | 0.9523                               |
| 6.0167  | 366  | 0.2947        | 1.6222          | 0.9603                               |
| 7.0167  | 427  | 0.2255        | 1.7302          | 0.9619                               |
| 8.0167  | 488  | 0.1678        | 1.7360          | 0.9633                               |
| 9.0167  | 549  | 0.1163        | 1.8029          | 0.9620                               |
| 10.0167 | 610  | 0.0706        | 1.8986          | 0.9639                               |
| 11.0167 | 671  | 0.0389        | 1.9671          | 0.9624                               |
| 12.0167 | 732  | 0.0333        | 2.0375          | 0.9636                               |
| 12.8    | 780  | 0.0618        | 1.9938          | 0.9636                               |

Framework Versions

  • Python: 3.10.14
  • Sentence Transformers: 3.1.0
  • Transformers: 4.44.2
  • PyTorch: 2.4.1+cu121
  • Accelerate: 0.34.2
  • Datasets: 2.20.0
  • Tokenizers: 0.19.1
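
To rebuild a matching environment, the versions above can be pinned directly (a sketch; nearby versions will generally work too):

pip install "sentence-transformers==3.1.0" "transformers==4.44.2" "torch==2.4.1" "accelerate==0.34.2" "datasets==2.20.0" "tokenizers==0.19.1"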

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

CoSENTLoss

@online{kexuefm-8847,
    title={CoSENT: A more efficient sentence vector scheme than Sentence-BERT},
    author={Su Jianlin},
    year={2022},
    month={Jan},
    url={https://kexue.fm/archives/8847},
}