
SentenceTransformer based on embaas/sentence-transformers-multilingual-e5-large

This is a sentence-transformers model finetuned from embaas/sentence-transformers-multilingual-e5-large. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: embaas/sentence-transformers-multilingual-e5-large
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 1024 dimensions
  • Similarity Function: Cosine Similarity

Model Sources

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel 
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
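The pipeline above can be mirrored in plain NumPy to show what each module does to the token embeddings: a masked mean over the sequence axis (`pooling_mode_mean_tokens`), then L2 normalization. This is a minimal sketch with random stand-in token embeddings, not real XLM-R outputs:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the Transformer output: (batch, seq_len, hidden) token embeddings
token_embeddings = rng.normal(size=(2, 5, 1024))
# Attention mask: 1 for real tokens, 0 for padding
attention_mask = np.array([[1, 1, 1, 0, 0],
                           [1, 1, 1, 1, 1]])

# (1) Pooling: mean over non-padding tokens only
mask = attention_mask[:, :, None]               # (batch, seq_len, 1)
summed = (token_embeddings * mask).sum(axis=1)  # (batch, hidden)
counts = mask.sum(axis=1)                       # (batch, 1)
pooled = summed / counts

# (2) Normalize: scale each sentence vector to unit L2 norm
embeddings = pooled / np.linalg.norm(pooled, axis=1, keepdims=True)

print(embeddings.shape)                    # (2, 1024)
print(np.linalg.norm(embeddings, axis=1))  # [1. 1.]
```

Because of the final Normalize module, every embedding the model returns has unit length.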

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("jerryyun/kicon_con_e5large_15")
# Run inference
sentences = [
    '압축력이 지배적인 부재의 설계 강도 해석에서 중립축이 단면 밖에 위치할 때, 어떤 상태를 극한 상태로 간주합니까?',
    '(4)휨과축력이동시에작용하는부재의설계강도해석에서 압축력이지배적이어서중립축이단면밖에놓인경우에는압축연단의변형률이한계변형률에도달할때를극한상태로간주한다 .이때의한계변형률은압축력과휨모멘트의작용을고려한변형률분포로결정한다 .(5)프리스트레스를가하지않은휨부재는설계휨강도해석에서구한중립축의깊이가다음값의최대허용중립축깊이이하이어야한다 .\ue0e7max\ue047\ue06d\ue10e\ue0e7\ue0f9\ue048\ue0b1\ue0f7\ue10e\ue0fd\ue10e\ue0e7\ue0f9\ue0e8  (3-1)3.2설계축강도와최소계수휨모멘트(1)압축부재의설계축강도는다음과같이결정하여야한다 .단,KDS 142020(4.1.1(9))에따라횡방향구속효과를고려하는경우에는이값을초과할수있다 .①프리스트레스를 가하지않은압축부재에서축방향철근의설계항복변형률 \ue0b1\ue0f7\ue10e\ue0fd가KDS142020(4.1.1)에규정된콘크리트의변형률\ue10e\ue0e7\ue0f3이하인경우다음식에따라설계축강도를결정하여야한다 .',
    '콘크리트구조철근상세설계기준 KDS142050:2022KDS140000구조설계기준 10⑤보또는브래킷이기둥의 4면에연결되어있는경우에가장낮은보또는브래킷의최하단수평철근아래에서 75mm이내에서띠철근배치를끝낼수있다 .단,이때보의폭은해당기둥면폭의 1/2이상이어야한다 .⑥앵커볼트가기둥상단이나주각상단에위치한경우에앵커볼트는기둥이나주각의적어도 4개이상의수직철근을감싸고있는횡방향철근에의해둘러싸여져야한다 .횡방향철근은기둥상단이나주각상단에서 125mm이내에배치하고적어도2개이상의 D13철근이나 3개이상의 D10철근으로구성되어야한다 .4.5기둥및접합부철근의특별배치상세4.5.1옵셋굽힘철근(1)기둥연결부에서단면치수가변하는경우다음규정에따라옵셋굽힘철근을배치하여야한다 .(2)옵셋굽힘철근의굽힘부에서기울기는 1/6을초과할수없다 .(3)옵셋굽힘철근의굽힘부를벗어난상⋅하부철근은기둥축에평행하여야한다 .',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
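Since the model's last module is Normalize, the default cosine similarity used by `model.similarity` reduces to a plain matrix product. The sketch below uses random unit vectors as stand-ins for `model.encode(...)` output:

```python
import numpy as np

# Stand-ins for model.encode(...) output: the final Normalize module
# guarantees unit-length rows, so we normalize random vectors the same way.
rng = np.random.default_rng(0)
emb = rng.normal(size=(3, 1024))
emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)

# For unit vectors, cosine similarity is just a matrix product
similarities = emb @ emb.T

print(similarities.shape)                       # (3, 3)
print(np.allclose(np.diag(similarities), 1.0))  # True
```

The diagonal is 1.0 (each sentence compared with itself) and the matrix is symmetric.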

Training Details

Training Dataset

Unnamed Dataset

  • Size: 1,966 training samples
  • Columns: sentence_0 and sentence_1
  • Approximate statistics based on the first 1000 samples:
    • sentence_0: string; min 12, mean 27.91, max 62 tokens
    • sentence_1: string; min 11, mean 246.43, max 419 tokens
  • Samples:
    • sentence_0: KDS 14 00 00 설계기준은 어느 나라의 기준인가요?
      sentence_1: KDS 14 00 00설계기준 Korean Design StandardKDS 14 00 00 : 2022구조설계기준2022년1월11일개정http://www.kcsc.re.kr
    • sentence_0: 2022년 1월 11일에 개정된 KDS 14 00 00 구조설계기준은 어디에서 확인할 수 있나요?
      sentence_1: KDS 14 00 00설계기준 Korean Design StandardKDS 14 00 00 : 2022구조설계기준2022년1월11일개정http://www.kcsc.re.kr
    • sentence_0: KDS 14 20 00 콘크리트 구조 설계에서 사용되는 강도설계법의 기본 원칙은 무엇인가요?
      sentence_1: 구조설계기준체계KDS 14 20 00콘크리트구조설계(강도설계법 ) KDS 14 20 01 콘크리트구조설계(강도설계법 ) 일반사항 KDS 14 20 10 콘크리트구조해석과설계원칙 KDS 14 20 20 콘크리트구조휨및압축설계기준 KDS 14 20 22 콘크리트구조전단및비틀림설계기준 KDS 14 20 24 콘크리트구조스트럿 -타이모델기준 KDS 14 20 26 콘크리트구조피로설계기준 KDS 14 20 30 콘크리트구조사용성설계기준 KDS 14 20 40 콘크리트구조내구성설계기준 KDS 14 20 50 콘크리트구조철근상세설계기준 KDS 14 20 52 콘크리트구조정착및이음설계기준 KDS 14 20 54 콘크리트용앵커설계기준 KDS 14 20 60 프리스트레스트콘크리트구조설계기준 KDS 14 20 62 프리캐스트콘크리트구조설계기준 KDS 14 20 64 구조용무근콘크리트설계기준 KDS 14 20 66 합성콘크리트설계기준 KDS 14 20 70 콘크리트슬래브와기초판설계기준 KDS 14 20 72 콘크리트벽체설계기준 KDS 14 20 74 기타콘크리트구조설계기준 KDS 14 20 80 콘크리트내진설계기준 KDS 14 20 90 기존콘크리트구조물의
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
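MultipleNegativesRankingLoss treats, for each sentence_0 in a batch, its paired sentence_1 as the positive and every other sentence_1 in the batch as a negative: scaled cosine similarities go through a softmax cross-entropy whose target is the diagonal. A NumPy sketch with the card's scale=20.0 and random stand-in embeddings (not the real model's):

```python
import numpy as np

def log_softmax(x):
    """Row-wise log-softmax, numerically stable."""
    x = x - x.max(axis=1, keepdims=True)
    return x - np.log(np.exp(x).sum(axis=1, keepdims=True))

def mnr_loss(anchors, positives, scale=20.0):
    """In-batch-negatives ranking loss: for row i, column i is the positive,
    every other column a negative; cross-entropy over scaled cosine sims."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = scale * (a @ p.T)          # (batch, batch) scaled cosine sims
    return -np.mean(np.diag(log_softmax(logits)))

rng = np.random.default_rng(0)
anchors = rng.normal(size=(4, 8))

# Matched pairs (positives near their anchors) give a low loss ...
low = mnr_loss(anchors, anchors + 0.01 * rng.normal(size=(4, 8)))
# ... while mismatched (shuffled) pairs give a high one
high = mnr_loss(anchors, anchors[::-1].copy())
print(low < high)  # True
```

This is why the loss works well with simple (query, passage) pairs: larger batches automatically supply more negatives.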
    

Training Hyperparameters

Non-Default Hyperparameters

  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 32
  • num_train_epochs: 15
  • multi_dataset_batch_sampler: round_robin

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: no
  • prediction_loss_only: True
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 32
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 15
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin

Training Logs

Epoch Step Training Loss
8.0645 500 0.2562

Framework Versions

  • Python: 3.11.0rc1
  • Sentence Transformers: 3.0.1
  • Transformers: 4.41.1
  • PyTorch: 2.2.2+cu121
  • Accelerate: 0.30.1
  • Datasets: 2.20.0
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply}, 
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
Model size: 560M params (Safetensors, F32)

Model tree for jerryyun/kicon_con_e5large_15

Finetuned from embaas/sentence-transformers-multilingual-e5-large