---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
model-index:
- name: korean_embedding_model
  results:
  - task:
      type: STS
    dataset:
      type: mteb/biosses-sts
      name: MTEB BIOSSES
      config: default
      split: test
      revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
    metrics:
    - type: cos_sim_pearson
      value: 62.462024005162874
    - type: cos_sim_spearman
      value: 59.04592371468026
    - type: euclidean_pearson
      value: 60.118409297960774
    - type: euclidean_spearman
      value: 59.04592371468026
    - type: manhattan_pearson
      value: 59.6758261833799
    - type: manhattan_spearman
      value: 59.10255151100711
  - task:
      type: STS
    dataset:
      type: mteb/sickr-sts
      name: MTEB SICK-R
      config: default
      split: test
      revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
    metrics:
    - type: cos_sim_pearson
      value: 69.54306440280438
    - type: cos_sim_spearman
      value: 62.859142390813574
    - type: euclidean_pearson
      value: 65.6949193466544
    - type: euclidean_spearman
      value: 62.859152754778854
    - type: manhattan_pearson
      value: 65.65986839533139
    - type: manhattan_spearman
      value: 62.82868162534342
  - task:
      type: STS
    dataset:
      type: mteb/sts12-sts
      name: MTEB STS12
      config: default
      split: test
      revision: a0d554a64d88156834ff5ae9920b964011b16384
    metrics:
    - type: cos_sim_pearson
      value: 66.06384755873458
    - type: cos_sim_spearman
      value: 62.589736136651894
    - type: euclidean_pearson
      value: 62.78577890775041
    - type: euclidean_spearman
      value: 62.588858379781634
    - type: manhattan_pearson
      value: 62.827478623777985
    - type: manhattan_spearman
      value: 62.617997229102706
  - task:
      type: STS
    dataset:
      type: mteb/sts13-sts
      name: MTEB STS13
      config: default
      split: test
      revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
    metrics:
    - type: cos_sim_pearson
      value: 71.86398880834443
    - type: cos_sim_spearman
      value: 72.1348002553312
    - type: euclidean_pearson
      value: 71.6796109730168
    - type: euclidean_spearman
      value: 72.1349022685911
    - type: manhattan_pearson
      value: 71.66477952415218
    - type: manhattan_spearman
      value: 72.09093373400123
  - task:
      type: STS
    dataset:
      type: mteb/sts14-sts
      name: MTEB STS14
      config: default
      split: test
      revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
    metrics:
    - type: cos_sim_pearson
      value: 70.22680219584427
    - type: cos_sim_spearman
      value: 67.0818395499375
    - type: euclidean_pearson
      value: 68.24498247750782
    - type: euclidean_spearman
      value: 67.0818306104199
    - type: manhattan_pearson
      value: 68.23186143435814
    - type: manhattan_spearman
      value: 67.06973319437314
  - task:
      type: STS
    dataset:
      type: mteb/sts15-sts
      name: MTEB STS15
      config: default
      split: test
      revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
    metrics:
    - type: cos_sim_pearson
      value: 75.54853695205654
    - type: cos_sim_spearman
      value: 75.93775396598934
    - type: euclidean_pearson
      value: 75.10618334577337
    - type: euclidean_spearman
      value: 75.93775372510834
    - type: manhattan_pearson
      value: 75.123200749426
    - type: manhattan_spearman
      value: 75.95755907955946
  - task:
      type: STS
    dataset:
      type: mteb/sts16-sts
      name: MTEB STS16
      config: default
      split: test
      revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
    metrics:
    - type: cos_sim_pearson
      value: 70.22928051288379
    - type: cos_sim_spearman
      value: 70.13385961598065
    - type: euclidean_pearson
      value: 69.66948135244029
    - type: euclidean_spearman
      value: 70.13385923761084
    - type: manhattan_pearson
      value: 69.66975130970742
    - type: manhattan_spearman
      value: 70.16415157887303
  - task:
      type: STS
    dataset:
      type: mteb/sts17-crosslingual-sts
      name: MTEB STS17 (en-en)
      config: en-en
      split: test
      revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
    metrics:
    - type: cos_sim_pearson
      value: 77.12344529924287
    - type: cos_sim_spearman
      value: 77.13355009366349
    - type: euclidean_pearson
      value: 77.73092283054677
    - type: euclidean_spearman
      value: 77.13355009366349
    - type: manhattan_pearson
      value: 77.59037018668798
    - type: manhattan_spearman
      value: 77.00181739561044
  - task:
      type: STS
    dataset:
      type: mteb/sts22-crosslingual-sts
      name: MTEB STS22 (en)
      config: en
      split: test
      revision: eea2b4fe26a775864c896887d910b76a8098ad3f
    metrics:
    - type: cos_sim_pearson
      value: 60.402875441797896
    - type: cos_sim_spearman
      value: 62.21971197434699
    - type: euclidean_pearson
      value: 63.08540172189354
    - type: euclidean_spearman
      value: 62.21971197434699
    - type: manhattan_pearson
      value: 62.971870200624714
    - type: manhattan_spearman
      value: 62.17079870601948
  - task:
      type: STS
    dataset:
      type: mteb/stsbenchmark-sts
      name: MTEB STSBenchmark
      config: default
      split: test
      revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
    metrics:
    - type: cos_sim_pearson
      value: 69.14110875934769
    - type: cos_sim_spearman
      value: 67.83869999603111
    - type: euclidean_pearson
      value: 68.32930987602938
    - type: euclidean_spearman
      value: 67.8387112205369
    - type: manhattan_pearson
      value: 68.385068161592
    - type: manhattan_spearman
      value: 67.86635507968924
  - task:
      type: Summarization
    dataset:
      type: mteb/summeval
      name: MTEB SummEval
      config: default
      split: test
      revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
    metrics:
    - type: cos_sim_pearson
      value: 29.185534982566132
    - type: cos_sim_spearman
      value: 28.71714958933386
    - type: dot_pearson
      value: 29.185527195235316
    - type: dot_spearman
      value: 28.71714958933386
---

# {MODEL_NAME}

This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences and paragraphs to a 1024-dimensional dense vector space and can be used for tasks like clustering or semantic search. A semantic-search sketch is included at the end of this card.

## Usage (Sentence-Transformers)

Using this model is straightforward once you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer

sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```

## Evaluation Results

MTEB results for this model (STS and SummEval tasks) are listed in the `model-index` metadata at the top of this card. For an automated evaluation, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})

## Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
  (2): Normalize()
)
```

The model truncates inputs at 512 tokens, uses the CLS token as the sentence embedding, and L2-normalizes the output; a plain-`transformers` sketch that mirrors this pipeline is included at the end of this card.

## Citing & Authors
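
## Usage (Semantic Search)

Because the embeddings are L2-normalized, cosine similarity can rank candidate sentences against a query. The snippet below is a minimal sketch, not part of the original card: the corpus and query are made up for illustration, `{MODEL_NAME}` is the same placeholder used above, and the only helper it relies on is the standard `sentence_transformers.util.cos_sim`.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('{MODEL_NAME}')  # replace with the actual model id

# Toy corpus and query; substitute your own documents.
corpus = [
    "The weather is lovely today.",
    "He drove his car to the office.",
    "A man is playing a guitar on stage.",
]
query = "Someone is performing music."

# Encode to 1024-dimensional, L2-normalized vectors.
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Cosine similarity between the query and every corpus sentence.
scores = util.cos_sim(query_embedding, corpus_embeddings)[0]

# Print the corpus sentences ranked by similarity to the query.
for sentence, score in sorted(zip(corpus, scores.tolist()), key=lambda x: -x[1]):
    print(f"{score:.4f}  {sentence}")
```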
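
## Usage (HuggingFace Transformers)

The card does not include a plain `transformers` example, so the following is a hedged sketch that mirrors the architecture shown above (CLS-token pooling followed by L2 normalization). It assumes the checkpoint loads via `AutoTokenizer`/`AutoModel` and that `{MODEL_NAME}` is replaced with the actual model id; verify the outputs against `SentenceTransformer.encode` before relying on it.

```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

sentences = ["This is an example sentence", "Each sentence is converted"]

# Assumption: the checkpoint is loadable with AutoTokenizer/AutoModel.
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
model.eval()

# Tokenize with the same 512-token limit as the SentenceTransformer wrapper.
encoded = tokenizer(sentences, padding=True, truncation=True, max_length=512, return_tensors='pt')

with torch.no_grad():
    output = model(**encoded)

# CLS-token pooling (pooling_mode_cls_token=True in the architecture above).
embeddings = output.last_hidden_state[:, 0]

# L2-normalize, mirroring the Normalize() module.
embeddings = F.normalize(embeddings, p=2, dim=1)
print(embeddings.shape)  # expected: torch.Size([2, 1024])
```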