SentenceTransformer based on dunzhang/stella_en_1.5B_v5
This is a sentence-transformers model finetuned from dunzhang/stella_en_1.5B_v5. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: dunzhang/stella_en_1.5B_v5
- Maximum Sequence Length: 8096 tokens
- Output Dimensionality: 1024 dimensions
- Similarity Function: Cosine Similarity
Model Sources
- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sentence Transformers on Hugging Face
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 8096, 'do_lower_case': False}) with Transformer model: Qwen2Model
(1): Pooling({'word_embedding_dimension': 1536, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Dense({'in_features': 1536, 'out_features': 1024, 'bias': True, 'activation_function': 'torch.nn.modules.linear.Identity'})
)
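The modules above can also be composed by hand. Below is a minimal sketch, assuming the base weights from dunzhang/stella_en_1.5B_v5; in practice you would simply load the published checkpoint by name, as shown under Usage.

import torch
from sentence_transformers import SentenceTransformer, models

# Sketch only: Transformer backbone -> mean pooling -> Dense projection (1536 -> 1024),
# mirroring the module list printed above.
transformer = models.Transformer("dunzhang/stella_en_1.5B_v5", max_seq_length=8096)
pooling = models.Pooling(transformer.get_word_embedding_dimension(), pooling_mode="mean")
dense = models.Dense(in_features=1536, out_features=1024, bias=True,
                     activation_function=torch.nn.Identity())
model = SentenceTransformer(modules=[transformer, pooling, dense])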
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'The Tchaikovsky Symphony Orchestra is a Russian classical music orchestra established in 1930. It was founded as the Moscow Radio Symphony Orchestra and served as the official symphony for the Soviet All-Union Radio network. Following the dissolution of the Soviet Union in 1991, the orchestra was renamed in 1993 by the Russian Ministry of Culture in recognition of the central role the music of Tchaikovsky plays in its repertoire. The current music director is Vladimir Fedoseyev, who has been in that position since 1974.',
'Instruct: Given a web search query, retrieve relevant passages that answer the query.\nQuery: Tchaikovsky Symphony Orchestra',
'Instruct: Given a web search query, retrieve relevant passages that answer the query.\nQuery: Sierra del Lacandón',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
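The example sentences above suggest the intended retrieval pattern: queries carry the instruction prefix, while passages are encoded as-is. A minimal ranking sketch along those lines (the passage list is a hypothetical placeholder):

query = (
    "Instruct: Given a web search query, retrieve relevant passages that answer the query.\n"
    "Query: Tchaikovsky Symphony Orchestra"
)
passages = [
    "The Tchaikovsky Symphony Orchestra is a Russian classical music orchestra established in 1930.",
    "Sierra del Lacandón is a national park in Guatemala.",  # hypothetical distractor
]
query_embedding = model.encode([query])
passage_embeddings = model.encode(passages)
scores = model.similarity(query_embedding, passage_embeddings)  # shape [1, 2]
print(passages[int(scores.argmax())])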
Evaluation
Metrics
Information Retrieval
- Evaluated with InformationRetrievalEvaluator
Metric | Value |
---|---|
cosine_accuracy@1 | 0.9448 |
cosine_accuracy@3 | 0.9687 |
cosine_accuracy@5 | 0.9764 |
cosine_accuracy@10 | 0.9811 |
cosine_precision@1 | 0.9448 |
cosine_precision@3 | 0.3229 |
cosine_precision@5 | 0.1953 |
cosine_precision@10 | 0.0981 |
cosine_recall@1 | 0.9448 |
cosine_recall@3 | 0.9687 |
cosine_recall@5 | 0.9764 |
cosine_recall@10 | 0.9811 |
cosine_ndcg@10 | 0.9637 |
cosine_mrr@10 | 0.958 |
cosine_map@100 | 0.9586 |
Information Retrieval
- Evaluated with InformationRetrievalEvaluator
Metric | Value |
---|---|
cosine_accuracy@1 | 0.9444 |
cosine_accuracy@3 | 0.97 |
cosine_accuracy@5 | 0.9764 |
cosine_accuracy@10 | 0.9825 |
cosine_precision@1 | 0.9444 |
cosine_precision@3 | 0.3233 |
cosine_precision@5 | 0.1953 |
cosine_precision@10 | 0.0982 |
cosine_recall@1 | 0.9444 |
cosine_recall@3 | 0.97 |
cosine_recall@5 | 0.9764 |
cosine_recall@10 | 0.9825 |
cosine_ndcg@10 | 0.9639 |
cosine_mrr@10 | 0.9579 |
cosine_map@100 | 0.9584 |
Information Retrieval
- Evaluated with InformationRetrievalEvaluator
Metric | Value |
---|---|
cosine_accuracy@1 | 0.9438 |
cosine_accuracy@3 | 0.967 |
cosine_accuracy@5 | 0.9724 |
cosine_accuracy@10 | 0.9801 |
cosine_precision@1 | 0.9438 |
cosine_precision@3 | 0.3223 |
cosine_precision@5 | 0.1945 |
cosine_precision@10 | 0.098 |
cosine_recall@1 | 0.9438 |
cosine_recall@3 | 0.967 |
cosine_recall@5 | 0.9724 |
cosine_recall@10 | 0.9801 |
cosine_ndcg@10 | 0.9624 |
cosine_mrr@10 | 0.9567 |
cosine_map@100 | 0.9573 |
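The tables above were produced with InformationRetrievalEvaluator. A minimal sketch of how such an evaluation is wired up; the query/corpus/relevance mappings here are hypothetical toy data, since the card does not publish the evaluation set:

from sentence_transformers.evaluation import InformationRetrievalEvaluator

# Hypothetical toy data; substitute your own query, corpus, and relevance mappings.
queries = {"q1": "Tchaikovsky Symphony Orchestra"}
corpus = {
    "d1": "The Tchaikovsky Symphony Orchestra is a Russian classical music orchestra.",
    "d2": "Sierra del Lacandón is a national park in Guatemala.",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="toy-ir")
results = evaluator(model)  # yields the accuracy@k, precision@k, recall@k, NDCG, MRR, and MAP metrics shown above
print(results)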
Training Details
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: steps
- per_device_eval_batch_size: 4
- gradient_accumulation_steps: 4
- learning_rate: 2e-05
- max_steps: 1500
- lr_scheduler_type: cosine
- warmup_ratio: 0.1
- warmup_steps: 5
- bf16: True
- tf32: True
- optim: adamw_torch_fused
- gradient_checkpointing: True
- gradient_checkpointing_kwargs: {'use_reentrant': False}
- batch_sampler: no_duplicates
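These values map directly onto SentenceTransformerTrainingArguments. A hedged sketch of how they would be passed; output_dir is a hypothetical path, not taken from this card:

from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="finetune-output",  # hypothetical path
    eval_strategy="steps",
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=4,
    learning_rate=2e-5,
    max_steps=1500,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    warmup_steps=5,
    bf16=True,
    tf32=True,
    optim="adamw_torch_fused",
    gradient_checkpointing=True,
    gradient_checkpointing_kwargs={"use_reentrant": False},
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)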
All Hyperparameters
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: steps
- prediction_loss_only: True
- per_device_train_batch_size: 8
- per_device_eval_batch_size: 4
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 4
- eval_accumulation_steps: None
- learning_rate: 2e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 3.0
- max_steps: 1500
- lr_scheduler_type: cosine
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.1
- warmup_steps: 5
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: True
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: True
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: True
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch_fused
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: False
- hub_always_push: False
- gradient_checkpointing: True
- gradient_checkpointing_kwargs: {'use_reentrant': False}
- include_inputs_for_metrics: False
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- batch_sampler: no_duplicates
- multi_dataset_batch_sampler: proportional
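The loss citations at the end of this card point to MatryoshkaLoss wrapped around MultipleNegativesRankingLoss. A hedged sketch of that pairing, reusing the model loaded under Usage; the matryoshka_dims are illustrative assumptions, as the card does not state the dimensions used:

from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

# Assumed pairing based on the citations below; dims are illustrative only.
base_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(model, base_loss, matryoshka_dims=[1024, 512, 256])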
Training Logs
Epoch | Step | Training Loss | Validation Loss | cosine_map@100 |
---|---|---|---|---|
0.0185 | 100 | 0.4835 | 0.0751 | 0.9138 |
0.0369 | 200 | 0.0646 | 0.0590 | 0.9384 |
0.0554 | 300 | 0.0594 | 0.0519 | 0.9462 |
0.0739 | 400 | 0.0471 | 0.0483 | 0.9514 |
0.0924 | 500 | 0.0524 | 0.0455 | 0.9531 |
0.1108 | 600 | 0.0435 | 0.0397 | 0.9546 |
0.1293 | 700 | 0.0336 | 0.0394 | 0.9549 |
0.1478 | 800 | 0.0344 | 0.0374 | 0.9565 |
0.1662 | 900 | 0.0393 | 0.0361 | 0.9568 |
0.1847 | 1000 | 0.0451 | 0.0361 | 0.9578 |
0.2032 | 1100 | 0.0278 | 0.0358 | 0.9568 |
0.2216 | 1200 | 0.0332 | 0.0356 | 0.9572 |
0.2401 | 1300 | 0.0317 | 0.0354 | 0.9575 |
0.2586 | 1400 | 0.026 | 0.0355 | 0.9574 |
0.2771 | 1500 | 0.0442 | 0.0355 | 0.9573 |
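Putting the pieces together, a training run at these settings would look roughly like the following, continuing from the args and loss sketches above; the (anchor, positive) rows are hypothetical placeholders, since the training data is not published:

from datasets import Dataset
from sentence_transformers import SentenceTransformerTrainer

# Hypothetical two-column dataset in the (anchor, positive) format expected
# by MultipleNegativesRankingLoss; the real training data is not published.
train_dataset = Dataset.from_dict({
    "anchor": ["Instruct: Given a web search query, retrieve relevant passages that answer the query.\nQuery: Tchaikovsky Symphony Orchestra"],
    "positive": ["The Tchaikovsky Symphony Orchestra is a Russian classical music orchestra."],
})

trainer = SentenceTransformerTrainer(model=model, args=args, train_dataset=train_dataset, loss=loss)
trainer.train()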
Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.2.0+cu121
- Accelerate: 0.33.0
- Datasets: 2.20.0
- Tokenizers: 0.19.1
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
MatryoshkaLoss
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
MultipleNegativesRankingLoss
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}