SentenceTransformer based on sentence-transformers/multi-qa-MiniLM-L6-cos-v1
This is a sentence-transformers model finetuned from sentence-transformers/multi-qa-MiniLM-L6-cos-v1. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: sentence-transformers/multi-qa-MiniLM-L6-cos-v1
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 384 dimensions
- Similarity Function: Cosine Similarity
Model Sources
- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sentence Transformers on Hugging Face
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
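For reference, the stack above (a BERT encoder, mean pooling over token embeddings, then L2 normalization) can also be reproduced with plain transformers. The following is a minimal sketch, not part of the original card; it assumes the checkpoint loads with AutoModel and uses an illustrative input sentence:
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

# Mean pooling: average the token embeddings, weighted by the attention mask
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # last hidden state: [batch, seq_len, 384]
    mask = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return (token_embeddings * mask).sum(1) / mask.sum(1).clamp(min=1e-9)

tokenizer = AutoTokenizer.from_pretrained("Atharva26/sentence-transformer-finetuned-faq")
model = AutoModel.from_pretrained("Atharva26/sentence-transformer-finetuned-faq")

sentences = ["How can I access and customize my Preferences in Choice?"]
encoded = tokenizer(sentences, padding=True, truncation=True, max_length=512, return_tensors="pt")
with torch.no_grad():
    output = model(**encoded)

# Mean-pool, then L2-normalize, matching the Pooling and Normalize modules above
embeddings = F.normalize(mean_pooling(output, encoded["attention_mask"]), p=2, dim=1)
print(embeddings.shape)  # torch.Size([1, 384])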
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Atharva26/sentence-transformer-finetuned-faq")
# Run inference
sentences = [
'How can I access and customize my Preferences in Choice?',
'Preferences\n\nTo access your Preferences in Choice and customize how you view and use key features, follow these steps:\n1. Open your account and tap on "More."\n2. Select "My Preferences."\n\nDark Theme:\nEnjoy our sleek Dark Theme by selecting the very first option in the menu labeled "DARK THEME." To switch back to the default Light Theme, open the menu and select the Light Theme option.\n\n---\n**Cleaned Answer:**\nIn the Preferences section of Choice, you can adjust how you view and utilize key features. To do this, open your account and tap on "More," then select "My Preferences." Switch to the Dark Theme by choosing the first option labeled "DARK THEME" in the menu. To go back to the default Light Theme, open the menu and select the Light Theme option.',
"To find a specific company or ETF on the Choice platform, you can follow these steps:\n\n1. Using Search:\n- Type the first three letters of the company's name (e.g., ICICI) to see a list of affiliated companies/ETFs on NSE/BSE. \n- Choose your segment (Equity, Derivatives, Commodities, or Currency).\n- Select the specific company from the results drop-down to view its overview.\n- Add the company to your WatchList or set a price alert.\n\n2. Scrip Page - Cash and F&O:\n- View detailed information about a company.\n- Search by typing the first 3 letters of the company name and selecting it for the overview.\n\n3. Overview Section:\n- Provides essential details like OPEN, HIGH, LOW, CLOSE.\n- Click (i) for more details like PRICE TICK, MARKET LOT, CIRCUIT RANGE, etc.\n- Market Depth shows BID and ASK numbers with quantities.\n\n4. Technical Section:\n- View Technical data and use the CHART tool for analysis.\n\n5. Pivot Points:\n- Shows RESISTANCE and SUPPORT levels.\n- Displays numbers for DAILY assessment by default, switch timeframes if needed.\n\n6. Futures Section:\n- Displays future prices for near term, mid term, and far term.\n\n7. Recent News:\n- Displays news related to the company from various sources.\n\n8. Scrip Page - Derivatives (F&O):\n- Search for company and view open Call/Put contracts.\n- Select a contract and view the Option Chain.\n- Buy/Sell contracts and execute orders.\n\nThese steps will help you navigate and use the features available on the Choice platform effectively.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
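The repository name and the example texts suggest FAQ retrieval as the intended use, so a typical pattern is ranking stored answers against a user query. Continuing from the snippet above (the model is already loaded), here is a small illustrative sketch; the query and candidate answers are made up for illustration, not taken from the training data:
query = "How do I switch Choice to the dark theme?"
candidate_answers = [
    "Open your account, tap More, select My Preferences and choose DARK THEME.",
    "Futures prices are shown for near term, mid term and far term contracts.",
]

query_embedding = model.encode(query)
answer_embeddings = model.encode(candidate_answers)

# Cosine similarity between the query and each candidate answer
scores = model.similarity(query_embedding, answer_embeddings)
print(scores.shape)
# [1, 2]; the first answer should score higher for this query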
Training Details
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: steps
- per_device_train_batch_size: 32
- per_device_eval_batch_size: 32
- learning_rate: 2e-05
- num_train_epochs: 2
- lr_scheduler_type: cosine
- warmup_ratio: 0.3
All Hyperparameters
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: steps
- prediction_loss_only: True
- per_device_train_batch_size: 32
- per_device_eval_batch_size: 32
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 2e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 2
- max_steps: -1
- lr_scheduler_type: cosine
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.3
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: False
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- eval_use_gather_object: False
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: proportional
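As a rough guide, the non-default hyperparameters above could be plugged into the SentenceTransformerTrainer API. The sketch below is an assumption about the original setup, not the author's actual training script: the toy triplet rows, output_dir, and evaluation split are placeholders, and TripletLoss is inferred from the citation at the end of this card.
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
    losses,
)

# Placeholder triplets (anchor, positive, negative); the real FAQ data is not included here
train_dataset = Dataset.from_dict({
    "anchor": ["How do I enable the dark theme?"],
    "positive": ["Open More > My Preferences and select DARK THEME."],
    "negative": ["Futures prices are shown for near, mid and far term."],
})
eval_dataset = train_dataset  # placeholder evaluation split

model = SentenceTransformer("sentence-transformers/multi-qa-MiniLM-L6-cos-v1")
loss = losses.TripletLoss(model)  # assumed from the TripletLoss citation below

# Mirrors the non-default hyperparameters listed above
args = SentenceTransformerTrainingArguments(
    output_dir="sentence-transformer-finetuned-faq",  # illustrative path
    eval_strategy="steps",
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    learning_rate=2e-5,
    num_train_epochs=2,
    lr_scheduler_type="cosine",
    warmup_ratio=0.3,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=loss,
)
trainer.train()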
Training Logs
Epoch | Step | Training Loss | Validation Loss |
---|---|---|---|
0.0345 | 2 | 4.7333 | - |
0.0690 | 4 | 4.706 | - |
0.1034 | 6 | 4.7013 | - |
0.1379 | 8 | 4.721 | - |
0.1724 | 10 | 4.6913 | - |
0.2069 | 12 | 4.6712 | - |
0.2414 | 14 | 4.6583 | - |
0.2586 | 15 | - | 4.6563 |
0.2759 | 16 | 4.6551 | - |
0.3103 | 18 | 4.6552 | - |
0.3448 | 20 | 4.6531 | - |
0.3793 | 22 | 4.5423 | - |
0.4138 | 24 | 4.5717 | - |
0.4483 | 26 | 4.5314 | - |
0.4828 | 28 | 4.5324 | - |
0.5172 | 30 | 4.453 | 4.5032 |
0.5517 | 32 | 4.5038 | - |
0.5862 | 34 | 4.4599 | - |
0.6207 | 36 | 4.3689 | - |
0.6552 | 38 | 4.4138 | - |
0.6897 | 40 | 4.3598 | - |
0.7241 | 42 | 4.3962 | - |
0.7586 | 44 | 4.2948 | - |
0.7759 | 45 | - | 4.3505 |
0.7931 | 46 | 4.3345 | - |
0.8276 | 48 | 4.4072 | - |
0.8621 | 50 | 4.2972 | - |
0.8966 | 52 | 4.3006 | - |
0.9310 | 54 | 4.3104 | - |
0.9655 | 56 | 4.2059 | - |
1.0 | 58 | 4.2059 | - |
1.0345 | 60 | 4.2079 | 4.2349 |
1.0690 | 62 | 4.23 | - |
1.1034 | 64 | 4.2325 | - |
1.1379 | 66 | 4.1432 | - |
1.1724 | 68 | 4.235 | - |
1.2069 | 70 | 4.1383 | - |
1.2414 | 72 | 4.1703 | - |
1.2759 | 74 | 4.1145 | - |
1.2931 | 75 | - | 4.1638 |
1.3103 | 76 | 4.0703 | - |
1.3448 | 78 | 4.1306 | - |
1.3793 | 80 | 4.0792 | - |
1.4138 | 82 | 4.102 | - |
1.4483 | 84 | 4.1091 | - |
1.4828 | 86 | 4.1437 | - |
1.5172 | 88 | 4.1011 | - |
1.5517 | 90 | 4.0618 | 4.1283 |
1.5862 | 92 | 4.0696 | - |
1.6207 | 94 | 4.1508 | - |
1.6552 | 96 | 4.0182 | - |
1.6897 | 98 | 4.1442 | - |
1.7241 | 100 | 4.2017 | - |
1.7586 | 102 | 4.097 | - |
1.7931 | 104 | 4.2886 | - |
1.8103 | 105 | - | 4.1213 |
1.8276 | 106 | 4.0573 | - |
1.8621 | 108 | 4.1101 | - |
1.8966 | 110 | 4.1942 | - |
1.9310 | 112 | 4.122 | - |
1.9655 | 114 | 4.1533 | - |
2.0 | 116 | 4.0961 | - |
Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.1.1
- Transformers: 4.45.2
- PyTorch: 2.5.1+cu121
- Accelerate: 1.1.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
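To approximate this environment, the listed versions can be pinned at install time (the exact CUDA build may differ by machine):
pip install "sentence-transformers==3.1.1" "transformers==4.45.2" "torch==2.5.1" "accelerate==1.1.1" "datasets==3.1.0" "tokenizers==0.20.3"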
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
TripletLoss
@misc{hermans2017defense,
title={In Defense of the Triplet Loss for Person Re-Identification},
author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
year={2017},
eprint={1703.07737},
archivePrefix={arXiv},
primaryClass={cs.CV}
}