Dataset schema:

| Column | Type | Observed values / lengths |
|:--|:--|:--|
| pipeline_tag | string | 48 distinct values |
| library_name | string | 205 distinct values |
| text | string | 0 to 18.3M characters |
| metadata | string | 2 to 1.07B characters |
| id | string | 5 to 122 characters |
| last_modified | null | always null |
| tags | sequence | 1 to 1.84k items |
| sha | null | always null |
| created_at | string | 25 characters (fixed-width timestamp) |
token-classification
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
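The card's "How to Get Started" section is empty; the snippet below is a minimal, hedged sketch. It assumes chanwoopark/roberta-link loads as a standard RoBERTa token-classification checkpoint (as its tags suggest); the label set is undocumented, so the output labels are whatever the config defines.

```python
# Minimal sketch, assuming a standard RoBERTa token-classification checkpoint;
# the label set is not documented on the card.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="chanwoopark/roberta-link",
    aggregation_strategy="simple",  # merge word pieces into labeled spans
)
print(ner("Hugging Face is based in New York City."))
```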
{"library_name": "transformers", "tags": []}
chanwoopark/roberta-link
null
[ "transformers", "safetensors", "roberta", "token-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-03T16:44:36+00:00
null
null
{}
shravani12/mistral-finetuned-alpaca
null
[ "region:us" ]
null
2024-05-03T16:44:55+00:00
translation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # opus-mt-egyAr-eng_fineTuned This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ar-en](https://huggingface.co/Helsinki-NLP/opus-mt-ar-en) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.1953 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.4904 | 0.06 | 20 | 2.5405 | | 2.4189 | 0.13 | 40 | 2.2441 | | 2.4096 | 0.19 | 60 | 2.0196 | | 2.0011 | 0.25 | 80 | 1.9210 | | 1.8648 | 0.32 | 100 | 1.9143 | | 2.1312 | 0.38 | 120 | 1.7299 | | 1.8335 | 0.44 | 140 | 1.7309 | | 1.8953 | 0.51 | 160 | 1.6637 | | 1.7433 | 0.57 | 180 | 1.6094 | | 1.6717 | 0.63 | 200 | 1.6277 | | 1.8456 | 0.69 | 220 | 1.5190 | | 1.5594 | 0.76 | 240 | 1.5161 | | 1.7027 | 0.82 | 260 | 1.4832 | | 1.5024 | 0.88 | 280 | 1.4489 | | 1.4542 | 0.95 | 300 | 1.4940 | | 1.6944 | 1.01 | 320 | 1.4590 | | 1.1835 | 1.07 | 340 | 1.4306 | | 0.9745 | 1.14 | 360 | 1.4373 | | 1.2107 | 1.2 | 380 | 1.3939 | | 1.0052 | 1.26 | 400 | 1.3884 | | 1.0351 | 1.33 | 420 | 1.3911 | | 1.145 | 1.39 | 440 | 1.3541 | | 0.9529 | 1.45 | 460 | 1.3534 | | 1.1718 | 1.52 | 480 | 1.3090 | | 0.9735 | 1.58 | 500 | 1.3072 | | 1.0636 | 1.64 | 520 | 1.2980 | | 1.0589 | 1.7 | 540 | 1.2669 | | 0.8511 | 1.77 | 560 | 1.2689 | | 1.1347 | 1.83 | 580 | 1.2328 | | 0.894 | 1.89 | 600 | 1.2335 | | 0.9555 | 1.96 | 620 | 1.2270 | | 0.9605 | 2.02 | 640 | 1.2204 | | 0.6781 | 2.08 | 660 | 1.2287 | | 0.5184 | 2.15 | 680 | 1.2452 | | 0.7999 | 2.21 | 700 | 1.2139 | | 0.5561 | 2.27 | 720 | 1.2213 | | 0.7168 | 2.34 | 740 | 1.2141 | | 0.648 | 2.4 | 760 | 1.2071 | | 0.4999 | 2.46 | 780 | 1.2127 | | 0.7798 | 2.53 | 800 | 1.2054 | | 0.5454 | 2.59 | 820 | 1.2017 | | 0.7056 | 2.65 | 840 | 1.2016 | | 0.6453 | 2.72 | 860 | 1.1961 | | 0.5223 | 2.78 | 880 | 1.1964 | | 0.7595 | 2.84 | 900 | 1.1958 | | 0.5751 | 2.9 | 920 | 1.1952 | | 0.6057 | 2.97 | 940 | 1.1953 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
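No usage snippet accompanies this card; a minimal sketch, assuming the fine-tune loads like its Marian base model (the Egyptian Arabic focus is implied only by the repo name):

```python
# Hedged sketch: loads the fine-tune via the standard translation pipeline,
# as its base model Helsinki-NLP/opus-mt-ar-en does.
from transformers import pipeline

translator = pipeline("translation", model="Amr-khaled/opus-mt-egyAr-eng_fineTuned")
print(translator("صباح الخير")[0]["translation_text"])  # "Good morning" in Arabic
```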
{"language": ["ar", "en"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "Helsinki-NLP/opus-mt-ar-en", "pipeline_tag": "translation", "model-index": [{"name": "opus-mt-egyAr-eng_fineTuned", "results": []}]}
Amr-khaled/opus-mt-egyAr-eng_fineTuned
null
[ "transformers", "safetensors", "marian", "text2text-generation", "generated_from_trainer", "translation", "ar", "en", "base_model:Helsinki-NLP/opus-mt-ar-en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-03T16:45:20+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_mouse_0-seqsight_65536_512_47M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_mouse_0](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_0) dataset. It achieves the following results on the evaluation set: - Loss: 0.6597 - F1 Score: 0.7055 - Accuracy: 0.7062 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.637 | 3.92 | 200 | 0.5948 | 0.6698 | 0.6716 | | 0.5893 | 7.84 | 400 | 0.6008 | 0.6755 | 0.6778 | | 0.5697 | 11.76 | 600 | 0.5872 | 0.6839 | 0.6840 | | 0.5542 | 15.69 | 800 | 0.5847 | 0.7043 | 0.7049 | | 0.5394 | 19.61 | 1000 | 0.5803 | 0.7124 | 0.7123 | | 0.5317 | 23.53 | 1200 | 0.5817 | 0.7004 | 0.7037 | | 0.5174 | 27.45 | 1400 | 0.5746 | 0.7191 | 0.7198 | | 0.5127 | 31.37 | 1600 | 0.5709 | 0.7062 | 0.7062 | | 0.506 | 35.29 | 1800 | 0.5635 | 0.7111 | 0.7111 | | 0.4972 | 39.22 | 2000 | 0.5585 | 0.7152 | 0.7160 | | 0.4881 | 43.14 | 2200 | 0.5657 | 0.7172 | 0.7173 | | 0.4855 | 47.06 | 2400 | 0.5607 | 0.7106 | 0.7111 | | 0.4771 | 50.98 | 2600 | 0.5740 | 0.7143 | 0.7148 | | 0.4701 | 54.9 | 2800 | 0.5698 | 0.7226 | 0.7235 | | 0.4655 | 58.82 | 3000 | 0.5727 | 0.7231 | 0.7235 | | 0.4592 | 62.75 | 3200 | 0.5749 | 0.7205 | 0.7210 | | 0.4511 | 66.67 | 3400 | 0.5821 | 0.7191 | 0.7198 | | 0.4484 | 70.59 | 3600 | 0.5665 | 0.7285 | 0.7296 | | 0.4467 | 74.51 | 3800 | 0.5741 | 0.7295 | 0.7296 | | 0.4389 | 78.43 | 4000 | 0.5775 | 0.7244 | 0.7247 | | 0.438 | 82.35 | 4200 | 0.5870 | 0.7269 | 0.7284 | | 0.4334 | 86.27 | 4400 | 0.5802 | 0.7342 | 0.7346 | | 0.4261 | 90.2 | 4600 | 0.5829 | 0.7297 | 0.7296 | | 0.4196 | 94.12 | 4800 | 0.5916 | 0.7281 | 0.7284 | | 0.4167 | 98.04 | 5000 | 0.5844 | 0.7228 | 0.7235 | | 0.4091 | 101.96 | 5200 | 0.5934 | 0.7355 | 0.7358 | | 0.4099 | 105.88 | 5400 | 0.5895 | 0.7308 | 0.7309 | | 0.4054 | 109.8 | 5600 | 0.5939 | 0.7294 | 0.7296 | | 0.4027 | 113.73 | 5800 | 0.6007 | 0.7292 | 0.7296 | | 0.4029 | 117.65 | 6000 | 0.5960 | 0.7247 | 0.7247 | | 0.3937 | 121.57 | 6200 | 0.6040 | 0.7210 | 0.7210 | | 0.3941 | 125.49 | 6400 | 0.6091 | 0.7223 | 0.7222 | | 0.3917 | 129.41 | 6600 | 0.6112 | 0.7235 | 0.7235 | | 0.3885 | 133.33 | 6800 | 0.6028 | 0.7284 | 0.7284 | | 0.3852 | 137.25 | 7000 | 0.6154 | 0.7296 | 0.7296 | | 0.3781 | 141.18 | 7200 | 0.6169 | 0.7235 | 0.7235 | | 0.3756 | 145.1 | 7400 | 0.6242 | 0.7319 | 0.7321 | | 0.3779 | 149.02 | 7600 | 0.6144 | 0.7272 | 0.7272 | | 0.3764 | 152.94 | 7800 | 0.6155 | 0.7308 | 0.7309 | | 0.37 | 156.86 | 8000 | 0.6209 | 0.7283 | 0.7284 | | 0.3706 | 160.78 | 8200 | 0.6228 | 0.7283 | 0.7284 | | 0.369 | 164.71 | 8400 | 0.6290 | 0.7247 | 0.7247 | | 0.3634 | 168.63 | 8600 | 0.6289 | 0.7222 | 0.7222 | | 0.3653 | 172.55 | 8800 | 0.6240 | 0.7246 | 0.7247 | | 0.3662 | 176.47 | 9000 | 0.6260 | 0.7233 | 0.7235 | | 0.3559 | 180.39 | 9200 | 0.6308 | 0.7197 | 0.7198 | | 0.3618 | 184.31 | 9400 | 0.6311 | 0.7284 | 0.7284 | | 0.3634 | 188.24 | 9600 | 0.6294 | 0.7284 | 0.7284 | | 0.3661 | 192.16 | 9800 | 0.6271 | 0.7272 | 0.7272 | | 0.3576 | 196.08 | 10000 | 0.6275 | 0.7272 | 0.7272 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
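None of these PEFT cards show how to attach the adapter to its base model; a minimal sketch, assuming the seqsight base checkpoint exposes a sequence-classification head and needs trust_remote_code=True (both are assumptions, not statements from the card):

```python
# Hedged sketch: load the base model, then attach the PEFT adapter.
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base = AutoModelForSequenceClassification.from_pretrained(
    "mahdibaghbanzadeh/seqsight_65536_512_47M", trust_remote_code=True
)
model = PeftModel.from_pretrained(
    base, "mahdibaghbanzadeh/GUE_mouse_0-seqsight_65536_512_47M-L8_f"
)
tokenizer = AutoTokenizer.from_pretrained(
    "mahdibaghbanzadeh/seqsight_65536_512_47M", trust_remote_code=True
)
inputs = tokenizer("ACGTACGTACGT", return_tensors="pt")  # a toy DNA sequence
print(model(**inputs).logits)
```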
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_mouse_0-seqsight_65536_512_47M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_0-seqsight_65536_512_47M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_65536_512_47M", "region:us" ]
null
2024-05-03T16:45:23+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_mouse_0-seqsight_65536_512_47M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_mouse_0](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_0) dataset. It achieves the following results on the evaluation set: - Loss: 0.9070 - F1 Score: 0.7069 - Accuracy: 0.7074 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.6249 | 3.92 | 200 | 0.5924 | 0.6809 | 0.6815 | | 0.5702 | 7.84 | 400 | 0.5896 | 0.6887 | 0.6889 | | 0.5451 | 11.76 | 600 | 0.5648 | 0.6922 | 0.6938 | | 0.5224 | 15.69 | 800 | 0.5642 | 0.7134 | 0.7173 | | 0.5028 | 19.61 | 1000 | 0.5586 | 0.7185 | 0.7185 | | 0.4866 | 23.53 | 1200 | 0.5690 | 0.7193 | 0.7222 | | 0.4641 | 27.45 | 1400 | 0.5686 | 0.7400 | 0.7407 | | 0.4489 | 31.37 | 1600 | 0.5830 | 0.7165 | 0.7173 | | 0.4304 | 35.29 | 1800 | 0.5761 | 0.7270 | 0.7272 | | 0.4077 | 39.22 | 2000 | 0.5861 | 0.7297 | 0.7296 | | 0.3854 | 43.14 | 2200 | 0.6005 | 0.7380 | 0.7383 | | 0.3724 | 47.06 | 2400 | 0.5956 | 0.7272 | 0.7284 | | 0.3489 | 50.98 | 2600 | 0.6159 | 0.7297 | 0.7296 | | 0.3304 | 54.9 | 2800 | 0.6671 | 0.7315 | 0.7333 | | 0.3153 | 58.82 | 3000 | 0.6739 | 0.7260 | 0.7259 | | 0.3025 | 62.75 | 3200 | 0.7029 | 0.7280 | 0.7284 | | 0.2875 | 66.67 | 3400 | 0.6816 | 0.7295 | 0.7296 | | 0.2738 | 70.59 | 3600 | 0.6824 | 0.7282 | 0.7284 | | 0.2614 | 74.51 | 3800 | 0.7269 | 0.7351 | 0.7358 | | 0.2571 | 78.43 | 4000 | 0.7406 | 0.7369 | 0.7370 | | 0.2395 | 82.35 | 4200 | 0.7667 | 0.7331 | 0.7333 | | 0.238 | 86.27 | 4400 | 0.7654 | 0.7382 | 0.7395 | | 0.2193 | 90.2 | 4600 | 0.7736 | 0.7281 | 0.7284 | | 0.2125 | 94.12 | 4800 | 0.7860 | 0.7234 | 0.7235 | | 0.21 | 98.04 | 5000 | 0.7801 | 0.7479 | 0.7481 | | 0.1949 | 101.96 | 5200 | 0.8131 | 0.7366 | 0.7370 | | 0.1947 | 105.88 | 5400 | 0.8441 | 0.7407 | 0.7407 | | 0.1882 | 109.8 | 5600 | 0.8412 | 0.7382 | 0.7383 | | 0.1851 | 113.73 | 5800 | 0.8371 | 0.7302 | 0.7309 | | 0.1775 | 117.65 | 6000 | 0.8648 | 0.7358 | 0.7358 | | 0.169 | 121.57 | 6200 | 0.8611 | 0.7346 | 0.7346 | | 0.1666 | 125.49 | 6400 | 0.8923 | 0.7393 | 0.7395 | | 0.1665 | 129.41 | 6600 | 0.8906 | 0.7333 | 0.7333 | | 0.1598 | 133.33 | 6800 | 0.9035 | 0.7345 | 0.7346 | | 0.1537 | 137.25 | 7000 | 0.9237 | 0.7405 | 0.7407 | | 0.1541 | 141.18 | 7200 | 0.9118 | 0.7383 | 0.7383 | | 0.1502 | 145.1 | 7400 | 0.9269 | 0.7419 | 0.7420 | | 0.1474 | 149.02 | 7600 | 0.9470 | 0.7420 | 0.7420 | | 0.147 | 152.94 | 7800 | 0.9501 | 0.7395 | 0.7395 | | 0.1378 | 156.86 | 8000 | 0.9572 | 0.7382 | 0.7383 | | 0.1426 | 160.78 | 8200 | 0.9603 | 0.7296 | 0.7296 | | 0.1351 | 164.71 | 8400 | 0.9646 | 0.7320 | 0.7321 | | 0.1355 | 168.63 | 8600 | 0.9647 | 0.7346 | 0.7346 | | 0.1327 | 172.55 | 8800 | 0.9743 | 0.7308 | 0.7309 | | 0.1279 | 176.47 | 9000 | 0.9963 | 0.7356 | 0.7358 | | 0.1228 | 180.39 | 9200 | 1.0062 | 0.7333 | 0.7333 | | 0.128 | 184.31 | 9400 | 1.0003 | 0.7395 | 0.7395 | | 0.1317 | 188.24 | 9600 | 0.9883 | 0.7370 | 0.7370 | | 0.13 | 192.16 | 9800 | 0.9884 | 0.7382 | 0.7383 | | 0.132 | 196.08 | 10000 | 0.9877 | 0.7370 | 0.7370 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_mouse_0-seqsight_65536_512_47M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_0-seqsight_65536_512_47M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_65536_512_47M", "region:us" ]
null
2024-05-03T16:45:34+00:00
text-classification
transformers
{}
mouhebMehdoui/distilbert-base-uncased-finetuned-emotion
null
[ "transformers", "safetensors", "distilbert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-03T16:45:39+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_mouse_1-seqsight_65536_512_47M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_mouse_1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_1) dataset. It achieves the following results on the evaluation set: - Loss: 0.2654 - F1 Score: 0.8818 - Accuracy: 0.8818 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.5064 | 0.47 | 200 | 0.4045 | 0.8148 | 0.8148 | | 0.4235 | 0.95 | 400 | 0.3718 | 0.8328 | 0.8329 | | 0.3838 | 1.42 | 600 | 0.3340 | 0.8497 | 0.8497 | | 0.3579 | 1.9 | 800 | 0.3136 | 0.8578 | 0.8578 | | 0.3359 | 2.37 | 1000 | 0.3129 | 0.8583 | 0.8583 | | 0.3323 | 2.84 | 1200 | 0.3049 | 0.8639 | 0.8639 | | 0.3184 | 3.32 | 1400 | 0.2993 | 0.8652 | 0.8652 | | 0.3257 | 3.79 | 1600 | 0.3028 | 0.8661 | 0.8661 | | 0.3233 | 4.27 | 1800 | 0.2905 | 0.8711 | 0.8713 | | 0.3164 | 4.74 | 2000 | 0.2929 | 0.8693 | 0.8694 | | 0.3168 | 5.21 | 2200 | 0.2850 | 0.8743 | 0.8744 | | 0.313 | 5.69 | 2400 | 0.2868 | 0.8756 | 0.8756 | | 0.3079 | 6.16 | 2600 | 0.2849 | 0.8753 | 0.8755 | | 0.309 | 6.64 | 2800 | 0.2803 | 0.8791 | 0.8792 | | 0.3075 | 7.11 | 3000 | 0.2860 | 0.8749 | 0.8749 | | 0.3055 | 7.58 | 3200 | 0.2868 | 0.8738 | 0.8738 | | 0.305 | 8.06 | 3400 | 0.2796 | 0.8756 | 0.8758 | | 0.3023 | 8.53 | 3600 | 0.2840 | 0.8762 | 0.8762 | | 0.3034 | 9.0 | 3800 | 0.2817 | 0.8782 | 0.8783 | | 0.3017 | 9.48 | 4000 | 0.2795 | 0.8778 | 0.8780 | | 0.2989 | 9.95 | 4200 | 0.2760 | 0.8782 | 0.8783 | | 0.2969 | 10.43 | 4400 | 0.2771 | 0.8778 | 0.8778 | | 0.298 | 10.9 | 4600 | 0.2745 | 0.8794 | 0.8795 | | 0.2895 | 11.37 | 4800 | 0.2783 | 0.8784 | 0.8784 | | 0.3009 | 11.85 | 5000 | 0.2740 | 0.8798 | 0.8799 | | 0.2939 | 12.32 | 5200 | 0.2781 | 0.8799 | 0.8799 | | 0.2961 | 12.8 | 5400 | 0.2783 | 0.8793 | 0.8793 | | 0.2931 | 13.27 | 5600 | 0.2719 | 0.8810 | 0.8811 | | 0.2878 | 13.74 | 5800 | 0.2746 | 0.8791 | 0.8792 | | 0.2924 | 14.22 | 6000 | 0.2695 | 0.8809 | 0.8809 | | 0.2862 | 14.69 | 6200 | 0.2703 | 0.8812 | 0.8812 | | 0.2925 | 15.17 | 6400 | 0.2712 | 0.8826 | 0.8826 | | 0.2901 | 15.64 | 6600 | 0.2690 | 0.8806 | 0.8807 | | 0.2855 | 16.11 | 6800 | 0.2670 | 0.8815 | 0.8815 | | 0.2842 | 16.59 | 7000 | 0.2644 | 0.8824 | 0.8824 | | 0.2824 | 17.06 | 7200 | 0.2654 | 0.8815 | 0.8815 | | 0.2861 | 17.54 | 7400 | 0.2664 | 0.8817 | 0.8817 | | 0.2867 | 18.01 | 7600 | 0.2644 | 0.8840 | 0.8841 | | 0.2816 | 18.48 | 7800 | 0.2657 | 0.8814 | 0.8814 | | 0.2876 | 18.96 | 8000 | 0.2633 | 0.8827 | 0.8827 | | 0.2851 | 19.43 | 8200 | 0.2655 | 0.8820 | 0.8820 | | 0.2818 | 19.91 | 8400 | 0.2633 | 0.8853 | 0.8854 | | 0.2851 | 20.38 | 8600 | 0.2637 | 0.8828 | 0.8829 | | 0.2792 | 20.85 | 8800 | 0.2631 | 0.8856 | 0.8857 | | 0.2798 | 21.33 | 9000 | 0.2632 | 0.8836 | 0.8836 | | 0.2814 | 21.8 | 9200 | 0.2650 | 0.8821 | 0.8821 | | 0.2852 | 22.27 | 9400 | 0.2626 | 0.8830 | 0.8830 | | 0.2787 | 22.75 | 9600 | 0.2634 | 0.8853 | 0.8854 | | 0.2794 | 23.22 | 9800 | 0.2643 | 0.8843 | 0.8844 | | 0.2793 | 23.7 | 10000 | 0.2642 | 0.8839 | 0.8839 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
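For reference, the hyperparameters these cards list map onto a TrainingArguments object roughly as below. This is a sketch only: the model, dataset, and Trainer wiring are omitted, and eval_steps=200 is inferred from the results table rather than stated on the card.

```python
# Sketch of TrainingArguments mirroring the hyperparameters listed above.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="GUE_mouse_1-seqsight_65536_512_47M-L1_f",
    learning_rate=5e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,                      # Adam betas/epsilon stay at the listed defaults
    lr_scheduler_type="linear",
    max_steps=10_000,             # "training_steps: 10000"
    evaluation_strategy="steps",
    eval_steps=200,               # inferred: the results table logs every 200 steps
)
```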
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_mouse_1-seqsight_65536_512_47M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_1-seqsight_65536_512_47M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_65536_512_47M", "region:us" ]
null
2024-05-03T16:45:57+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
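This card leaves the model type blank; a minimal, hedged loading sketch, assuming the repo holds a standard causal-LM checkpoint (the name Ayu14/mistral_7b_guanaco suggests a Mistral-7B Guanaco fine-tune, but the card does not confirm this):

```python
# Hedged sketch: generic causal-LM loading; half precision and device_map
# (which requires the accelerate package) are conveniences, not card facts.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("Ayu14/mistral_7b_guanaco")
model = AutoModelForCausalLM.from_pretrained(
    "Ayu14/mistral_7b_guanaco", torch_dtype=torch.float16, device_map="auto"
)
out = model.generate(
    **tok("Hello,", return_tensors="pt").to(model.device), max_new_tokens=32
)
print(tok.decode(out[0], skip_special_tokens=True))
```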
{"library_name": "transformers", "tags": []}
Ayu14/mistral_7b_guanaco
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-03T16:46:03+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_mouse_1-seqsight_65536_512_47M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_mouse_1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_1) dataset. It achieves the following results on the evaluation set: - Loss: 0.2451 - F1 Score: 0.8914 - Accuracy: 0.8915 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.4716 | 0.47 | 200 | 0.3665 | 0.8341 | 0.8342 | | 0.3645 | 0.95 | 400 | 0.3228 | 0.8565 | 0.8566 | | 0.3288 | 1.42 | 600 | 0.3039 | 0.8642 | 0.8642 | | 0.3258 | 1.9 | 800 | 0.2954 | 0.8700 | 0.8700 | | 0.312 | 2.37 | 1000 | 0.2922 | 0.8701 | 0.8701 | | 0.3096 | 2.84 | 1200 | 0.2945 | 0.8731 | 0.8731 | | 0.2987 | 3.32 | 1400 | 0.2809 | 0.8756 | 0.8756 | | 0.306 | 3.79 | 1600 | 0.2991 | 0.8702 | 0.8703 | | 0.3005 | 4.27 | 1800 | 0.2755 | 0.8777 | 0.8780 | | 0.2953 | 4.74 | 2000 | 0.2796 | 0.8785 | 0.8786 | | 0.2946 | 5.21 | 2200 | 0.2743 | 0.8788 | 0.8792 | | 0.289 | 5.69 | 2400 | 0.2729 | 0.8826 | 0.8826 | | 0.2848 | 6.16 | 2600 | 0.2693 | 0.8805 | 0.8808 | | 0.2831 | 6.64 | 2800 | 0.2664 | 0.8846 | 0.8847 | | 0.284 | 7.11 | 3000 | 0.2727 | 0.8850 | 0.8850 | | 0.2786 | 7.58 | 3200 | 0.2695 | 0.8864 | 0.8864 | | 0.2777 | 8.06 | 3400 | 0.2596 | 0.8851 | 0.8852 | | 0.2727 | 8.53 | 3600 | 0.2778 | 0.8823 | 0.8823 | | 0.2793 | 9.0 | 3800 | 0.2641 | 0.8875 | 0.8876 | | 0.2712 | 9.48 | 4000 | 0.2614 | 0.8876 | 0.8878 | | 0.2719 | 9.95 | 4200 | 0.2567 | 0.8890 | 0.8891 | | 0.2689 | 10.43 | 4400 | 0.2578 | 0.8900 | 0.8900 | | 0.2687 | 10.9 | 4600 | 0.2566 | 0.8919 | 0.8919 | | 0.2618 | 11.37 | 4800 | 0.2680 | 0.8852 | 0.8852 | | 0.271 | 11.85 | 5000 | 0.2547 | 0.8921 | 0.8921 | | 0.2626 | 12.32 | 5200 | 0.2607 | 0.8870 | 0.8870 | | 0.2647 | 12.8 | 5400 | 0.2639 | 0.8867 | 0.8867 | | 0.2638 | 13.27 | 5600 | 0.2513 | 0.8917 | 0.8918 | | 0.2571 | 13.74 | 5800 | 0.2536 | 0.8924 | 0.8925 | | 0.2594 | 14.22 | 6000 | 0.2541 | 0.8912 | 0.8912 | | 0.2559 | 14.69 | 6200 | 0.2528 | 0.8926 | 0.8927 | | 0.2583 | 15.17 | 6400 | 0.2548 | 0.8909 | 0.8909 | | 0.2589 | 15.64 | 6600 | 0.2536 | 0.8909 | 0.8909 | | 0.2524 | 16.11 | 6800 | 0.2498 | 0.8928 | 0.8928 | | 0.2516 | 16.59 | 7000 | 0.2497 | 0.8919 | 0.8919 | | 0.2538 | 17.06 | 7200 | 0.2479 | 0.8916 | 0.8916 | | 0.2527 | 17.54 | 7400 | 0.2513 | 0.8913 | 0.8913 | | 0.2546 | 18.01 | 7600 | 0.2455 | 0.8922 | 0.8922 | | 0.2471 | 18.48 | 7800 | 0.2491 | 0.8918 | 0.8918 | | 0.2562 | 18.96 | 8000 | 0.2459 | 0.8925 | 0.8925 | | 0.253 | 19.43 | 8200 | 0.2487 | 0.8919 | 0.8919 | | 0.2513 | 19.91 | 8400 | 0.2445 | 0.8948 | 0.8949 | | 0.2523 | 20.38 | 8600 | 0.2480 | 0.8925 | 0.8925 | | 0.2474 | 20.85 | 8800 | 0.2449 | 0.8937 | 0.8937 | | 0.2465 | 21.33 | 9000 | 0.2476 | 0.8927 | 0.8927 | | 0.2493 | 21.8 | 9200 | 0.2484 | 0.8925 | 0.8925 | | 0.2532 | 22.27 | 9400 | 0.2457 | 0.8935 | 0.8936 | | 0.2431 | 22.75 | 9600 | 0.2467 | 0.8922 | 0.8922 | | 0.2476 | 23.22 | 9800 | 0.2477 | 0.8919 | 0.8919 | | 0.2482 | 23.7 | 10000 | 0.2472 | 0.8921 | 0.8921 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_mouse_1-seqsight_65536_512_47M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_1-seqsight_65536_512_47M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_65536_512_47M", "region:us" ]
null
2024-05-03T16:46:06+00:00
text-classification
transformers
{}
HossamEL-Dein/distilbert-base-uncased-finetuned-emotion
null
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-03T16:46:42+00:00
null
transformers
# Uploaded model - **Developed by:** xsa-dev - **License:** apache-2.0 - **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
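A hedged reload sketch in the style of the Unsloth README; the sequence length below is a placeholder, and whether this repo holds a merged model or only a LoRA adapter is not stated on the card:

```python
# Sketch: reload the checkpoint the Unsloth way for 4-bit inference.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="xsa-dev/hugs_llama3_technique_ft_lora",
    max_seq_length=2048,   # assumption: not stated on the card
    load_in_4bit=True,     # matches the bnb-4bit base model
)
FastLanguageModel.for_inference(model)  # switch to Unsloth's fast inference mode
```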
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
xsa-dev/hugs_llama3_technique_ft_lora
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-03T16:47:09+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
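This repo's tags mark it as a conversational Llama text-generation checkpoint, so a hedged chat sketch fits here; it assumes the tokenizer ships a chat template (the card does not say):

```python
# Sketch: chat-style generation via the tokenizer's chat template, if defined.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("arthrod/jaera")
model = AutoModelForCausalLM.from_pretrained("arthrod/jaera", device_map="auto")

messages = [{"role": "user", "content": "Summarize what a model card is."}]
inputs = tok.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
out = model.generate(inputs.to(model.device), max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))
```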
{"library_name": "transformers", "tags": []}
arthrod/jaera
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-03T16:47:14+00:00
text-generation
transformers
{}
PB7-DUT-2023/finetuned_Bloomz_1b1_v4
null
[ "transformers", "pytorch", "bloom", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-03T16:47:18+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # TweetRobertaNewDataset This model is a fine-tuned version of [AndreiUrsu/TweetRoberta_5epochs](https://huggingface.co/AndreiUrsu/TweetRoberta_5epochs) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0000 - Accuracy: 1.0 - F1: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---:| | 0.0 | 1.0 | 1000 | 0.0000 | 1.0 | 1.0 | | 0.0 | 2.0 | 2000 | 0.0000 | 1.0 | 1.0 | | 0.0 | 3.0 | 3000 | 0.0000 | 1.0 | 1.0 | | 0.0 | 4.0 | 4000 | 0.0000 | 1.0 | 1.0 | | 0.0 | 5.0 | 5000 | 0.0000 | 1.0 | 1.0 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
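A minimal inference sketch for this checkpoint, assuming it loads as a standard RoBERTa sequence-classification model; the card does not document the label names, and note that the perfect 1.0 scores above come from an unknown evaluation set:

```python
# Hedged sketch: standard text-classification pipeline; labels come from the config.
from transformers import pipeline

clf = pipeline("text-classification", model="AndreiUrsu/TweetRobertaNewDataset")
print(clf("What a fantastic day!"))
```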
{"tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "base_model": "AndreiUrsu/TweetRoberta_5epochs", "model-index": [{"name": "TweetRobertaNewDataset", "results": []}]}
AndreiUrsu/TweetRobertaNewDataset
null
[ "transformers", "safetensors", "roberta", "text-classification", "generated_from_trainer", "base_model:AndreiUrsu/TweetRoberta_5epochs", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-03T16:47:22+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robust_llm_pythia-160m_niki-041a_imdb_random-token-1280_10-rounds_seed-1 This model is a fine-tuned version of [EleutherAI/pythia-160m](https://huggingface.co/EleutherAI/pythia-160m) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 1 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.1 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-160m", "model-index": [{"name": "robust_llm_pythia-160m_niki-041a_imdb_random-token-1280_10-rounds_seed-1", "results": []}]}
AlignmentResearch/robust_llm_pythia-160m_niki-041a_imdb_random-token-1280_10-rounds_seed-1
null
[ "transformers", "tensorboard", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-160m", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-03T16:48:25+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_mouse_1-seqsight_65536_512_47M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_mouse_1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_1) dataset. It achieves the following results on the evaluation set: - Loss: 0.2380 - F1 Score: 0.8971 - Accuracy: 0.8971 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.4424 | 0.47 | 200 | 0.3542 | 0.8397 | 0.8402 | | 0.3385 | 0.95 | 400 | 0.3072 | 0.8628 | 0.8629 | | 0.3132 | 1.42 | 600 | 0.2880 | 0.8750 | 0.8750 | | 0.3121 | 1.9 | 800 | 0.2883 | 0.8762 | 0.8762 | | 0.2973 | 2.37 | 1000 | 0.2811 | 0.8780 | 0.8780 | | 0.2919 | 2.84 | 1200 | 0.2791 | 0.8803 | 0.8804 | | 0.2792 | 3.32 | 1400 | 0.2647 | 0.8834 | 0.8835 | | 0.2846 | 3.79 | 1600 | 0.2777 | 0.8788 | 0.8789 | | 0.2762 | 4.27 | 1800 | 0.2560 | 0.8894 | 0.8895 | | 0.2699 | 4.74 | 2000 | 0.2609 | 0.8870 | 0.8870 | | 0.2666 | 5.21 | 2200 | 0.2545 | 0.8863 | 0.8866 | | 0.2605 | 5.69 | 2400 | 0.2619 | 0.8861 | 0.8861 | | 0.2555 | 6.16 | 2600 | 0.2478 | 0.8929 | 0.8931 | | 0.2557 | 6.64 | 2800 | 0.2490 | 0.8922 | 0.8922 | | 0.2545 | 7.11 | 3000 | 0.2513 | 0.8930 | 0.8930 | | 0.2466 | 7.58 | 3200 | 0.2533 | 0.8932 | 0.8933 | | 0.2482 | 8.06 | 3400 | 0.2414 | 0.8944 | 0.8944 | | 0.2422 | 8.53 | 3600 | 0.2517 | 0.8940 | 0.8940 | | 0.2502 | 9.0 | 3800 | 0.2511 | 0.8932 | 0.8933 | | 0.2417 | 9.48 | 4000 | 0.2431 | 0.8978 | 0.8979 | | 0.2428 | 9.95 | 4200 | 0.2401 | 0.8966 | 0.8967 | | 0.2404 | 10.43 | 4400 | 0.2427 | 0.8953 | 0.8953 | | 0.2394 | 10.9 | 4600 | 0.2418 | 0.8949 | 0.8949 | | 0.2349 | 11.37 | 4800 | 0.2500 | 0.8916 | 0.8916 | | 0.2407 | 11.85 | 5000 | 0.2409 | 0.8958 | 0.8958 | | 0.2334 | 12.32 | 5200 | 0.2439 | 0.8921 | 0.8921 | | 0.2364 | 12.8 | 5400 | 0.2496 | 0.8947 | 0.8947 | | 0.237 | 13.27 | 5600 | 0.2399 | 0.8947 | 0.8947 | | 0.2288 | 13.74 | 5800 | 0.2400 | 0.8996 | 0.8996 | | 0.2319 | 14.22 | 6000 | 0.2448 | 0.8934 | 0.8934 | | 0.2288 | 14.69 | 6200 | 0.2422 | 0.8989 | 0.8989 | | 0.2313 | 15.17 | 6400 | 0.2411 | 0.8967 | 0.8967 | | 0.2331 | 15.64 | 6600 | 0.2416 | 0.8970 | 0.8970 | | 0.2244 | 16.11 | 6800 | 0.2385 | 0.8981 | 0.8981 | | 0.2233 | 16.59 | 7000 | 0.2361 | 0.8998 | 0.8998 | | 0.2281 | 17.06 | 7200 | 0.2381 | 0.8978 | 0.8979 | | 0.2247 | 17.54 | 7400 | 0.2415 | 0.8947 | 0.8947 | | 0.2268 | 18.01 | 7600 | 0.2353 | 0.8992 | 0.8992 | | 0.2208 | 18.48 | 7800 | 0.2409 | 0.8981 | 0.8981 | | 0.2294 | 18.96 | 8000 | 0.2351 | 0.8996 | 0.8996 | | 0.2237 | 19.43 | 8200 | 0.2390 | 0.8992 | 0.8992 | | 0.2243 | 19.91 | 8400 | 0.2363 | 0.8982 | 0.8983 | | 0.2241 | 20.38 | 8600 | 0.2374 | 0.8993 | 0.8993 | | 0.2202 | 20.85 | 8800 | 0.2361 | 0.8990 | 0.8990 | | 0.2218 | 21.33 | 9000 | 0.2373 | 0.8992 | 0.8992 | | 0.2197 | 21.8 | 9200 | 0.2394 | 0.8992 | 0.8992 | | 0.2263 | 22.27 | 9400 | 0.2361 | 0.8989 | 0.8989 | | 0.2152 | 22.75 | 9600 | 0.2377 | 0.8990 | 0.8990 | | 0.2202 | 23.22 | 9800 | 0.2377 | 0.8990 | 0.8990 | | 0.2206 | 23.7 | 10000 | 0.2374 | 0.8999 | 0.8999 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
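The F1 Score and Accuracy columns in these cards can be computed with the `evaluate` library along these lines; whether the runs used binary, macro, or weighted F1 averaging is not documented, so the averaging choice below is an assumption:

```python
# Sketch: the two metrics these cards report, on placeholder predictions.
import evaluate

f1 = evaluate.load("f1")
accuracy = evaluate.load("accuracy")
preds, refs = [0, 1, 1, 0], [0, 1, 0, 0]  # placeholder values
print(f1.compute(predictions=preds, references=refs, average="macro"))  # averaging is a guess
print(accuracy.compute(predictions=preds, references=refs))
```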
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_mouse_1-seqsight_65536_512_47M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_1-seqsight_65536_512_47M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_65536_512_47M", "region:us" ]
null
2024-05-03T16:48:27+00:00
null
null
{"license": "apache-2.0"}
jooneez/marklogic-connect
null
[ "license:apache-2.0", "region:us" ]
null
2024-05-03T16:48:51+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_mouse_4-seqsight_65536_512_47M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_mouse_4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_4) dataset. It achieves the following results on the evaluation set: - Loss: 0.5948 - F1 Score: 0.6707 - Accuracy: 0.6707 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.6521 | 1.69 | 200 | 0.6311 | 0.6340 | 0.6383 | | 0.627 | 3.39 | 400 | 0.6197 | 0.6487 | 0.6500 | | 0.6163 | 5.08 | 600 | 0.6114 | 0.6666 | 0.6665 | | 0.6104 | 6.78 | 800 | 0.6103 | 0.6676 | 0.6676 | | 0.6062 | 8.47 | 1000 | 0.6073 | 0.6693 | 0.6691 | | 0.6025 | 10.17 | 1200 | 0.6068 | 0.6739 | 0.6739 | | 0.5986 | 11.86 | 1400 | 0.6013 | 0.6743 | 0.6745 | | 0.5957 | 13.56 | 1600 | 0.6004 | 0.6776 | 0.6776 | | 0.5964 | 15.25 | 1800 | 0.5962 | 0.6822 | 0.6824 | | 0.5907 | 16.95 | 2000 | 0.5946 | 0.6853 | 0.6856 | | 0.586 | 18.64 | 2200 | 0.5967 | 0.6780 | 0.6782 | | 0.5872 | 20.34 | 2400 | 0.5974 | 0.6772 | 0.6803 | | 0.5874 | 22.03 | 2600 | 0.5923 | 0.6867 | 0.6867 | | 0.5837 | 23.73 | 2800 | 0.5887 | 0.6937 | 0.6936 | | 0.584 | 25.42 | 3000 | 0.5901 | 0.6885 | 0.6899 | | 0.5828 | 27.12 | 3200 | 0.5944 | 0.6771 | 0.6787 | | 0.5809 | 28.81 | 3400 | 0.5882 | 0.6916 | 0.6914 | | 0.5754 | 30.51 | 3600 | 0.5876 | 0.6937 | 0.6936 | | 0.5791 | 32.2 | 3800 | 0.5852 | 0.6979 | 0.6978 | | 0.5786 | 33.9 | 4000 | 0.5855 | 0.6899 | 0.6899 | | 0.5769 | 35.59 | 4200 | 0.5867 | 0.6903 | 0.6904 | | 0.5735 | 37.29 | 4400 | 0.5839 | 0.6935 | 0.6936 | | 0.5771 | 38.98 | 4600 | 0.5826 | 0.6943 | 0.6946 | | 0.5735 | 40.68 | 4800 | 0.5815 | 0.6977 | 0.6978 | | 0.5709 | 42.37 | 5000 | 0.5823 | 0.6947 | 0.6946 | | 0.5736 | 44.07 | 5200 | 0.5811 | 0.6955 | 0.6957 | | 0.5703 | 45.76 | 5400 | 0.5821 | 0.6947 | 0.6946 | | 0.5711 | 47.46 | 5600 | 0.5819 | 0.6945 | 0.6946 | | 0.5716 | 49.15 | 5800 | 0.5847 | 0.6890 | 0.6899 | | 0.5688 | 50.85 | 6000 | 0.5803 | 0.7000 | 0.6999 | | 0.5665 | 52.54 | 6200 | 0.5811 | 0.6990 | 0.6989 | | 0.5651 | 54.24 | 6400 | 0.5798 | 0.6957 | 0.6957 | | 0.571 | 55.93 | 6600 | 0.5786 | 0.6968 | 0.6968 | | 0.5676 | 57.63 | 6800 | 0.5794 | 0.6956 | 0.6962 | | 0.5645 | 59.32 | 7000 | 0.5808 | 0.6942 | 0.6941 | | 0.566 | 61.02 | 7200 | 0.5794 | 0.6917 | 0.6925 | | 0.5642 | 62.71 | 7400 | 0.5783 | 0.6971 | 0.6973 | | 0.5663 | 64.41 | 7600 | 0.5778 | 0.6978 | 0.6978 | | 0.5662 | 66.1 | 7800 | 0.5793 | 0.6942 | 0.6941 | | 0.5628 | 67.8 | 8000 | 0.5785 | 0.6971 | 0.6973 | | 0.5656 | 69.49 | 8200 | 0.5782 | 0.6979 | 0.6978 | | 0.5631 | 71.19 | 8400 | 0.5776 | 0.6957 | 0.6957 | | 0.5639 | 72.88 | 8600 | 0.5774 | 0.6987 | 0.6989 | | 0.5609 | 74.58 | 8800 | 0.5779 | 0.6958 | 0.6957 | | 0.5636 | 76.27 | 9000 | 0.5780 | 0.6963 | 0.6962 | | 0.563 | 77.97 | 9200 | 0.5775 | 0.6974 | 0.6973 | | 0.5616 | 79.66 | 9400 | 0.5776 | 0.6976 | 0.6978 | | 0.5619 | 81.36 | 9600 | 0.5775 | 0.6962 | 0.6962 | | 0.5652 | 83.05 | 9800 | 0.5774 | 0.6972 | 0.6973 | | 0.5595 | 84.75 | 10000 | 0.5774 | 0.6989 | 0.6989 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_mouse_4-seqsight_65536_512_47M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_4-seqsight_65536_512_47M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_65536_512_47M", "region:us" ]
null
2024-05-03T16:50:33+00:00
text-generation
transformers
# Uploaded model - **Developed by:** shkna1368 - **License:** apache-2.0 - **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
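A hedged sketch of the Unsloth + TRL training loop this card alludes to; the dataset, LoRA target modules and rank, and step count below are placeholders, not values from the card:

```python
# Sketch only: mirrors the Unsloth + TRL supervised fine-tuning recipe.
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit", max_seq_length=2048, load_in_4bit=True
)
model = FastLanguageModel.get_peft_model(
    model, r=16, target_modules=["q_proj", "k_proj", "v_proj", "o_proj"]  # illustrative
)
trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=load_dataset("imdb", split="train"),  # placeholder corpus with a "text" column
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(output_dir="outputs", max_steps=60, per_device_train_batch_size=2),
)
trainer.train()
```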
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit", "pipeline_tag": "text-generation"}
shkna1368/kurdish_poetry_v3_model
null
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "4-bit", "region:us" ]
null
2024-05-03T16:50:37+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_mouse_4-seqsight_65536_512_47M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_mouse_4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_4) dataset. It achieves the following results on the evaluation set: - Loss: 0.5997 - F1 Score: 0.6760 - Accuracy: 0.6760 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.6464 | 1.69 | 200 | 0.6291 | 0.6383 | 0.6431 | | 0.6205 | 3.39 | 400 | 0.6127 | 0.6541 | 0.6553 | | 0.6052 | 5.08 | 600 | 0.6008 | 0.6660 | 0.6660 | | 0.5962 | 6.78 | 800 | 0.5952 | 0.6850 | 0.6851 | | 0.589 | 8.47 | 1000 | 0.5906 | 0.6884 | 0.6883 | | 0.5841 | 10.17 | 1200 | 0.5929 | 0.6830 | 0.6835 | | 0.5809 | 11.86 | 1400 | 0.5859 | 0.6899 | 0.6899 | | 0.5737 | 13.56 | 1600 | 0.5868 | 0.6973 | 0.6973 | | 0.5746 | 15.25 | 1800 | 0.5881 | 0.6834 | 0.6856 | | 0.5679 | 16.95 | 2000 | 0.5914 | 0.6871 | 0.6883 | | 0.5629 | 18.64 | 2200 | 0.5909 | 0.6867 | 0.6872 | | 0.5626 | 20.34 | 2400 | 0.5850 | 0.6899 | 0.6904 | | 0.5611 | 22.03 | 2600 | 0.6049 | 0.6727 | 0.6771 | | 0.5549 | 23.73 | 2800 | 0.5837 | 0.6957 | 0.6957 | | 0.5551 | 25.42 | 3000 | 0.5832 | 0.6980 | 0.6989 | | 0.5532 | 27.12 | 3200 | 0.5874 | 0.6884 | 0.6893 | | 0.55 | 28.81 | 3400 | 0.5862 | 0.6948 | 0.6946 | | 0.544 | 30.51 | 3600 | 0.5865 | 0.7006 | 0.7005 | | 0.5452 | 32.2 | 3800 | 0.5857 | 0.6985 | 0.6984 | | 0.5445 | 33.9 | 4000 | 0.5852 | 0.6965 | 0.6968 | | 0.5391 | 35.59 | 4200 | 0.5939 | 0.6867 | 0.6872 | | 0.5392 | 37.29 | 4400 | 0.5879 | 0.7016 | 0.7015 | | 0.5392 | 38.98 | 4600 | 0.5866 | 0.7011 | 0.7010 | | 0.5349 | 40.68 | 4800 | 0.5890 | 0.6990 | 0.6989 | | 0.5301 | 42.37 | 5000 | 0.5922 | 0.6913 | 0.6914 | | 0.5352 | 44.07 | 5200 | 0.5859 | 0.7011 | 0.7010 | | 0.5268 | 45.76 | 5400 | 0.5905 | 0.6947 | 0.6946 | | 0.5284 | 47.46 | 5600 | 0.5919 | 0.7042 | 0.7042 | | 0.5294 | 49.15 | 5800 | 0.5930 | 0.6938 | 0.6941 | | 0.5242 | 50.85 | 6000 | 0.5896 | 0.6899 | 0.6899 | | 0.5227 | 52.54 | 6200 | 0.5891 | 0.6995 | 0.6994 | | 0.519 | 54.24 | 6400 | 0.5922 | 0.6995 | 0.6994 | | 0.5246 | 55.93 | 6600 | 0.5936 | 0.6934 | 0.6936 | | 0.5201 | 57.63 | 6800 | 0.5891 | 0.6989 | 0.6989 | | 0.5165 | 59.32 | 7000 | 0.5952 | 0.6956 | 0.6957 | | 0.5146 | 61.02 | 7200 | 0.5919 | 0.6985 | 0.6984 | | 0.5153 | 62.71 | 7400 | 0.5909 | 0.6995 | 0.6994 | | 0.5157 | 64.41 | 7600 | 0.5900 | 0.6995 | 0.6994 | | 0.5143 | 66.1 | 7800 | 0.5983 | 0.7005 | 0.7005 | | 0.5122 | 67.8 | 8000 | 0.5958 | 0.6994 | 0.6994 | | 0.5115 | 69.49 | 8200 | 0.5938 | 0.7016 | 0.7015 | | 0.5125 | 71.19 | 8400 | 0.5931 | 0.6948 | 0.6946 | | 0.5132 | 72.88 | 8600 | 0.5940 | 0.6995 | 0.6994 | | 0.5102 | 74.58 | 8800 | 0.5946 | 0.6942 | 0.6941 | | 0.5131 | 76.27 | 9000 | 0.5943 | 0.6963 | 0.6962 | | 0.5078 | 77.97 | 9200 | 0.5952 | 0.6942 | 0.6941 | | 0.5101 | 79.66 | 9400 | 0.5941 | 0.6969 | 0.6968 | | 0.5088 | 81.36 | 9600 | 0.5944 | 0.6942 | 0.6941 | | 0.5098 | 83.05 | 9800 | 0.5942 | 0.6937 | 0.6936 | | 0.505 | 84.75 | 10000 | 0.5946 | 0.6937 | 0.6936 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_mouse_4-seqsight_65536_512_47M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_4-seqsight_65536_512_47M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_65536_512_47M", "region:us" ]
null
2024-05-03T16:50:56+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_mouse_4-seqsight_65536_512_47M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_mouse_4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_4) dataset. It achieves the following results on the evaluation set: - Loss: 0.5934 - F1 Score: 0.6669 - Accuracy: 0.6670 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.6385 | 1.69 | 200 | 0.6138 | 0.6592 | 0.6591 | | 0.6104 | 3.39 | 400 | 0.6019 | 0.6744 | 0.6745 | | 0.5929 | 5.08 | 600 | 0.5917 | 0.6831 | 0.6830 | | 0.5813 | 6.78 | 800 | 0.5865 | 0.6924 | 0.6930 | | 0.5717 | 8.47 | 1000 | 0.5868 | 0.6908 | 0.6914 | | 0.5641 | 10.17 | 1200 | 0.5854 | 0.6930 | 0.6936 | | 0.5553 | 11.86 | 1400 | 0.5774 | 0.7075 | 0.7074 | | 0.5443 | 13.56 | 1600 | 0.5859 | 0.6988 | 0.6989 | | 0.5406 | 15.25 | 1800 | 0.5799 | 0.7015 | 0.7021 | | 0.5298 | 16.95 | 2000 | 0.5854 | 0.6908 | 0.6920 | | 0.5214 | 18.64 | 2200 | 0.5981 | 0.6859 | 0.6867 | | 0.5138 | 20.34 | 2400 | 0.5922 | 0.6974 | 0.6973 | | 0.5109 | 22.03 | 2600 | 0.6084 | 0.6798 | 0.6824 | | 0.4997 | 23.73 | 2800 | 0.5918 | 0.6984 | 0.6984 | | 0.4936 | 25.42 | 3000 | 0.6058 | 0.6896 | 0.6941 | | 0.4897 | 27.12 | 3200 | 0.6117 | 0.6988 | 0.6989 | | 0.4819 | 28.81 | 3400 | 0.6112 | 0.7056 | 0.7058 | | 0.4692 | 30.51 | 3600 | 0.6153 | 0.7043 | 0.7042 | | 0.4657 | 32.2 | 3800 | 0.6426 | 0.6894 | 0.6893 | | 0.4632 | 33.9 | 4000 | 0.6184 | 0.6939 | 0.6941 | | 0.4527 | 35.59 | 4200 | 0.6472 | 0.6833 | 0.6840 | | 0.4502 | 37.29 | 4400 | 0.6197 | 0.7027 | 0.7026 | | 0.448 | 38.98 | 4600 | 0.6403 | 0.6931 | 0.6930 | | 0.4379 | 40.68 | 4800 | 0.6416 | 0.6958 | 0.6957 | | 0.4331 | 42.37 | 5000 | 0.6411 | 0.6887 | 0.6888 | | 0.4327 | 44.07 | 5200 | 0.6587 | 0.6921 | 0.6920 | | 0.4206 | 45.76 | 5400 | 0.6642 | 0.6921 | 0.6920 | | 0.4175 | 47.46 | 5600 | 0.6771 | 0.6971 | 0.6973 | | 0.4181 | 49.15 | 5800 | 0.6664 | 0.6952 | 0.6952 | | 0.4105 | 50.85 | 6000 | 0.6591 | 0.6831 | 0.6830 | | 0.4053 | 52.54 | 6200 | 0.6680 | 0.6921 | 0.6920 | | 0.4009 | 54.24 | 6400 | 0.6803 | 0.6836 | 0.6835 | | 0.3975 | 55.93 | 6600 | 0.6966 | 0.6871 | 0.6872 | | 0.3969 | 57.63 | 6800 | 0.6871 | 0.6979 | 0.6978 | | 0.3936 | 59.32 | 7000 | 0.7074 | 0.6852 | 0.6851 | | 0.3866 | 61.02 | 7200 | 0.7011 | 0.6894 | 0.6893 | | 0.3839 | 62.71 | 7400 | 0.6931 | 0.6868 | 0.6867 | | 0.3815 | 64.41 | 7600 | 0.6938 | 0.6878 | 0.6877 | | 0.3823 | 66.1 | 7800 | 0.7002 | 0.6857 | 0.6856 | | 0.3772 | 67.8 | 8000 | 0.7163 | 0.6889 | 0.6888 | | 0.3756 | 69.49 | 8200 | 0.7114 | 0.6915 | 0.6914 | | 0.3761 | 71.19 | 8400 | 0.7144 | 0.6909 | 0.6909 | | 0.3735 | 72.88 | 8600 | 0.7128 | 0.6899 | 0.6899 | | 0.3722 | 74.58 | 8800 | 0.7161 | 0.6910 | 0.6909 | | 0.3731 | 76.27 | 9000 | 0.7225 | 0.6883 | 0.6883 | | 0.3656 | 77.97 | 9200 | 0.7261 | 0.6889 | 0.6888 | | 0.3672 | 79.66 | 9400 | 0.7293 | 0.6921 | 0.6920 | | 0.3749 | 81.36 | 9600 | 0.7191 | 0.6877 | 0.6877 | | 0.3646 | 83.05 | 9800 | 0.7229 | 0.6910 | 0.6909 | | 0.3638 | 84.75 | 10000 | 0.7225 | 0.6905 | 0.6904 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
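The PEFT cards in this dump list only an adapter repo and its base model. A minimal loading sketch for this row, assuming the adapter wraps a two-label sequence-classification head (the cards report binary-looking F1/accuracy but never state the task type, so `num_labels=2` is an assumption):

```python
# Minimal sketch: attach the LoRA adapter from this card to its base model.
# Assumptions: binary sequence classification with a standard AutoModel head;
# neither is stated explicitly in the card.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel

base_id = "mahdibaghbanzadeh/seqsight_65536_512_47M"
adapter_id = "mahdibaghbanzadeh/GUE_mouse_4-seqsight_65536_512_47M-L32_f"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)
model = PeftModel.from_pretrained(base, adapter_id)  # loads adapter weights on top
model.eval()
```

The same pattern should apply to every `mahdibaghbanzadeh/GUE_*` adapter row below, with only `adapter_id` changed.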
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_mouse_4-seqsight_65536_512_47M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_4-seqsight_65536_512_47M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_65536_512_47M", "region:us" ]
null
2024-05-03T16:51:16+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_mouse_3-seqsight_65536_512_47M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_mouse_3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_3) dataset. It achieves the following results on the evaluation set: - Loss: 0.4966 - F1 Score: 0.7905 - Accuracy: 0.7908 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.6153 | 13.33 | 200 | 0.5197 | 0.7270 | 0.7280 | | 0.5296 | 26.67 | 400 | 0.4688 | 0.7612 | 0.7615 | | 0.4753 | 40.0 | 600 | 0.4453 | 0.7862 | 0.7866 | | 0.4354 | 53.33 | 800 | 0.4455 | 0.7824 | 0.7824 | | 0.4069 | 66.67 | 1000 | 0.4548 | 0.7822 | 0.7824 | | 0.3871 | 80.0 | 1200 | 0.4627 | 0.7824 | 0.7824 | | 0.3672 | 93.33 | 1400 | 0.4847 | 0.7822 | 0.7824 | | 0.3483 | 106.67 | 1600 | 0.4773 | 0.7822 | 0.7824 | | 0.3364 | 120.0 | 1800 | 0.4735 | 0.7899 | 0.7908 | | 0.3245 | 133.33 | 2000 | 0.4826 | 0.7699 | 0.7699 | | 0.3147 | 146.67 | 2200 | 0.4741 | 0.7772 | 0.7782 | | 0.3043 | 160.0 | 2400 | 0.4895 | 0.7944 | 0.7950 | | 0.2964 | 173.33 | 2600 | 0.4888 | 0.7907 | 0.7908 | | 0.2902 | 186.67 | 2800 | 0.4801 | 0.8115 | 0.8117 | | 0.2778 | 200.0 | 3000 | 0.4895 | 0.7991 | 0.7992 | | 0.2702 | 213.33 | 3200 | 0.4908 | 0.7908 | 0.7908 | | 0.2638 | 226.67 | 3400 | 0.5043 | 0.8114 | 0.8117 | | 0.2601 | 240.0 | 3600 | 0.5133 | 0.8117 | 0.8117 | | 0.2565 | 253.33 | 3800 | 0.5242 | 0.7865 | 0.7866 | | 0.2513 | 266.67 | 4000 | 0.5249 | 0.8033 | 0.8033 | | 0.2463 | 280.0 | 4200 | 0.5159 | 0.8033 | 0.8033 | | 0.2422 | 293.33 | 4400 | 0.5105 | 0.8159 | 0.8159 | | 0.2422 | 306.67 | 4600 | 0.5276 | 0.8033 | 0.8033 | | 0.2378 | 320.0 | 4800 | 0.5143 | 0.8201 | 0.8201 | | 0.2339 | 333.33 | 5000 | 0.5301 | 0.7991 | 0.7992 | | 0.2291 | 346.67 | 5200 | 0.5187 | 0.8159 | 0.8159 | | 0.2262 | 360.0 | 5400 | 0.5386 | 0.8033 | 0.8033 | | 0.2286 | 373.33 | 5600 | 0.5374 | 0.8033 | 0.8033 | | 0.2224 | 386.67 | 5800 | 0.5370 | 0.8032 | 0.8033 | | 0.2187 | 400.0 | 6000 | 0.5439 | 0.7991 | 0.7992 | | 0.2199 | 413.33 | 6200 | 0.5510 | 0.7992 | 0.7992 | | 0.215 | 426.67 | 6400 | 0.5624 | 0.8117 | 0.8117 | | 0.2085 | 440.0 | 6600 | 0.5570 | 0.8033 | 0.8033 | | 0.2148 | 453.33 | 6800 | 0.5593 | 0.7948 | 0.7950 | | 0.2102 | 466.67 | 7000 | 0.5556 | 0.8033 | 0.8033 | | 0.2079 | 480.0 | 7200 | 0.5660 | 0.8159 | 0.8159 | | 0.2083 | 493.33 | 7400 | 0.5608 | 0.7991 | 0.7992 | | 0.2029 | 506.67 | 7600 | 0.5672 | 0.8033 | 0.8033 | | 0.2086 | 520.0 | 7800 | 0.5573 | 0.8033 | 0.8033 | | 0.2045 | 533.33 | 8000 | 0.5656 | 0.7992 | 0.7992 | | 0.2064 | 546.67 | 8200 | 0.5623 | 0.7950 | 0.7950 | | 0.2037 | 560.0 | 8400 | 0.5711 | 0.8033 | 0.8033 | | 0.2058 | 573.33 | 8600 | 0.5637 | 0.8033 | 0.8033 | | 0.199 | 586.67 | 8800 | 0.5728 | 0.7992 | 0.7992 | | 0.1986 | 600.0 | 9000 | 0.5716 | 0.8033 | 0.8033 | | 0.199 | 613.33 | 9200 | 0.5719 | 0.8075 | 0.8075 | | 0.2011 | 626.67 | 9400 | 0.5749 | 0.8033 | 0.8033 | | 0.1941 | 640.0 | 9600 | 0.5781 | 0.8033 | 0.8033 | | 0.196 | 653.33 | 9800 | 0.5787 | 0.8033 | 0.8033 | | 0.1975 | 666.67 | 10000 | 0.5777 | 0.8033 | 0.8033 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
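The hyperparameter list these cards share maps directly onto `TrainingArguments`. A sketch of that mapping; the output directory is illustrative, and `eval_steps=200` is inferred from the step column of the results table rather than stated outright:

```python
# Sketch: the card's hyperparameters expressed as TrainingArguments.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="GUE_mouse_3-seqsight_65536_512_47M-L1_f",  # illustrative name
    learning_rate=5e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    max_steps=10_000,
    evaluation_strategy="steps",
    eval_steps=200,   # matches the 200-step cadence visible in the table
    logging_steps=200,
)
```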
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_mouse_3-seqsight_65536_512_47M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_3-seqsight_65536_512_47M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_65536_512_47M", "region:us" ]
null
2024-05-03T16:51:22+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
golf2248/wf4ax6j
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-03T16:52:05+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_mouse_3-seqsight_65536_512_47M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_mouse_3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_3) dataset. It achieves the following results on the evaluation set: - Loss: 0.7229 - F1 Score: 0.7936 - Accuracy: 0.7950 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.5751 | 13.33 | 200 | 0.4346 | 0.7943 | 0.7950 | | 0.4162 | 26.67 | 400 | 0.4310 | 0.8108 | 0.8117 | | 0.3344 | 40.0 | 600 | 0.4316 | 0.8201 | 0.8201 | | 0.2898 | 53.33 | 800 | 0.4517 | 0.7991 | 0.7992 | | 0.2598 | 66.67 | 1000 | 0.4565 | 0.8284 | 0.8285 | | 0.237 | 80.0 | 1200 | 0.4747 | 0.8324 | 0.8326 | | 0.2109 | 93.33 | 1400 | 0.5427 | 0.7943 | 0.7950 | | 0.1887 | 106.67 | 1600 | 0.5072 | 0.8493 | 0.8494 | | 0.1776 | 120.0 | 1800 | 0.5411 | 0.8368 | 0.8368 | | 0.1623 | 133.33 | 2000 | 0.5828 | 0.8366 | 0.8368 | | 0.1505 | 146.67 | 2200 | 0.5813 | 0.8532 | 0.8536 | | 0.137 | 160.0 | 2400 | 0.6017 | 0.8366 | 0.8368 | | 0.13 | 173.33 | 2600 | 0.6122 | 0.8243 | 0.8243 | | 0.1221 | 186.67 | 2800 | 0.6114 | 0.8326 | 0.8326 | | 0.1103 | 200.0 | 3000 | 0.6693 | 0.8117 | 0.8117 | | 0.1076 | 213.33 | 3200 | 0.6581 | 0.8285 | 0.8285 | | 0.102 | 226.67 | 3400 | 0.6523 | 0.8490 | 0.8494 | | 0.0938 | 240.0 | 3600 | 0.6944 | 0.8493 | 0.8494 | | 0.0883 | 253.33 | 3800 | 0.7203 | 0.8284 | 0.8285 | | 0.0868 | 266.67 | 4000 | 0.7252 | 0.8367 | 0.8368 | | 0.0828 | 280.0 | 4200 | 0.7367 | 0.8368 | 0.8368 | | 0.0758 | 293.33 | 4400 | 0.7482 | 0.8326 | 0.8326 | | 0.073 | 306.67 | 4600 | 0.7660 | 0.8532 | 0.8536 | | 0.0744 | 320.0 | 4800 | 0.7260 | 0.8452 | 0.8452 | | 0.0686 | 333.33 | 5000 | 0.8126 | 0.8030 | 0.8033 | | 0.0657 | 346.67 | 5200 | 0.8016 | 0.8033 | 0.8033 | | 0.0612 | 360.0 | 5400 | 0.7908 | 0.8450 | 0.8452 | | 0.0595 | 373.33 | 5600 | 0.7927 | 0.8325 | 0.8326 | | 0.0602 | 386.67 | 5800 | 0.8100 | 0.8405 | 0.8410 | | 0.0534 | 400.0 | 6000 | 0.8413 | 0.8284 | 0.8285 | | 0.0572 | 413.33 | 6200 | 0.8071 | 0.8201 | 0.8201 | | 0.054 | 426.67 | 6400 | 0.8397 | 0.8368 | 0.8368 | | 0.051 | 440.0 | 6600 | 0.8219 | 0.8491 | 0.8494 | | 0.0486 | 453.33 | 6800 | 0.8881 | 0.8159 | 0.8159 | | 0.0468 | 466.67 | 7000 | 0.8793 | 0.8283 | 0.8285 | | 0.0486 | 480.0 | 7200 | 0.8410 | 0.8365 | 0.8368 | | 0.0448 | 493.33 | 7400 | 0.8617 | 0.8282 | 0.8285 | | 0.0439 | 506.67 | 7600 | 0.8704 | 0.8284 | 0.8285 | | 0.0465 | 520.0 | 7800 | 0.8496 | 0.8200 | 0.8201 | | 0.0459 | 533.33 | 8000 | 0.8654 | 0.8159 | 0.8159 | | 0.043 | 546.67 | 8200 | 0.8749 | 0.8325 | 0.8326 | | 0.0427 | 560.0 | 8400 | 0.8373 | 0.8285 | 0.8285 | | 0.0411 | 573.33 | 8600 | 0.8710 | 0.8366 | 0.8368 | | 0.0417 | 586.67 | 8800 | 0.8645 | 0.8200 | 0.8201 | | 0.0415 | 600.0 | 9000 | 0.8574 | 0.8282 | 0.8285 | | 0.0418 | 613.33 | 9200 | 0.8599 | 0.8241 | 0.8243 | | 0.0388 | 626.67 | 9400 | 0.8878 | 0.8200 | 0.8201 | | 0.0387 | 640.0 | 9600 | 0.8772 | 0.8325 | 0.8326 | | 0.0411 | 653.33 | 9800 | 0.8791 | 0.8283 | 0.8285 | | 0.0372 | 666.67 | 10000 | 0.8800 | 0.8241 | 0.8243 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
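A sketch of the metric computation that would produce the F1 Score and Accuracy columns in these tables. The averaging mode is an assumption: the cards never say, but macro F1 is consistent with the near-identical F1 and accuracy values on these roughly balanced binary tasks:

```python
# Sketch: metrics matching the table columns; "macro" averaging is assumed.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "f1": f1_score(labels, preds, average="macro"),
        "accuracy": accuracy_score(labels, preds),
    }
```

Passed as `Trainer(..., compute_metrics=compute_metrics)`, this yields one F1/accuracy pair per evaluation step, exactly the shape of the tables above.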
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_mouse_3-seqsight_65536_512_47M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_3-seqsight_65536_512_47M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_65536_512_47M", "region:us" ]
null
2024-05-03T16:52:15+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_mouse_3-seqsight_65536_512_47M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_mouse_3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_3) dataset. It achieves the following results on the evaluation set: - Loss: 1.0857 - F1 Score: 0.8073 - Accuracy: 0.8075 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.5234 | 13.33 | 200 | 0.3913 | 0.8242 | 0.8243 | | 0.3311 | 26.67 | 400 | 0.4285 | 0.8272 | 0.8285 | | 0.2514 | 40.0 | 600 | 0.4827 | 0.8033 | 0.8033 | | 0.1983 | 53.33 | 800 | 0.4903 | 0.8492 | 0.8494 | | 0.1525 | 66.67 | 1000 | 0.5545 | 0.8201 | 0.8201 | | 0.1297 | 80.0 | 1200 | 0.5400 | 0.8324 | 0.8326 | | 0.1067 | 93.33 | 1400 | 0.6817 | 0.7782 | 0.7782 | | 0.0876 | 106.67 | 1600 | 0.6357 | 0.8159 | 0.8159 | | 0.076 | 120.0 | 1800 | 0.6809 | 0.8200 | 0.8201 | | 0.0666 | 133.33 | 2000 | 0.7299 | 0.8283 | 0.8285 | | 0.0577 | 146.67 | 2200 | 0.6639 | 0.8452 | 0.8452 | | 0.0527 | 160.0 | 2400 | 0.7078 | 0.8326 | 0.8326 | | 0.044 | 173.33 | 2600 | 0.7728 | 0.8243 | 0.8243 | | 0.0459 | 186.67 | 2800 | 0.7674 | 0.8357 | 0.8368 | | 0.0383 | 200.0 | 3000 | 0.8229 | 0.8408 | 0.8410 | | 0.037 | 213.33 | 3200 | 0.7486 | 0.8619 | 0.8619 | | 0.031 | 226.67 | 3400 | 0.8314 | 0.8535 | 0.8536 | | 0.0292 | 240.0 | 3600 | 0.7943 | 0.8451 | 0.8452 | | 0.0234 | 253.33 | 3800 | 0.9168 | 0.8452 | 0.8452 | | 0.0245 | 266.67 | 4000 | 0.8986 | 0.8368 | 0.8368 | | 0.025 | 280.0 | 4200 | 0.9041 | 0.8326 | 0.8326 | | 0.0236 | 293.33 | 4400 | 0.8131 | 0.8494 | 0.8494 | | 0.0201 | 306.67 | 4600 | 0.9812 | 0.8367 | 0.8368 | | 0.0235 | 320.0 | 4800 | 0.9153 | 0.8452 | 0.8452 | | 0.019 | 333.33 | 5000 | 0.9622 | 0.8155 | 0.8159 | | 0.0181 | 346.67 | 5200 | 0.9807 | 0.8117 | 0.8117 | | 0.0176 | 360.0 | 5400 | 0.9316 | 0.8325 | 0.8326 | | 0.0172 | 373.33 | 5600 | 0.9852 | 0.8284 | 0.8285 | | 0.0157 | 386.67 | 5800 | 0.9615 | 0.8408 | 0.8410 | | 0.0158 | 400.0 | 6000 | 0.9269 | 0.8284 | 0.8285 | | 0.0141 | 413.33 | 6200 | 0.9634 | 0.8284 | 0.8285 | | 0.0159 | 426.67 | 6400 | 1.0444 | 0.8284 | 0.8285 | | 0.0113 | 440.0 | 6600 | 1.0204 | 0.8367 | 0.8368 | | 0.0138 | 453.33 | 6800 | 1.0301 | 0.8201 | 0.8201 | | 0.0132 | 466.67 | 7000 | 0.9787 | 0.8409 | 0.8410 | | 0.0114 | 480.0 | 7200 | 0.9992 | 0.8325 | 0.8326 | | 0.012 | 493.33 | 7400 | 1.0057 | 0.8451 | 0.8452 | | 0.011 | 506.67 | 7600 | 1.0578 | 0.8284 | 0.8285 | | 0.0115 | 520.0 | 7800 | 1.0444 | 0.8158 | 0.8159 | | 0.0105 | 533.33 | 8000 | 1.0361 | 0.8408 | 0.8410 | | 0.0105 | 546.67 | 8200 | 1.0373 | 0.8283 | 0.8285 | | 0.0097 | 560.0 | 8400 | 1.0294 | 0.8284 | 0.8285 | | 0.0086 | 573.33 | 8600 | 1.0487 | 0.8368 | 0.8368 | | 0.0094 | 586.67 | 8800 | 1.1034 | 0.8325 | 0.8326 | | 0.0088 | 600.0 | 9000 | 1.0760 | 0.8408 | 0.8410 | | 0.0099 | 613.33 | 9200 | 1.0482 | 0.8325 | 0.8326 | | 0.0085 | 626.67 | 9400 | 1.0606 | 0.8367 | 0.8368 | | 0.0084 | 640.0 | 9600 | 1.0605 | 0.8326 | 0.8326 | | 0.0085 | 653.33 | 9800 | 1.0749 | 0.8367 | 0.8368 | | 0.0074 | 666.67 | 10000 | 1.0751 | 0.8367 | 0.8368 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
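In this L32 run the validation loss roughly triples (0.39 → 1.09) while F1 plateaus around 0.83, so the final checkpoint is unlikely to be the best one. A hedged sketch of best-checkpoint selection with the Trainer; the patience value is an arbitrary illustration, not something the card specifies:

```python
# Sketch: keep the checkpoint with the lowest validation loss rather than
# the last one; a patience of 10 evals (2,000 steps here) is illustrative.
from transformers import TrainingArguments, EarlyStoppingCallback

args = TrainingArguments(
    output_dir="out",
    max_steps=10_000,
    evaluation_strategy="steps",
    eval_steps=200,
    save_strategy="steps",
    save_steps=200,                      # must match the eval cadence
    load_best_model_at_end=True,         # restore the best checkpoint after training
    metric_for_best_model="eval_loss",
    greater_is_better=False,
)
early_stop = EarlyStoppingCallback(early_stopping_patience=10)
# trainer = Trainer(model=model, args=args, ..., callbacks=[early_stop])
```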
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_mouse_3-seqsight_65536_512_47M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_3-seqsight_65536_512_47M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_65536_512_47M", "region:us" ]
null
2024-05-03T16:52:16+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_mouse_2-seqsight_65536_512_47M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_mouse_2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_2) dataset. It achieves the following results on the evaluation set: - Loss: 0.3541 - F1 Score: 0.8658 - Accuracy: 0.8659 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.4313 | 9.52 | 200 | 0.3500 | 0.8142 | 0.8171 | | 0.3241 | 19.05 | 400 | 0.3304 | 0.8337 | 0.8354 | | 0.2983 | 28.57 | 600 | 0.2979 | 0.8474 | 0.8476 | | 0.2808 | 38.1 | 800 | 0.2914 | 0.8656 | 0.8659 | | 0.2646 | 47.62 | 1000 | 0.2881 | 0.8658 | 0.8659 | | 0.256 | 57.14 | 1200 | 0.2981 | 0.8562 | 0.8567 | | 0.2392 | 66.67 | 1400 | 0.3045 | 0.8685 | 0.8689 | | 0.2375 | 76.19 | 1600 | 0.3021 | 0.8713 | 0.8720 | | 0.2255 | 85.71 | 1800 | 0.2869 | 0.8747 | 0.875 | | 0.2177 | 95.24 | 2000 | 0.2744 | 0.8689 | 0.8689 | | 0.2113 | 104.76 | 2200 | 0.2641 | 0.8780 | 0.8780 | | 0.2051 | 114.29 | 2400 | 0.2741 | 0.8811 | 0.8811 | | 0.2064 | 123.81 | 2600 | 0.2673 | 0.8841 | 0.8841 | | 0.1955 | 133.33 | 2800 | 0.2755 | 0.8779 | 0.8780 | | 0.1906 | 142.86 | 3000 | 0.2868 | 0.8717 | 0.8720 | | 0.1845 | 152.38 | 3200 | 0.2780 | 0.8749 | 0.875 | | 0.1824 | 161.9 | 3400 | 0.3034 | 0.8716 | 0.8720 | | 0.1783 | 171.43 | 3600 | 0.2952 | 0.8747 | 0.875 | | 0.1771 | 180.95 | 3800 | 0.2867 | 0.8719 | 0.8720 | | 0.1721 | 190.48 | 4000 | 0.2793 | 0.8780 | 0.8780 | | 0.1691 | 200.0 | 4200 | 0.3039 | 0.8746 | 0.875 | | 0.1671 | 209.52 | 4400 | 0.2854 | 0.8841 | 0.8841 | | 0.1618 | 219.05 | 4600 | 0.2955 | 0.8718 | 0.8720 | | 0.1632 | 228.57 | 4800 | 0.2898 | 0.8811 | 0.8811 | | 0.158 | 238.1 | 5000 | 0.3040 | 0.8748 | 0.875 | | 0.1574 | 247.62 | 5200 | 0.3039 | 0.8749 | 0.875 | | 0.158 | 257.14 | 5400 | 0.3062 | 0.8749 | 0.875 | | 0.1516 | 266.67 | 5600 | 0.3205 | 0.8809 | 0.8811 | | 0.1522 | 276.19 | 5800 | 0.3115 | 0.8748 | 0.875 | | 0.1493 | 285.71 | 6000 | 0.3113 | 0.8841 | 0.8841 | | 0.1447 | 295.24 | 6200 | 0.3163 | 0.8810 | 0.8811 | | 0.1432 | 304.76 | 6400 | 0.3184 | 0.8780 | 0.8780 | | 0.1455 | 314.29 | 6600 | 0.3097 | 0.8811 | 0.8811 | | 0.1432 | 323.81 | 6800 | 0.3162 | 0.8841 | 0.8841 | | 0.1436 | 333.33 | 7000 | 0.3118 | 0.8841 | 0.8841 | | 0.1408 | 342.86 | 7200 | 0.3115 | 0.8872 | 0.8872 | | 0.1448 | 352.38 | 7400 | 0.3119 | 0.8872 | 0.8872 | | 0.1407 | 361.9 | 7600 | 0.3106 | 0.8810 | 0.8811 | | 0.1393 | 371.43 | 7800 | 0.3156 | 0.8841 | 0.8841 | | 0.135 | 380.95 | 8000 | 0.3190 | 0.8811 | 0.8811 | | 0.1352 | 390.48 | 8200 | 0.3213 | 0.8780 | 0.8780 | | 0.1354 | 400.0 | 8400 | 0.3208 | 0.8872 | 0.8872 | | 0.1345 | 409.52 | 8600 | 0.3207 | 0.8872 | 0.8872 | | 0.1381 | 419.05 | 8800 | 0.3193 | 0.8810 | 0.8811 | | 0.1323 | 428.57 | 9000 | 0.3282 | 0.8841 | 0.8841 | | 0.1319 | 438.1 | 9200 | 0.3342 | 0.8780 | 0.8780 | | 0.1339 | 447.62 | 9400 | 0.3331 | 0.8810 | 0.8811 | | 0.132 | 457.14 | 9600 | 0.3261 | 0.8872 | 0.8872 | | 0.1377 | 466.67 | 9800 | 0.3275 | 0.8872 | 0.8872 | | 0.1341 | 476.19 | 10000 | 0.3258 | 0.8872 | 0.8872 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
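Once an adapter is attached (see the PEFT loading sketch earlier in this dump), scoring a sequence is a standard forward pass. The input below is a made-up placeholder, not a real GUE_mouse_2 example:

```python
# Sketch: classify one DNA window with a loaded `model`/`tokenizer`
# (both from the earlier PEFT loading sketch); the sequence is invented.
import torch

seq = "ATGCGTACGTTAGCATGCGTACGTTAGC"
inputs = tokenizer(seq, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print("predicted class:", logits.argmax(dim=-1).item())
```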
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_mouse_2-seqsight_65536_512_47M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_2-seqsight_65536_512_47M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_65536_512_47M", "region:us" ]
null
2024-05-03T16:52:36+00:00
text-generation
transformers
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) OpenCerebrum-1.0-7b-SFT - bnb 8bits - Model creator: https://huggingface.co/Locutusque/ - Original model: https://huggingface.co/Locutusque/OpenCerebrum-1.0-7b-SFT/ Original model description: --- language: - en license: apache-2.0 tags: - open-source - code - math - chemistry - biology - text-generation - question-answering datasets: - Open-Orca/SlimOrca - glaiveai/glaive-code-assistant - camel-ai/physics - camel-ai/math - camel-ai/chemistry - camel-ai/biology - WizardLM/WizardLM_evol_instruct_V2_196k - microsoft/orca-math-word-problems-200k - grimulkan/theory-of-mind - Vezora/Tested-22k-Python-Alpaca - m-a-p/Code-Feedback - Locutusque/arc-cot - jondurbin/airoboros-2.1 - WizardLM/WizardLM_evol_instruct_70k pipeline_tag: text-generation --- # OpenCerebrum-1.0-7B-SFT OpenCerebrum-1.0-7B-SFT is an open-source language model fine-tuned from the alpindale/Mistral-7B-v0.2-hf base model on a diverse dataset aimed at replicating the capabilities of AetherResearch's proprietary Cerebrum model. The model was fine-tuned on approximately 1.2 million examples across 14 datasets spanning coding, math, science, reasoning, and general instruction-following. The goal was to assemble public datasets that could help the model achieve strong performance on the benchmarks where Cerebrum excels. ## Model Details - **Base Model:** alpindale/Mistral-7B-v0.2-hf - **Parameters:** 7 billion - **Fine-Tuning Dataset Size:** ~1,200,000 examples - **Fine-Tuning Data:** Amalgamation of 14 public datasets - **Language:** English - **License:** Apache 2.0 ## Intended Use OpenCerebrum-1.0-7B-SFT is intended to be a powerful open-source model for coding, math, science, and general question-answering and text-generation tasks. Its diverse fine-tuning data aims to equip it with broad knowledge and reasoning capabilities. However, as an open-source replica trained on less data than the original Cerebrum, it may not match Cerebrum's full performance. Additionally, biases and limitations of the fine-tuning data may be reflected in the model's outputs. ## Limitations and Biases - The model may have biases and limitations inherited from its fine-tuning datasets. Thorough testing is needed to characterize these. - With 1.2 million training examples, the fine-tuning data is still limited compared to the proprietary Cerebrum data. - As a 7-billion-parameter model, it has computational and memory constraints compared to larger models. ## Training Details The model was fine-tuned on the 14 datasets listed in the Datasets section, totaling approximately 1.2 million examples. Default training hyperparameters were used. In the future, the fine-tuning dataset may be condensed to more closely match the 5,000-example dataset reputedly used for the original Cerebrum model.
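A loading sketch for this 8-bit export. It assumes a CUDA machine with `bitsandbytes` installed; the quantization arguments shown are standard transformers usage rather than instructions taken from the card:

```python
# Sketch: load the 8-bit bitsandbytes export of OpenCerebrum-1.0-7b-SFT.
# Requires the bitsandbytes package and a CUDA GPU; values are common defaults.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

repo = "RichardErkhov/Locutusque_-_OpenCerebrum-1.0-7b-SFT-8bits"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",  # place layers on the available GPU(s)
)
```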
{}
RichardErkhov/Locutusque_-_OpenCerebrum-1.0-7b-SFT-8bits
null
[ "transformers", "safetensors", "mistral", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "8-bit", "region:us" ]
null
2024-05-03T16:52:46+00:00
text-generation
peft
{}
sch-ai/front-title-all-norallmnormistral-7b-warm-Keisha
null
[ "peft", "tensorboard", "safetensors", "text-generation", "base_model:norallm/normistral-7b-warm", "region:us" ]
null
2024-05-03T16:53:15+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_mouse_2-seqsight_65536_512_47M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_mouse_2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_2) dataset. It achieves the following results on the evaluation set: - Loss: 0.5242 - F1 Score: 0.8750 - Accuracy: 0.875 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.3964 | 9.52 | 200 | 0.3257 | 0.8488 | 0.8506 | | 0.2775 | 19.05 | 400 | 0.2967 | 0.8563 | 0.8567 | | 0.2431 | 28.57 | 600 | 0.2805 | 0.8687 | 0.8689 | | 0.2149 | 38.1 | 800 | 0.2896 | 0.8870 | 0.8872 | | 0.1892 | 47.62 | 1000 | 0.2798 | 0.8811 | 0.8811 | | 0.1722 | 57.14 | 1200 | 0.3207 | 0.8840 | 0.8841 | | 0.1489 | 66.67 | 1400 | 0.3516 | 0.8780 | 0.8780 | | 0.1417 | 76.19 | 1600 | 0.3447 | 0.8748 | 0.875 | | 0.1275 | 85.71 | 1800 | 0.3557 | 0.8872 | 0.8872 | | 0.1144 | 95.24 | 2000 | 0.3438 | 0.8841 | 0.8841 | | 0.1022 | 104.76 | 2200 | 0.3620 | 0.8901 | 0.8902 | | 0.0968 | 114.29 | 2400 | 0.3779 | 0.8963 | 0.8963 | | 0.091 | 123.81 | 2600 | 0.3865 | 0.8871 | 0.8872 | | 0.0798 | 133.33 | 2800 | 0.3939 | 0.8750 | 0.875 | | 0.0737 | 142.86 | 3000 | 0.4687 | 0.8777 | 0.8780 | | 0.0698 | 152.38 | 3200 | 0.4192 | 0.8963 | 0.8963 | | 0.0687 | 161.9 | 3400 | 0.4379 | 0.8901 | 0.8902 | | 0.0614 | 171.43 | 3600 | 0.4795 | 0.8778 | 0.8780 | | 0.0619 | 180.95 | 3800 | 0.4757 | 0.8869 | 0.8872 | | 0.0537 | 190.48 | 4000 | 0.4562 | 0.8963 | 0.8963 | | 0.0545 | 200.0 | 4200 | 0.4989 | 0.8778 | 0.8780 | | 0.0507 | 209.52 | 4400 | 0.4625 | 0.8841 | 0.8841 | | 0.0491 | 219.05 | 4600 | 0.5119 | 0.8839 | 0.8841 | | 0.0501 | 228.57 | 4800 | 0.4785 | 0.8962 | 0.8963 | | 0.0445 | 238.1 | 5000 | 0.5140 | 0.8778 | 0.8780 | | 0.0429 | 247.62 | 5200 | 0.4812 | 0.8872 | 0.8872 | | 0.045 | 257.14 | 5400 | 0.5032 | 0.8902 | 0.8902 | | 0.0382 | 266.67 | 5600 | 0.5139 | 0.8901 | 0.8902 | | 0.0409 | 276.19 | 5800 | 0.5122 | 0.8932 | 0.8933 | | 0.0375 | 285.71 | 6000 | 0.5461 | 0.8777 | 0.8780 | | 0.0336 | 295.24 | 6200 | 0.5440 | 0.8869 | 0.8872 | | 0.0347 | 304.76 | 6400 | 0.5410 | 0.8901 | 0.8902 | | 0.0312 | 314.29 | 6600 | 0.5536 | 0.8901 | 0.8902 | | 0.032 | 323.81 | 6800 | 0.5701 | 0.8931 | 0.8933 | | 0.035 | 333.33 | 7000 | 0.5255 | 0.8870 | 0.8872 | | 0.0296 | 342.86 | 7200 | 0.6222 | 0.8807 | 0.8811 | | 0.0323 | 352.38 | 7400 | 0.5536 | 0.8870 | 0.8872 | | 0.0309 | 361.9 | 7600 | 0.5629 | 0.8869 | 0.8872 | | 0.0305 | 371.43 | 7800 | 0.5216 | 0.8871 | 0.8872 | | 0.0264 | 380.95 | 8000 | 0.6018 | 0.8775 | 0.8780 | | 0.0278 | 390.48 | 8200 | 0.5967 | 0.8808 | 0.8811 | | 0.0268 | 400.0 | 8400 | 0.5701 | 0.8901 | 0.8902 | | 0.0284 | 409.52 | 8600 | 0.5754 | 0.8808 | 0.8811 | | 0.0276 | 419.05 | 8800 | 0.5478 | 0.8902 | 0.8902 | | 0.027 | 428.57 | 9000 | 0.5620 | 0.8870 | 0.8872 | | 0.0232 | 438.1 | 9200 | 0.5838 | 0.8900 | 0.8902 | | 0.0298 | 447.62 | 9400 | 0.5757 | 0.8840 | 0.8841 | | 0.0271 | 457.14 | 9600 | 0.5786 | 0.8900 | 0.8902 | | 0.0242 | 466.67 | 9800 | 0.5620 | 0.8870 | 0.8872 | | 0.0259 | 476.19 | 10000 | 0.5620 | 0.8870 | 0.8872 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
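Written out without the Trainer, the optimizer line these cards share ("Adam with betas=(0.9,0.999) and epsilon=1e-08", linear schedule, 10,000 steps) corresponds to the raw-PyTorch sketch below. `model` is assumed to come from a loading step, and AdamW vs. plain Adam is an assumption: the Trainer's default is AdamW, a detail the one-line card summary does not resolve:

```python
# Sketch: the card's optimizer/scheduler pair in raw PyTorch.
import torch
from transformers import get_linear_schedule_with_warmup

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-4,
                              betas=(0.9, 0.999), eps=1e-8)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=0, num_training_steps=10_000
)  # decays the learning rate linearly to zero over training
```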
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_mouse_2-seqsight_65536_512_47M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_2-seqsight_65536_512_47M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_65536_512_47M", "region:us" ]
null
2024-05-03T16:53:24+00:00
text-classification
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
dabagyan/bert-sarcasm-model
null
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-03T16:54:08+00:00
null
transformers
# Uploaded model - **Developed by:** dpriver - **License:** apache-2.0 - **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
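Since this repo ships GGUF weights (per its tags), one way to run it locally is llama-cpp-python. The filename pattern below is a guess, so check the repo's file list for the actual GGUF file and quantization level:

```python
# Sketch: run the GGUF export with llama-cpp-python. The filename glob is
# a guess -- inspect the repo for the real file name before relying on it.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="dpriver/model",
    filename="*.gguf",  # glob over the repo's GGUF files
    n_ctx=4096,
)
out = llm("Briefly explain what a LoRA adapter is.", max_tokens=64)
print(out["choices"][0]["text"])
```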
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "gguf"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
dpriver/model
null
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-03T16:54:19+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_mouse_2-seqsight_65536_512_47M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_mouse_2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_2) dataset. It achieves the following results on the evaluation set: - Loss: 0.5883 - F1 Score: 0.8628 - Accuracy: 0.8628 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.3645 | 9.52 | 200 | 0.2904 | 0.8685 | 0.8689 | | 0.2332 | 19.05 | 400 | 0.2734 | 0.8841 | 0.8841 | | 0.1873 | 28.57 | 600 | 0.3260 | 0.8684 | 0.8689 | | 0.1497 | 38.1 | 800 | 0.3228 | 0.8902 | 0.8902 | | 0.1189 | 47.62 | 1000 | 0.3362 | 0.8902 | 0.8902 | | 0.0994 | 57.14 | 1200 | 0.4017 | 0.8841 | 0.8841 | | 0.0742 | 66.67 | 1400 | 0.4739 | 0.8810 | 0.8811 | | 0.0656 | 76.19 | 1600 | 0.4869 | 0.8718 | 0.8720 | | 0.0532 | 85.71 | 1800 | 0.4801 | 0.8841 | 0.8841 | | 0.0448 | 95.24 | 2000 | 0.4620 | 0.8902 | 0.8902 | | 0.0403 | 104.76 | 2200 | 0.4691 | 0.8963 | 0.8963 | | 0.0328 | 114.29 | 2400 | 0.5741 | 0.8841 | 0.8841 | | 0.0323 | 123.81 | 2600 | 0.5977 | 0.8717 | 0.8720 | | 0.0318 | 133.33 | 2800 | 0.5713 | 0.8653 | 0.8659 | | 0.025 | 142.86 | 3000 | 0.5882 | 0.8902 | 0.8902 | | 0.0226 | 152.38 | 3200 | 0.5815 | 0.8871 | 0.8872 | | 0.0244 | 161.9 | 3400 | 0.6150 | 0.8869 | 0.8872 | | 0.0217 | 171.43 | 3600 | 0.5968 | 0.8748 | 0.875 | | 0.0176 | 180.95 | 3800 | 0.6338 | 0.8841 | 0.8841 | | 0.0149 | 190.48 | 4000 | 0.6048 | 0.8810 | 0.8811 | | 0.0176 | 200.0 | 4200 | 0.6294 | 0.8810 | 0.8811 | | 0.0145 | 209.52 | 4400 | 0.6139 | 0.8811 | 0.8811 | | 0.0126 | 219.05 | 4600 | 0.6751 | 0.8840 | 0.8841 | | 0.0142 | 228.57 | 4800 | 0.6638 | 0.8687 | 0.8689 | | 0.0123 | 238.1 | 5000 | 0.6573 | 0.8719 | 0.8720 | | 0.012 | 247.62 | 5200 | 0.5845 | 0.8871 | 0.8872 | | 0.0129 | 257.14 | 5400 | 0.6561 | 0.8933 | 0.8933 | | 0.0113 | 266.67 | 5600 | 0.7041 | 0.8686 | 0.8689 | | 0.0094 | 276.19 | 5800 | 0.7106 | 0.8809 | 0.8811 | | 0.0125 | 285.71 | 6000 | 0.6203 | 0.8870 | 0.8872 | | 0.0104 | 295.24 | 6200 | 0.6492 | 0.8902 | 0.8902 | | 0.0094 | 304.76 | 6400 | 0.6602 | 0.8749 | 0.875 | | 0.0075 | 314.29 | 6600 | 0.6598 | 0.8902 | 0.8902 | | 0.0089 | 323.81 | 6800 | 0.7270 | 0.8871 | 0.8872 | | 0.0085 | 333.33 | 7000 | 0.6682 | 0.8811 | 0.8811 | | 0.0058 | 342.86 | 7200 | 0.7529 | 0.8932 | 0.8933 | | 0.007 | 352.38 | 7400 | 0.7259 | 0.8871 | 0.8872 | | 0.0066 | 361.9 | 7600 | 0.7356 | 0.8841 | 0.8841 | | 0.0067 | 371.43 | 7800 | 0.7154 | 0.8810 | 0.8811 | | 0.0066 | 380.95 | 8000 | 0.7417 | 0.8902 | 0.8902 | | 0.0076 | 390.48 | 8200 | 0.7257 | 0.8841 | 0.8841 | | 0.0066 | 400.0 | 8400 | 0.7069 | 0.8810 | 0.8811 | | 0.0059 | 409.52 | 8600 | 0.7168 | 0.8872 | 0.8872 | | 0.0042 | 419.05 | 8800 | 0.7106 | 0.8810 | 0.8811 | | 0.0065 | 428.57 | 9000 | 0.7177 | 0.8841 | 0.8841 | | 0.0038 | 438.1 | 9200 | 0.7353 | 0.8841 | 0.8841 | | 0.0055 | 447.62 | 9400 | 0.7442 | 0.8871 | 0.8872 | | 0.0052 | 457.14 | 9600 | 0.7419 | 0.8841 | 0.8841 | | 0.0041 | 466.67 | 9800 | 0.7421 | 0.8841 | 0.8841 | | 0.0051 | 476.19 | 10000 | 0.7426 | 0.8841 | 0.8841 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_mouse_2-seqsight_65536_512_47M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_2-seqsight_65536_512_47M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_65536_512_47M", "region:us" ]
null
2024-05-03T16:54:53+00:00
null
transformers
# Uploaded model - **Developed by:** xsa-dev - **License:** apache-2.0 - **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "gguf"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
xsa-dev/hugs_llama3_technique_ft_8bit_Q8_0
null
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-03T16:55:21+00:00
null
null
{}
rensimmons/llama-phi3-translations-v0.1-merged-Q8_0-gguf
null
[ "gguf", "region:us" ]
null
2024-05-03T16:55:28+00:00
null
null
{"license": "cc"}
xdeng77/fc-clip_coconut
null
[ "license:cc", "region:us" ]
null
2024-05-03T16:55:39+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_splice_reconstructed-seqsight_65536_512_47M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_splice_reconstructed](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_splice_reconstructed) dataset. It achieves the following results on the evaluation set: - Loss: 0.4893 - F1 Score: 0.7920 - Accuracy: 0.7911 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.9712 | 0.7 | 200 | 0.9280 | 0.4284 | 0.5587 | | 0.9193 | 1.4 | 400 | 0.8911 | 0.5372 | 0.5851 | | 0.7622 | 2.1 | 600 | 0.6344 | 0.7222 | 0.7212 | | 0.6373 | 2.8 | 800 | 0.6134 | 0.7349 | 0.7337 | | 0.6223 | 3.5 | 1000 | 0.5896 | 0.7398 | 0.7387 | | 0.6084 | 4.2 | 1200 | 0.5706 | 0.7555 | 0.7549 | | 0.5971 | 4.9 | 1400 | 0.5575 | 0.7614 | 0.7611 | | 0.5886 | 5.59 | 1600 | 0.5583 | 0.7602 | 0.7593 | | 0.5806 | 6.29 | 1800 | 0.5558 | 0.7605 | 0.7593 | | 0.5732 | 6.99 | 2000 | 0.5622 | 0.7574 | 0.7562 | | 0.5696 | 7.69 | 2200 | 0.5306 | 0.7745 | 0.7742 | | 0.5613 | 8.39 | 2400 | 0.5343 | 0.7704 | 0.7696 | | 0.5534 | 9.09 | 2600 | 0.5402 | 0.7668 | 0.7659 | | 0.5546 | 9.79 | 2800 | 0.5284 | 0.7769 | 0.7762 | | 0.5559 | 10.49 | 3000 | 0.5319 | 0.7733 | 0.7722 | | 0.5446 | 11.19 | 3200 | 0.5369 | 0.7696 | 0.7685 | | 0.5478 | 11.89 | 3400 | 0.5145 | 0.7813 | 0.7806 | | 0.5382 | 12.59 | 3600 | 0.5206 | 0.7788 | 0.7779 | | 0.5409 | 13.29 | 3800 | 0.5212 | 0.7783 | 0.7773 | | 0.5424 | 13.99 | 4000 | 0.5214 | 0.7782 | 0.7771 | | 0.5325 | 14.69 | 4200 | 0.5168 | 0.7775 | 0.7764 | | 0.5322 | 15.38 | 4400 | 0.5240 | 0.7739 | 0.7727 | | 0.5249 | 16.08 | 4600 | 0.5278 | 0.7767 | 0.7755 | | 0.5281 | 16.78 | 4800 | 0.5086 | 0.7844 | 0.7834 | | 0.5245 | 17.48 | 5000 | 0.5128 | 0.7830 | 0.7819 | | 0.5203 | 18.18 | 5200 | 0.4971 | 0.7939 | 0.7933 | | 0.5215 | 18.88 | 5400 | 0.5132 | 0.7826 | 0.7815 | | 0.5247 | 19.58 | 5600 | 0.4929 | 0.7940 | 0.7933 | | 0.5168 | 20.28 | 5800 | 0.4974 | 0.7900 | 0.7891 | | 0.5152 | 20.98 | 6000 | 0.4947 | 0.7921 | 0.7913 | | 0.5202 | 21.68 | 6200 | 0.5053 | 0.7878 | 0.7867 | | 0.5135 | 22.38 | 6400 | 0.5030 | 0.7863 | 0.7852 | | 0.5079 | 23.08 | 6600 | 0.4934 | 0.7917 | 0.7909 | | 0.5158 | 23.78 | 6800 | 0.4967 | 0.7886 | 0.7876 | | 0.5136 | 24.48 | 7000 | 0.5089 | 0.7835 | 0.7823 | | 0.5109 | 25.17 | 7200 | 0.4898 | 0.7951 | 0.7942 | | 0.511 | 25.87 | 7400 | 0.4974 | 0.7898 | 0.7887 | | 0.5069 | 26.57 | 7600 | 0.4992 | 0.7900 | 0.7889 | | 0.5063 | 27.27 | 7800 | 0.4970 | 0.7915 | 0.7904 | | 0.5054 | 27.97 | 8000 | 0.4921 | 0.7926 | 0.7915 | | 0.5104 | 28.67 | 8200 | 0.5018 | 0.7887 | 0.7876 | | 0.5086 | 29.37 | 8400 | 0.4981 | 0.7902 | 0.7891 | | 0.5038 | 30.07 | 8600 | 0.4852 | 0.7968 | 0.7959 | | 0.503 | 30.77 | 8800 | 0.4941 | 0.7920 | 0.7909 | | 0.5017 | 31.47 | 9000 | 0.4915 | 0.7928 | 0.7918 | | 0.5068 | 32.17 | 9200 | 0.4919 | 0.7930 | 0.7920 | | 0.5041 | 32.87 | 9400 | 0.4911 | 0.7937 | 0.7926 | | 0.5064 | 33.57 | 9600 | 0.4943 | 0.7909 | 0.7898 | | 0.5021 | 34.27 | 9800 | 0.4911 | 0.7930 | 0.7920 | | 0.5043 | 34.97 | 10000 | 0.4919 | 0.7926 | 0.7915 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_splice_reconstructed-seqsight_65536_512_47M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_splice_reconstructed-seqsight_65536_512_47M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_65536_512_47M", "region:us" ]
null
2024-05-03T16:56:36+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
golf2248/kkngu4g
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-03T16:56:46+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_splice_reconstructed-seqsight_65536_512_47M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_splice_reconstructed](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_splice_reconstructed) dataset. It achieves the following results on the evaluation set: - Loss: 0.3998 - F1 Score: 0.8368 - Accuracy: 0.8360 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.9576 | 0.7 | 200 | 0.8977 | 0.5076 | 0.5756 | | 0.7323 | 1.4 | 400 | 0.6126 | 0.7343 | 0.7332 | | 0.6088 | 2.1 | 600 | 0.5636 | 0.7585 | 0.7576 | | 0.577 | 2.8 | 800 | 0.5702 | 0.7504 | 0.7495 | | 0.5636 | 3.5 | 1000 | 0.5334 | 0.7703 | 0.7692 | | 0.5511 | 4.2 | 1200 | 0.5112 | 0.7841 | 0.7834 | | 0.5352 | 4.9 | 1400 | 0.5002 | 0.7869 | 0.7863 | | 0.5245 | 5.59 | 1600 | 0.5010 | 0.7891 | 0.7883 | | 0.5129 | 6.29 | 1800 | 0.4943 | 0.7872 | 0.7861 | | 0.4985 | 6.99 | 2000 | 0.4938 | 0.7917 | 0.7907 | | 0.4944 | 7.69 | 2200 | 0.4628 | 0.8103 | 0.8097 | | 0.4825 | 8.39 | 2400 | 0.4772 | 0.8004 | 0.7994 | | 0.4738 | 9.09 | 2600 | 0.4807 | 0.7942 | 0.7929 | | 0.4711 | 9.79 | 2800 | 0.4627 | 0.8072 | 0.8062 | | 0.4678 | 10.49 | 3000 | 0.4574 | 0.8103 | 0.8093 | | 0.4561 | 11.19 | 3200 | 0.4477 | 0.8120 | 0.8110 | | 0.4569 | 11.89 | 3400 | 0.4407 | 0.8139 | 0.8130 | | 0.4483 | 12.59 | 3600 | 0.4412 | 0.8176 | 0.8167 | | 0.4489 | 13.29 | 3800 | 0.4381 | 0.8164 | 0.8154 | | 0.443 | 13.99 | 4000 | 0.4467 | 0.8167 | 0.8157 | | 0.4326 | 14.69 | 4200 | 0.4359 | 0.8202 | 0.8192 | | 0.4351 | 15.38 | 4400 | 0.4307 | 0.8229 | 0.8220 | | 0.4233 | 16.08 | 4600 | 0.4539 | 0.8141 | 0.8132 | | 0.4261 | 16.78 | 4800 | 0.4231 | 0.8294 | 0.8284 | | 0.4168 | 17.48 | 5000 | 0.4412 | 0.8210 | 0.8198 | | 0.413 | 18.18 | 5200 | 0.4127 | 0.8360 | 0.8352 | | 0.413 | 18.88 | 5400 | 0.4177 | 0.8339 | 0.8330 | | 0.413 | 19.58 | 5600 | 0.4017 | 0.8389 | 0.8382 | | 0.4072 | 20.28 | 5800 | 0.4075 | 0.8395 | 0.8387 | | 0.4077 | 20.98 | 6000 | 0.4081 | 0.8363 | 0.8354 | | 0.4022 | 21.68 | 6200 | 0.4271 | 0.8262 | 0.8253 | | 0.4001 | 22.38 | 6400 | 0.4101 | 0.8347 | 0.8338 | | 0.3911 | 23.08 | 6600 | 0.4135 | 0.8324 | 0.8314 | | 0.3968 | 23.78 | 6800 | 0.4060 | 0.8356 | 0.8347 | | 0.3946 | 24.48 | 7000 | 0.4186 | 0.8299 | 0.8290 | | 0.396 | 25.17 | 7200 | 0.3987 | 0.8391 | 0.8382 | | 0.3933 | 25.87 | 7400 | 0.4123 | 0.8334 | 0.8325 | | 0.3892 | 26.57 | 7600 | 0.4239 | 0.8273 | 0.8264 | | 0.3843 | 27.27 | 7800 | 0.4167 | 0.8312 | 0.8303 | | 0.3886 | 27.97 | 8000 | 0.4022 | 0.8393 | 0.8384 | | 0.3912 | 28.67 | 8200 | 0.4114 | 0.8330 | 0.8321 | | 0.385 | 29.37 | 8400 | 0.4063 | 0.8350 | 0.8341 | | 0.3845 | 30.07 | 8600 | 0.3950 | 0.8397 | 0.8389 | | 0.3839 | 30.77 | 8800 | 0.4045 | 0.8363 | 
0.8354 | | 0.38 | 31.47 | 9000 | 0.3989 | 0.8393 | 0.8384 | | 0.3848 | 32.17 | 9200 | 0.4036 | 0.8367 | 0.8358 | | 0.3758 | 32.87 | 9400 | 0.4023 | 0.8371 | 0.8363 | | 0.3849 | 33.57 | 9600 | 0.4066 | 0.8358 | 0.8349 | | 0.3781 | 34.27 | 9800 | 0.4034 | 0.8358 | 0.8349 | | 0.3777 | 34.97 | 10000 | 0.4038 | 0.8363 | 0.8354 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
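The card stops at the training log; a minimal inference sketch for this adapter might look like the following. This is an assumption-laden sketch, not code from the card: it presumes the base model loads via `AutoModelForSequenceClassification`/`AutoTokenizer` and that the splice task is 3-way.

```python
# Hedged sketch: attach the LoRA adapter to the base model for inference.
# Assumptions (not confirmed by the card): the base model works with
# AutoModelForSequenceClassification, and the splice task has three classes.
import torch
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_65536_512_47M"
adapter_id = "mahdibaghbanzadeh/GUE_splice_reconstructed-seqsight_65536_512_47M-L8_f"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=3)
model = PeftModel.from_pretrained(base, adapter_id)  # load the fine-tuned LoRA weights
model.eval()

inputs = tokenizer("ACGTACGTACGT", return_tensors="pt")  # illustrative DNA input
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```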
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_splice_reconstructed-seqsight_65536_512_47M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_splice_reconstructed-seqsight_65536_512_47M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_65536_512_47M", "region:us" ]
null
2024-05-03T16:56:55+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_splice_reconstructed-seqsight_65536_512_47M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_splice_reconstructed](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_splice_reconstructed) dataset. It achieves the following results on the evaluation set: - Loss: 0.3508 - F1 Score: 0.8581 - Accuracy: 0.8577 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.9474 | 0.7 | 200 | 0.8440 | 0.5497 | 0.6094 | | 0.6437 | 1.4 | 400 | 0.5782 | 0.7461 | 0.7453 | | 0.5627 | 2.1 | 600 | 0.5120 | 0.7830 | 0.7821 | | 0.5223 | 2.8 | 800 | 0.5177 | 0.7806 | 0.7797 | | 0.4979 | 3.5 | 1000 | 0.4653 | 0.8050 | 0.8045 | | 0.4893 | 4.2 | 1200 | 0.4635 | 0.8064 | 0.8053 | | 0.469 | 4.9 | 1400 | 0.4467 | 0.8154 | 0.8146 | | 0.4555 | 5.59 | 1600 | 0.4466 | 0.8149 | 0.8143 | | 0.4423 | 6.29 | 1800 | 0.4464 | 0.8155 | 0.8146 | | 0.4283 | 6.99 | 2000 | 0.4328 | 0.8254 | 0.8244 | | 0.4205 | 7.69 | 2200 | 0.4151 | 0.8286 | 0.8279 | | 0.4142 | 8.39 | 2400 | 0.4213 | 0.8278 | 0.8270 | | 0.402 | 9.09 | 2600 | 0.4412 | 0.8212 | 0.8200 | | 0.3957 | 9.79 | 2800 | 0.4212 | 0.8266 | 0.8257 | | 0.3939 | 10.49 | 3000 | 0.3985 | 0.8401 | 0.8393 | | 0.3785 | 11.19 | 3200 | 0.4036 | 0.8386 | 0.8378 | | 0.3829 | 11.89 | 3400 | 0.3994 | 0.8369 | 0.8360 | | 0.3685 | 12.59 | 3600 | 0.3840 | 0.8456 | 0.8450 | | 0.371 | 13.29 | 3800 | 0.3734 | 0.8487 | 0.8481 | | 0.366 | 13.99 | 4000 | 0.3977 | 0.8425 | 0.8417 | | 0.3548 | 14.69 | 4200 | 0.3847 | 0.8456 | 0.8448 | | 0.3572 | 15.38 | 4400 | 0.3818 | 0.8466 | 0.8459 | | 0.3445 | 16.08 | 4600 | 0.3968 | 0.8436 | 0.8428 | | 0.3461 | 16.78 | 4800 | 0.3712 | 0.8518 | 0.8512 | | 0.3374 | 17.48 | 5000 | 0.3832 | 0.8489 | 0.8481 | | 0.3325 | 18.18 | 5200 | 0.3729 | 0.8516 | 0.8509 | | 0.3346 | 18.88 | 5400 | 0.3818 | 0.8462 | 0.8455 | | 0.3334 | 19.58 | 5600 | 0.3550 | 0.8610 | 0.8606 | | 0.3331 | 20.28 | 5800 | 0.3638 | 0.8579 | 0.8573 | | 0.3291 | 20.98 | 6000 | 0.3564 | 0.8581 | 0.8575 | | 0.3208 | 21.68 | 6200 | 0.3759 | 0.8521 | 0.8514 | | 0.3228 | 22.38 | 6400 | 0.3707 | 0.8545 | 0.8538 | | 0.315 | 23.08 | 6600 | 0.3818 | 0.8541 | 0.8534 | | 0.3188 | 23.78 | 6800 | 0.3773 | 0.8536 | 0.8529 | | 0.3145 | 24.48 | 7000 | 0.3810 | 0.8534 | 0.8527 | | 0.3161 | 25.17 | 7200 | 0.3666 | 0.8545 | 0.8538 | | 0.3117 | 25.87 | 7400 | 0.3760 | 0.8556 | 0.8549 | | 0.3084 | 26.57 | 7600 | 0.3858 | 0.8480 | 0.8472 | | 0.3054 | 27.27 | 7800 | 0.3875 | 0.8497 | 0.8490 | | 0.308 | 27.97 | 8000 | 0.3650 | 0.8593 | 0.8586 | | 0.3059 | 28.67 | 8200 | 0.3730 | 0.8547 | 0.8540 | | 0.3053 | 29.37 | 8400 | 0.3672 | 0.8552 | 0.8544 | | 0.2975 | 30.07 | 8600 | 0.3638 | 0.8599 | 0.8593 | | 0.2965 | 30.77 | 8800 | 0.3696 | 0.8558 | 
0.8551 | | 0.297 | 31.47 | 9000 | 0.3701 | 0.8545 | 0.8538 | | 0.3035 | 32.17 | 9200 | 0.3651 | 0.8580 | 0.8573 | | 0.2948 | 32.87 | 9400 | 0.3681 | 0.8553 | 0.8547 | | 0.2976 | 33.57 | 9600 | 0.3725 | 0.8560 | 0.8553 | | 0.2947 | 34.27 | 9800 | 0.3682 | 0.8569 | 0.8562 | | 0.2938 | 34.97 | 10000 | 0.3698 | 0.8564 | 0.8558 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
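The hyperparameters listed above map one-to-one onto 🤗 `TrainingArguments`; a hedged reconstruction (only the values come from the card, everything unspecified is left at Trainer defaults) is:

```python
# Hedged reconstruction of the card's hyperparameters as TrainingArguments;
# values are taken from the card, all other settings are Trainer defaults.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="GUE_splice_reconstructed-seqsight_65536_512_47M-L32_f",
    learning_rate=5e-4,               # learning_rate: 0.0005
    per_device_train_batch_size=128,  # train_batch_size: 128
    per_device_eval_batch_size=128,   # eval_batch_size: 128
    seed=42,
    max_steps=10_000,                 # training_steps: 10000
    lr_scheduler_type="linear",
    adam_beta1=0.9,                   # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,                # epsilon=1e-08
)
```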
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_splice_reconstructed-seqsight_65536_512_47M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_splice_reconstructed-seqsight_65536_512_47M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_65536_512_47M", "region:us" ]
null
2024-05-03T16:57:12+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_tf_0-seqsight_65536_512_47M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_tf_0](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_0) dataset. It achieves the following results on the evaluation set: - Loss: 0.3808 - F1 Score: 0.8262 - Accuracy: 0.827 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.5459 | 0.79 | 200 | 0.4805 | 0.7586 | 0.759 | | 0.4886 | 1.58 | 400 | 0.4690 | 0.7665 | 0.767 | | 0.4866 | 2.37 | 600 | 0.4709 | 0.7625 | 0.764 | | 0.4748 | 3.16 | 800 | 0.4670 | 0.7649 | 0.765 | | 0.4742 | 3.95 | 1000 | 0.4647 | 0.7630 | 0.763 | | 0.4731 | 4.74 | 1200 | 0.4678 | 0.7671 | 0.767 | | 0.4702 | 5.53 | 1400 | 0.4616 | 0.7659 | 0.766 | | 0.4647 | 6.32 | 1600 | 0.4632 | 0.7631 | 0.763 | | 0.468 | 7.11 | 1800 | 0.4685 | 0.76 | 0.76 | | 0.4668 | 7.91 | 2000 | 0.4611 | 0.7641 | 0.764 | | 0.4625 | 8.7 | 2200 | 0.4625 | 0.7641 | 0.764 | | 0.4594 | 9.49 | 2400 | 0.4583 | 0.7699 | 0.77 | | 0.4608 | 10.28 | 2600 | 0.4690 | 0.7676 | 0.768 | | 0.4585 | 11.07 | 2800 | 0.4645 | 0.7637 | 0.764 | | 0.4596 | 11.86 | 3000 | 0.4615 | 0.7680 | 0.768 | | 0.4593 | 12.65 | 3200 | 0.4650 | 0.7727 | 0.773 | | 0.4553 | 13.44 | 3400 | 0.4540 | 0.7750 | 0.775 | | 0.4536 | 14.23 | 3600 | 0.4534 | 0.7780 | 0.778 | | 0.4539 | 15.02 | 3800 | 0.4592 | 0.7710 | 0.771 | | 0.4573 | 15.81 | 4000 | 0.4610 | 0.7718 | 0.772 | | 0.4499 | 16.6 | 4200 | 0.4534 | 0.7800 | 0.78 | | 0.4556 | 17.39 | 4400 | 0.4605 | 0.7698 | 0.77 | | 0.4543 | 18.18 | 4600 | 0.4560 | 0.7720 | 0.772 | | 0.4509 | 18.97 | 4800 | 0.4665 | 0.7694 | 0.77 | | 0.4584 | 19.76 | 5000 | 0.4503 | 0.7761 | 0.776 | | 0.4505 | 20.55 | 5200 | 0.4473 | 0.7800 | 0.78 | | 0.4502 | 21.34 | 5400 | 0.4520 | 0.774 | 0.774 | | 0.4469 | 22.13 | 5600 | 0.4550 | 0.7730 | 0.773 | | 0.454 | 22.92 | 5800 | 0.4509 | 0.7771 | 0.777 | | 0.4473 | 23.72 | 6000 | 0.4558 | 0.7749 | 0.775 | | 0.4475 | 24.51 | 6200 | 0.4493 | 0.7771 | 0.777 | | 0.4537 | 25.3 | 6400 | 0.4526 | 0.7740 | 0.774 | | 0.4457 | 26.09 | 6600 | 0.4504 | 0.7750 | 0.775 | | 0.446 | 26.88 | 6800 | 0.4531 | 0.7770 | 0.777 | | 0.4496 | 27.67 | 7000 | 0.4484 | 0.7821 | 0.782 | | 0.4482 | 28.46 | 7200 | 0.4471 | 0.7790 | 0.779 | | 0.4476 | 29.25 | 7400 | 0.4488 | 0.7810 | 0.781 | | 0.4499 | 30.04 | 7600 | 0.4475 | 0.7791 | 0.779 | | 0.4467 | 30.83 | 7800 | 0.4508 | 0.7810 | 0.781 | | 0.4477 | 31.62 | 8000 | 0.4461 | 0.7830 | 0.783 | | 0.4468 | 32.41 | 8200 | 0.4516 | 0.7810 | 0.781 | | 0.4442 | 33.2 | 8400 | 0.4512 | 0.7770 | 0.777 | | 0.4501 | 33.99 | 8600 | 0.4484 | 0.7801 | 0.78 | | 0.4484 | 34.78 | 8800 | 0.4477 | 0.7811 | 0.781 | | 0.443 | 35.57 | 9000 | 0.4501 | 0.7780 | 0.778 | | 0.4458 | 36.36 | 9200 | 0.4526 | 
0.7800 | 0.78 | | 0.4468 | 37.15 | 9400 | 0.4522 | 0.7800 | 0.78 | | 0.4451 | 37.94 | 9600 | 0.4497 | 0.7810 | 0.781 | | 0.4482 | 38.74 | 9800 | 0.4506 | 0.7790 | 0.779 | | 0.4461 | 39.53 | 10000 | 0.4504 | 0.7800 | 0.78 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_tf_0-seqsight_65536_512_47M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_tf_0-seqsight_65536_512_47M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_65536_512_47M", "region:us" ]
null
2024-05-03T16:57:13+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_tf_0-seqsight_65536_512_47M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_tf_0](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_0) dataset. It achieves the following results on the evaluation set: - Loss: 0.3741 - F1 Score: 0.8225 - Accuracy: 0.823 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.5198 | 0.79 | 200 | 0.4707 | 0.7584 | 0.759 | | 0.4744 | 1.58 | 400 | 0.4591 | 0.7701 | 0.771 | | 0.4702 | 2.37 | 600 | 0.4603 | 0.7721 | 0.773 | | 0.4583 | 3.16 | 800 | 0.4562 | 0.7767 | 0.777 | | 0.4564 | 3.95 | 1000 | 0.4510 | 0.7801 | 0.781 | | 0.4522 | 4.74 | 1200 | 0.4505 | 0.7768 | 0.777 | | 0.4473 | 5.53 | 1400 | 0.4625 | 0.7706 | 0.771 | | 0.44 | 6.32 | 1600 | 0.4497 | 0.7861 | 0.786 | | 0.4436 | 7.11 | 1800 | 0.4658 | 0.7684 | 0.769 | | 0.4381 | 7.91 | 2000 | 0.4499 | 0.7840 | 0.784 | | 0.4343 | 8.7 | 2200 | 0.4545 | 0.7701 | 0.77 | | 0.4297 | 9.49 | 2400 | 0.4471 | 0.7780 | 0.778 | | 0.431 | 10.28 | 2600 | 0.4503 | 0.7869 | 0.787 | | 0.4261 | 11.07 | 2800 | 0.4582 | 0.7845 | 0.785 | | 0.4254 | 11.86 | 3000 | 0.4536 | 0.7850 | 0.785 | | 0.4246 | 12.65 | 3200 | 0.4456 | 0.7910 | 0.791 | | 0.4169 | 13.44 | 3400 | 0.4557 | 0.7771 | 0.777 | | 0.4165 | 14.23 | 3600 | 0.4404 | 0.7899 | 0.79 | | 0.4176 | 15.02 | 3800 | 0.4494 | 0.7830 | 0.783 | | 0.4182 | 15.81 | 4000 | 0.4410 | 0.7890 | 0.789 | | 0.4096 | 16.6 | 4200 | 0.4486 | 0.7860 | 0.786 | | 0.416 | 17.39 | 4400 | 0.4556 | 0.7859 | 0.786 | | 0.4119 | 18.18 | 4600 | 0.4499 | 0.7941 | 0.794 | | 0.4097 | 18.97 | 4800 | 0.4572 | 0.7839 | 0.784 | | 0.4137 | 19.76 | 5000 | 0.4425 | 0.7930 | 0.793 | | 0.4066 | 20.55 | 5200 | 0.4477 | 0.7930 | 0.793 | | 0.4037 | 21.34 | 5400 | 0.4497 | 0.7860 | 0.786 | | 0.4024 | 22.13 | 5600 | 0.4530 | 0.7881 | 0.788 | | 0.4045 | 22.92 | 5800 | 0.4496 | 0.7910 | 0.791 | | 0.3986 | 23.72 | 6000 | 0.4513 | 0.7881 | 0.788 | | 0.3987 | 24.51 | 6200 | 0.4456 | 0.7900 | 0.79 | | 0.4005 | 25.3 | 6400 | 0.4478 | 0.7940 | 0.794 | | 0.396 | 26.09 | 6600 | 0.4442 | 0.7910 | 0.791 | | 0.3945 | 26.88 | 6800 | 0.4540 | 0.7830 | 0.783 | | 0.3941 | 27.67 | 7000 | 0.4504 | 0.7931 | 0.793 | | 0.3945 | 28.46 | 7200 | 0.4486 | 0.7940 | 0.794 | | 0.3943 | 29.25 | 7400 | 0.4523 | 0.7881 | 0.788 | | 0.3957 | 30.04 | 7600 | 0.4509 | 0.7901 | 0.79 | | 0.3875 | 30.83 | 7800 | 0.4540 | 0.7890 | 0.789 | | 0.3906 | 31.62 | 8000 | 0.4476 | 0.7930 | 0.793 | | 0.3906 | 32.41 | 8200 | 0.4518 | 0.7901 | 0.79 | | 0.3881 | 33.2 | 8400 | 0.4533 | 0.7881 | 0.788 | | 0.3925 | 33.99 | 8600 | 0.4573 | 0.7830 | 0.783 | | 0.3905 | 34.78 | 8800 | 0.4494 | 0.7900 | 0.79 | | 0.3836 | 35.57 | 9000 | 0.4532 | 0.7921 | 0.792 | | 0.3867 | 36.36 | 9200 | 0.4591 
| 0.79 | 0.79 | | 0.3886 | 37.15 | 9400 | 0.4594 | 0.79 | 0.79 | | 0.387 | 37.94 | 9600 | 0.4553 | 0.7891 | 0.789 | | 0.3879 | 38.74 | 9800 | 0.4559 | 0.7901 | 0.79 | | 0.3833 | 39.53 | 10000 | 0.4562 | 0.7891 | 0.789 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
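These tables report an F1 Score and Accuracy at every eval step, which corresponds to a `compute_metrics` hook on the Trainer. A hedged sketch (the averaging mode is not stated in the card; macro is an assumption):

```python
# Hedged sketch of a compute_metrics hook producing the "F1 Score" and
# "Accuracy" columns; macro averaging is an assumption, not stated in the card.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "f1": f1_score(labels, preds, average="macro"),
        "accuracy": accuracy_score(labels, preds),
    }
```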
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_tf_0-seqsight_65536_512_47M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_tf_0-seqsight_65536_512_47M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_65536_512_47M", "region:us" ]
null
2024-05-03T16:58:02+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_tf_0-seqsight_65536_512_47M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_tf_0](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_0) dataset. It achieves the following results on the evaluation set: - Loss: 0.3793 - F1 Score: 0.8292 - Accuracy: 0.83 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.5291 | 0.79 | 200 | 0.4742 | 0.7544 | 0.755 | | 0.4793 | 1.58 | 400 | 0.4651 | 0.7628 | 0.764 | | 0.4762 | 2.37 | 600 | 0.4678 | 0.7638 | 0.766 | | 0.4656 | 3.16 | 800 | 0.4603 | 0.7729 | 0.773 | | 0.4649 | 3.95 | 1000 | 0.4592 | 0.7679 | 0.769 | | 0.4622 | 4.74 | 1200 | 0.4592 | 0.7639 | 0.764 | | 0.4576 | 5.53 | 1400 | 0.4620 | 0.7689 | 0.769 | | 0.4532 | 6.32 | 1600 | 0.4548 | 0.7721 | 0.772 | | 0.4555 | 7.11 | 1800 | 0.4651 | 0.7709 | 0.771 | | 0.4516 | 7.91 | 2000 | 0.4563 | 0.7730 | 0.773 | | 0.4486 | 8.7 | 2200 | 0.4538 | 0.7730 | 0.773 | | 0.4447 | 9.49 | 2400 | 0.4488 | 0.7830 | 0.783 | | 0.446 | 10.28 | 2600 | 0.4535 | 0.7709 | 0.771 | | 0.4422 | 11.07 | 2800 | 0.4584 | 0.7686 | 0.769 | | 0.4423 | 11.86 | 3000 | 0.4536 | 0.7790 | 0.779 | | 0.4418 | 12.65 | 3200 | 0.4499 | 0.7790 | 0.779 | | 0.4367 | 13.44 | 3400 | 0.4469 | 0.7871 | 0.787 | | 0.4352 | 14.23 | 3600 | 0.4440 | 0.7920 | 0.792 | | 0.4367 | 15.02 | 3800 | 0.4526 | 0.7750 | 0.775 | | 0.4381 | 15.81 | 4000 | 0.4469 | 0.7791 | 0.779 | | 0.4294 | 16.6 | 4200 | 0.4468 | 0.7890 | 0.789 | | 0.4365 | 17.39 | 4400 | 0.4595 | 0.7657 | 0.766 | | 0.4333 | 18.18 | 4600 | 0.4469 | 0.7831 | 0.783 | | 0.4307 | 18.97 | 4800 | 0.4567 | 0.7698 | 0.77 | | 0.4373 | 19.76 | 5000 | 0.4450 | 0.7821 | 0.782 | | 0.4297 | 20.55 | 5200 | 0.4435 | 0.7890 | 0.789 | | 0.4276 | 21.34 | 5400 | 0.4492 | 0.7790 | 0.779 | | 0.4279 | 22.13 | 5600 | 0.4503 | 0.7801 | 0.78 | | 0.4314 | 22.92 | 5800 | 0.4471 | 0.7821 | 0.782 | | 0.4252 | 23.72 | 6000 | 0.4479 | 0.7771 | 0.777 | | 0.4253 | 24.51 | 6200 | 0.4454 | 0.7820 | 0.782 | | 0.4308 | 25.3 | 6400 | 0.4440 | 0.7880 | 0.788 | | 0.4225 | 26.09 | 6600 | 0.4435 | 0.7860 | 0.786 | | 0.4237 | 26.88 | 6800 | 0.4477 | 0.7791 | 0.779 | | 0.4234 | 27.67 | 7000 | 0.4441 | 0.7890 | 0.789 | | 0.4261 | 28.46 | 7200 | 0.4437 | 0.7859 | 0.786 | | 0.4229 | 29.25 | 7400 | 0.4470 | 0.7861 | 0.786 | | 0.4265 | 30.04 | 7600 | 0.4451 | 0.7850 | 0.785 | | 0.4204 | 30.83 | 7800 | 0.4475 | 0.7850 | 0.785 | | 0.4231 | 31.62 | 8000 | 0.4417 | 0.7849 | 0.785 | | 0.4232 | 32.41 | 8200 | 0.4475 | 0.7811 | 0.781 | | 0.4202 | 33.2 | 8400 | 0.4474 | 0.7821 | 0.782 | | 0.4255 | 33.99 | 8600 | 0.4461 | 0.7830 | 0.783 | | 0.4223 | 34.78 | 8800 | 0.4442 | 0.786 | 0.786 | | 0.4169 | 35.57 | 9000 | 0.4456 | 0.7860 | 0.786 | | 0.4204 | 36.36 | 9200 | 
0.4498 | 0.7841 | 0.784 | | 0.4221 | 37.15 | 9400 | 0.4491 | 0.7791 | 0.779 | | 0.4203 | 37.94 | 9600 | 0.4464 | 0.7800 | 0.78 | | 0.4223 | 38.74 | 9800 | 0.4472 | 0.7821 | 0.782 | | 0.4188 | 39.53 | 10000 | 0.4469 | 0.7811 | 0.781 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_tf_0-seqsight_65536_512_47M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_tf_0-seqsight_65536_512_47M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_65536_512_47M", "region:us" ]
null
2024-05-03T16:58:04+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_tf_1-seqsight_65536_512_47M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_tf_1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_1) dataset. It achieves the following results on the evaluation set: - Loss: 0.3515 - F1 Score: 0.8466 - Accuracy: 0.847 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.5616 | 0.83 | 200 | 0.5355 | 0.7350 | 0.735 | | 0.5085 | 1.67 | 400 | 0.5291 | 0.7389 | 0.739 | | 0.4987 | 2.5 | 600 | 0.5290 | 0.7349 | 0.735 | | 0.4953 | 3.33 | 800 | 0.5366 | 0.7267 | 0.728 | | 0.4952 | 4.17 | 1000 | 0.5239 | 0.7350 | 0.735 | | 0.4878 | 5.0 | 1200 | 0.5274 | 0.7400 | 0.74 | | 0.487 | 5.83 | 1400 | 0.5216 | 0.7380 | 0.738 | | 0.4877 | 6.67 | 1600 | 0.5217 | 0.7419 | 0.742 | | 0.4852 | 7.5 | 1800 | 0.5164 | 0.7440 | 0.744 | | 0.481 | 8.33 | 2000 | 0.5176 | 0.7400 | 0.74 | | 0.4818 | 9.17 | 2200 | 0.5164 | 0.7498 | 0.75 | | 0.4828 | 10.0 | 2400 | 0.5210 | 0.7397 | 0.74 | | 0.4823 | 10.83 | 2600 | 0.5184 | 0.7395 | 0.74 | | 0.482 | 11.67 | 2800 | 0.5175 | 0.7374 | 0.738 | | 0.4744 | 12.5 | 3000 | 0.5166 | 0.7437 | 0.744 | | 0.4852 | 13.33 | 3200 | 0.5082 | 0.7470 | 0.747 | | 0.4753 | 14.17 | 3400 | 0.5097 | 0.7510 | 0.751 | | 0.4744 | 15.0 | 3600 | 0.5173 | 0.7415 | 0.743 | | 0.4751 | 15.83 | 3800 | 0.5110 | 0.7480 | 0.748 | | 0.4766 | 16.67 | 4000 | 0.5141 | 0.7496 | 0.75 | | 0.4743 | 17.5 | 4200 | 0.5116 | 0.7463 | 0.747 | | 0.4682 | 18.33 | 4400 | 0.5178 | 0.7499 | 0.75 | | 0.4784 | 19.17 | 4600 | 0.5126 | 0.7495 | 0.75 | | 0.4742 | 20.0 | 4800 | 0.5082 | 0.7529 | 0.753 | | 0.4749 | 20.83 | 5000 | 0.5113 | 0.7579 | 0.758 | | 0.47 | 21.67 | 5200 | 0.5097 | 0.7549 | 0.755 | | 0.4695 | 22.5 | 5400 | 0.5081 | 0.7550 | 0.755 | | 0.4702 | 23.33 | 5600 | 0.5095 | 0.7578 | 0.758 | | 0.4707 | 24.17 | 5800 | 0.5140 | 0.7536 | 0.754 | | 0.4723 | 25.0 | 6000 | 0.5063 | 0.7600 | 0.76 | | 0.4702 | 25.83 | 6200 | 0.5059 | 0.7578 | 0.758 | | 0.4681 | 26.67 | 6400 | 0.5072 | 0.7520 | 0.752 | | 0.471 | 27.5 | 6600 | 0.5095 | 0.7577 | 0.758 | | 0.4688 | 28.33 | 6800 | 0.5063 | 0.7557 | 0.756 | | 0.467 | 29.17 | 7000 | 0.5079 | 0.7567 | 0.757 | | 0.4679 | 30.0 | 7200 | 0.5081 | 0.7567 | 0.757 | | 0.4697 | 30.83 | 7400 | 0.5116 | 0.7521 | 0.753 | | 0.4635 | 31.67 | 7600 | 0.5058 | 0.7619 | 0.762 | | 0.4712 | 32.5 | 7800 | 0.5062 | 0.7608 | 0.761 | | 0.4645 | 33.33 | 8000 | 0.5061 | 0.7587 | 0.759 | | 0.4693 | 34.17 | 8200 | 0.5061 | 0.7567 | 0.757 | | 0.4665 | 35.0 | 8400 | 0.5048 | 0.7619 | 0.762 | | 0.47 | 35.83 | 8600 | 0.5040 | 0.7598 | 0.76 | | 0.4704 | 36.67 | 8800 | 0.5040 | 0.7607 | 0.761 | | 0.4649 | 37.5 | 9000 | 0.5090 | 0.7541 | 0.755 | | 0.4646 | 38.33 | 9200 | 0.5054 | 0.7638 | 0.764 | 
| 0.4669 | 39.17 | 9400 | 0.5052 | 0.7628 | 0.763 | | 0.4654 | 40.0 | 9600 | 0.5058 | 0.7628 | 0.763 | | 0.4677 | 40.83 | 9800 | 0.5044 | 0.7638 | 0.764 | | 0.4652 | 41.67 | 10000 | 0.5048 | 0.7638 | 0.764 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_tf_1-seqsight_65536_512_47M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_tf_1-seqsight_65536_512_47M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_65536_512_47M", "region:us" ]
null
2024-05-03T16:58:21+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_tf_1-seqsight_65536_512_47M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_tf_1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_1) dataset. It achieves the following results on the evaluation set: - Loss: 0.3450 - F1 Score: 0.8487 - Accuracy: 0.849 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.5426 | 0.83 | 200 | 0.5290 | 0.7440 | 0.744 | | 0.4977 | 1.67 | 400 | 0.5235 | 0.7433 | 0.744 | | 0.4907 | 2.5 | 600 | 0.5192 | 0.7419 | 0.742 | | 0.4864 | 3.33 | 800 | 0.5160 | 0.7408 | 0.741 | | 0.4874 | 4.17 | 1000 | 0.5147 | 0.7417 | 0.742 | | 0.4788 | 5.0 | 1200 | 0.5142 | 0.7450 | 0.745 | | 0.4768 | 5.83 | 1400 | 0.5102 | 0.7440 | 0.744 | | 0.477 | 6.67 | 1600 | 0.5068 | 0.746 | 0.746 | | 0.4756 | 7.5 | 1800 | 0.5057 | 0.7496 | 0.75 | | 0.4692 | 8.33 | 2000 | 0.5048 | 0.7470 | 0.747 | | 0.4702 | 9.17 | 2200 | 0.4995 | 0.7520 | 0.752 | | 0.4689 | 10.0 | 2400 | 0.5099 | 0.7520 | 0.753 | | 0.469 | 10.83 | 2600 | 0.5097 | 0.7524 | 0.754 | | 0.4645 | 11.67 | 2800 | 0.5029 | 0.7531 | 0.754 | | 0.4572 | 12.5 | 3000 | 0.4997 | 0.7506 | 0.751 | | 0.4689 | 13.33 | 3200 | 0.4994 | 0.7513 | 0.752 | | 0.4581 | 14.17 | 3400 | 0.4953 | 0.7438 | 0.744 | | 0.4552 | 15.0 | 3600 | 0.5015 | 0.7580 | 0.759 | | 0.4557 | 15.83 | 3800 | 0.4990 | 0.7545 | 0.755 | | 0.4571 | 16.67 | 4000 | 0.5008 | 0.7545 | 0.755 | | 0.4532 | 17.5 | 4200 | 0.5042 | 0.7569 | 0.758 | | 0.4481 | 18.33 | 4400 | 0.5031 | 0.7568 | 0.757 | | 0.4569 | 19.17 | 4600 | 0.4986 | 0.7576 | 0.758 | | 0.4535 | 20.0 | 4800 | 0.4959 | 0.7549 | 0.755 | | 0.4517 | 20.83 | 5000 | 0.5015 | 0.7589 | 0.759 | | 0.448 | 21.67 | 5200 | 0.4988 | 0.7579 | 0.758 | | 0.4457 | 22.5 | 5400 | 0.4977 | 0.7550 | 0.755 | | 0.4477 | 23.33 | 5600 | 0.5039 | 0.7514 | 0.752 | | 0.4487 | 24.17 | 5800 | 0.5021 | 0.7595 | 0.76 | | 0.4487 | 25.0 | 6000 | 0.4963 | 0.7520 | 0.752 | | 0.4456 | 25.83 | 6200 | 0.4956 | 0.7499 | 0.75 | | 0.4443 | 26.67 | 6400 | 0.4957 | 0.7489 | 0.749 | | 0.4454 | 27.5 | 6600 | 0.4992 | 0.7599 | 0.76 | | 0.4431 | 28.33 | 6800 | 0.4964 | 0.7480 | 0.748 | | 0.4416 | 29.17 | 7000 | 0.4987 | 0.7510 | 0.751 | | 0.4424 | 30.0 | 7200 | 0.5007 | 0.7536 | 0.754 | | 0.4434 | 30.83 | 7400 | 0.4988 | 0.7569 | 0.757 | | 0.4373 | 31.67 | 7600 | 0.4978 | 0.7580 | 0.758 | | 0.4432 | 32.5 | 7800 | 0.4988 | 0.7540 | 0.754 | | 0.4391 | 33.33 | 8000 | 0.4969 | 0.7550 | 0.755 | | 0.4447 | 34.17 | 8200 | 0.4996 | 0.7589 | 0.759 | | 0.4396 | 35.0 | 8400 | 0.4987 | 0.7609 | 0.761 | | 0.4424 | 35.83 | 8600 | 0.4968 | 0.7550 | 0.755 | | 0.4443 | 36.67 | 8800 | 0.4973 | 0.7568 | 0.757 | | 0.4376 | 37.5 | 9000 | 0.5016 | 0.7495 | 0.75 | | 0.4362 | 38.33 | 9200 | 0.4981 | 0.7570 
| 0.757 | | 0.4408 | 39.17 | 9400 | 0.4968 | 0.7570 | 0.757 | | 0.4375 | 40.0 | 9600 | 0.4979 | 0.7579 | 0.758 | | 0.4402 | 40.83 | 9800 | 0.4969 | 0.7540 | 0.754 | | 0.4382 | 41.67 | 10000 | 0.4972 | 0.7590 | 0.759 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_tf_1-seqsight_65536_512_47M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_tf_1-seqsight_65536_512_47M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_65536_512_47M", "region:us" ]
null
2024-05-03T16:59:12+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # output_v3 This model is a fine-tuned version of [microsoft/deberta-v3-small](https://huggingface.co/microsoft/deberta-v3-small) on an unspecified dataset. It achieves the following results on the evaluation set: - Loss: 0.8916 - Qwk: 0.7949 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Qwk | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.9249 | 1.0 | 1731 | 1.0209 | 0.7428 | | 0.8301 | 2.0 | 3462 | 0.8321 | 0.7973 | | 0.7726 | 3.0 | 5193 | 0.9609 | 0.7834 | | 0.7125 | 4.0 | 6924 | 0.8916 | 0.7949 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
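The Qwk column is presumably Cohen's quadratic weighted kappa, common for ordinal scoring tasks; a minimal sketch of how it can be computed (the labels below are illustrative, not from the card):

```python
# Hedged sketch: quadratic weighted kappa ("Qwk"), assuming integer ordinal
# labels; the example values are illustrative, not taken from the card.
from sklearn.metrics import cohen_kappa_score

y_true = [3, 4, 2, 5, 3]
y_pred = [3, 4, 3, 5, 2]
print(cohen_kappa_score(y_true, y_pred, weights="quadratic"))
```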
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "microsoft/deberta-v3-small", "model-index": [{"name": "output_v3", "results": []}]}
lemmein/output_v3
null
[ "transformers", "safetensors", "deberta-v2", "text-classification", "generated_from_trainer", "base_model:microsoft/deberta-v3-small", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-03T16:59:28+00:00
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Tunisien This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the comondov dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:--------:| | No log | 6.6667 | 20 | 10.2887 | 145.3174 | ### Framework versions - Transformers 4.41.0.dev0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
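Given that only 20 training steps were run and the WER is above 100%, this checkpoint is effectively a smoke test; still, inference would follow the standard Whisper pipeline pattern. A hedged sketch (the audio path is illustrative):

```python
# Hedged sketch: standard Whisper ASR inference via the transformers pipeline.
# The audio filename is illustrative; with WER > 100% after 20 steps,
# transcriptions from this checkpoint should not be expected to be usable.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Arbi-Houssem/output")
result = asr("tunisian_sample.wav", generate_kwargs={"language": "arabic"})
print(result["text"])
```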
{"language": ["ar"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["Arbi-Houssem/comondov"], "base_model": "openai/whisper-medium", "model-index": [{"name": "Whisper Tunisien", "results": []}]}
Arbi-Houssem/output
null
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "ar", "dataset:Arbi-Houssem/comondov", "base_model:openai/whisper-medium", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-03T16:59:33+00:00
text-classification
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
dabagyan/bert-sarcasm-model-with-context
null
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-03T16:59:38+00:00
text-to-audio
transformers
# MusicGen - Large - 3.3B MusicGen is a text-to-music model capable of generating high-quality music samples conditioned on text descriptions or audio prompts. It is a single stage auto-regressive Transformer model trained over a 32kHz EnCodec tokenizer with 4 codebooks sampled at 50 Hz. Unlike existing methods, like MusicLM, MusicGen doesn't require a self-supervised semantic representation, and it generates all 4 codebooks in one pass. By introducing a small delay between the codebooks, we show we can predict them in parallel, thus having only 50 auto-regressive steps per second of audio. MusicGen was published in [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284) by *Jade Copet, Felix Kreuk, Itai Gat, Tal Remez, David Kant, Gabriel Synnaeve, Yossi Adi, Alexandre Défossez*. Four checkpoints are released: - [small](https://huggingface.co/facebook/musicgen-small) - [medium](https://huggingface.co/facebook/musicgen-medium) - [**large** (this checkpoint)](https://huggingface.co/facebook/musicgen-large) - [melody](https://huggingface.co/facebook/musicgen-melody) ## Example Try out MusicGen yourself! * Audiocraft Colab: <a target="_blank" href="https://colab.research.google.com/drive/1fxGqfg96RBUvGxZ1XXN07s3DthrKUl4-?usp=sharing"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> * Hugging Face Colab: <a target="_blank" href="https://colab.research.google.com/github/sanchit-gandhi/notebooks/blob/main/MusicGen.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> * Hugging Face Demo: <a target="_blank" href="https://huggingface.co/spaces/facebook/MusicGen"> <img src="https://huggingface.co/datasets/huggingface/badges/raw/main/open-in-hf-spaces-sm.svg" alt="Open in HuggingFace"/> </a> ## 🤗 Transformers Usage You can run MusicGen locally with the 🤗 Transformers library from version 4.31.0 onwards. 1. First install the 🤗 [Transformers library](https://github.com/huggingface/transformers) and scipy: ``` pip install --upgrade pip pip install --upgrade transformers scipy ``` 2. Run inference via the `Text-to-Audio` (TTA) pipeline. You can infer the MusicGen model via the TTA pipeline in just a few lines of code! ```python from transformers import pipeline import scipy synthesiser = pipeline("text-to-audio", "facebook/musicgen-large") music = synthesiser("lo-fi music with a soothing melody", forward_params={"do_sample": True}) scipy.io.wavfile.write("musicgen_out.wav", rate=music["sampling_rate"], data=music["audio"]) ``` 3. Run inference via the Transformers modelling code. You can use the processor + generate code to convert text into a mono 32 kHz audio waveform for more fine-grained control. ```python from transformers import AutoProcessor, MusicgenForConditionalGeneration processor = AutoProcessor.from_pretrained("facebook/musicgen-large") model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-large") inputs = processor( text=["80s pop track with bassy drums and synth", "90s rock song with loud guitars and heavy drums"], padding=True, return_tensors="pt", ) audio_values = model.generate(**inputs, max_new_tokens=256) ``` 4. Listen to the audio samples either in an ipynb notebook: ```python from IPython.display import Audio sampling_rate = model.config.audio_encoder.sampling_rate Audio(audio_values[0].numpy(), rate=sampling_rate) ``` Or save them as a `.wav` file using a third-party library, e.g.
`scipy`: ```python import scipy sampling_rate = model.config.audio_encoder.sampling_rate scipy.io.wavfile.write("musicgen_out.wav", rate=sampling_rate, data=audio_values[0, 0].numpy()) ``` For more details on using the MusicGen model for inference using the 🤗 Transformers library, refer to the [MusicGen docs](https://huggingface.co/docs/transformers/model_doc/musicgen). ## Audiocraft Usage You can also run MusicGen locally through the original [Audiocraft library](https://github.com/facebookresearch/audiocraft): 1. First install the [`audiocraft` library](https://github.com/facebookresearch/audiocraft) ``` pip install git+https://github.com/facebookresearch/audiocraft.git ``` 2. Make sure to have [`ffmpeg`](https://ffmpeg.org/download.html) installed: ``` apt-get install ffmpeg ``` 3. Run the following Python code: ```py from audiocraft.models import MusicGen from audiocraft.data.audio import audio_write model = MusicGen.get_pretrained("large") model.set_generation_params(duration=8) # generate 8 seconds. descriptions = ["happy rock", "energetic EDM"] wav = model.generate(descriptions) # generates 2 samples. for idx, one_wav in enumerate(wav): # Will save under {idx}.wav, with loudness normalization at -14 dB LUFS. audio_write(f'{idx}', one_wav.cpu(), model.sample_rate, strategy="loudness") ``` ## Model details **Organization developing the model:** The FAIR team of Meta AI. **Model date:** MusicGen was trained between April 2023 and May 2023. **Model version:** This is version 1 of the model. **Model type:** MusicGen consists of an EnCodec model for audio tokenization and an auto-regressive language model based on the transformer architecture for music modeling. The model comes in different sizes: 300M, 1.5B and 3.3B parameters; and two variants: a model trained for the text-to-music generation task and a model trained for melody-guided music generation. **Paper or resources for more information:** More information can be found in the paper [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284). **Citation details:** ``` @misc{copet2023simple, title={Simple and Controllable Music Generation}, author={Jade Copet and Felix Kreuk and Itai Gat and Tal Remez and David Kant and Gabriel Synnaeve and Yossi Adi and Alexandre Défossez}, year={2023}, eprint={2306.05284}, archivePrefix={arXiv}, primaryClass={cs.SD} } ``` **License:** Code is released under MIT; model weights are released under CC-BY-NC 4.0. **Where to send questions or comments about the model:** Questions and comments about MusicGen can be sent via the [Github repository](https://github.com/facebookresearch/audiocraft) of the project, or by opening an issue. ## Intended use **Primary intended use:** The primary use of MusicGen is research on AI-based music generation, including: - Research efforts, such as probing and better understanding the limitations of generative models to further improve the state of science - Generation of music guided by text or melody to understand current abilities of generative AI models by machine learning amateurs **Primary intended users:** The primary intended users of the model are researchers in audio, machine learning and artificial intelligence, as well as amateurs seeking to better understand those models. **Out-of-scope use cases:** The model should not be used on downstream applications without further risk evaluation and mitigation. The model should not be used to intentionally create or disseminate music pieces that create hostile or alienating environments for people.
This includes generating music that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes. ## Metrics **Model performance measures:** We used the following objective measures to evaluate the model on a standard music benchmark: - Frechet Audio Distance computed on features extracted from a pre-trained audio classifier (VGGish) - Kullback-Leibler Divergence on label distributions extracted from a pre-trained audio classifier (PaSST) - CLAP Score between audio embedding and text embedding extracted from a pre-trained CLAP model Additionally, we ran qualitative studies with human participants, evaluating the performance of the model along the following axes: - Overall quality of the music samples; - Text relevance to the provided text input; - Adherence to the melody for melody-guided music generation. More details on performance measures and human studies can be found in the paper. **Decision thresholds:** Not applicable. ## Evaluation datasets The model was evaluated on the [MusicCaps benchmark](https://www.kaggle.com/datasets/googleai/musiccaps) and on an in-domain held-out evaluation set, with no artist overlap with the training set. ## Training datasets The model was trained on licensed data using the following sources: the [Meta Music Initiative Sound Collection](https://www.fb.com/sound), [Shutterstock music collection](https://www.shutterstock.com/music) and the [Pond5 music collection](https://www.pond5.com/). See the paper for more details about the training set and corresponding preprocessing. ## Evaluation results Below are the objective metrics obtained on MusicCaps with the released model. Note that for the publicly released models, we had all the datasets go through a state-of-the-art music source separation method, namely using the open source [Hybrid Transformer for Music Source Separation](https://github.com/facebookresearch/demucs) (HT-Demucs), in order to keep only the instrumental part. This explains the difference in objective metrics compared with the models used in the paper. | Model | Frechet Audio Distance | KLD | Text Consistency | Chroma Cosine Similarity | |---|---|---|---|---| | facebook/musicgen-small | 4.88 | 1.42 | 0.27 | - | | facebook/musicgen-medium | 5.14 | 1.38 | 0.28 | - | | **facebook/musicgen-large** | 5.48 | 1.37 | 0.28 | - | | facebook/musicgen-melody | 4.93 | 1.41 | 0.27 | 0.44 | More information can be found in the paper [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284), in the Results section. ## Limitations and biases **Data:** The data sources used to train the model are created by music professionals and covered by legal agreements with the right holders. The model was trained on 20K hours of data; we believe that scaling the model on larger datasets can further improve its performance. **Mitigations:** Vocals have been removed from the data source using corresponding tags, and then using a state-of-the-art music source separation method, namely using the open source [Hybrid Transformer for Music Source Separation](https://github.com/facebookresearch/demucs) (HT-Demucs). **Limitations:** - The model is not able to generate realistic vocals. - The model has been trained with English descriptions and will not perform as well in other languages. - The model does not perform equally well for all music styles and cultures. - The model sometimes generates the end of a song, collapsing to silence.
- It is sometimes difficult to assess what types of text descriptions provide the best generations. Prompt engineering may be required to obtain satisfying results. **Biases:** The source of data is potentially lacking diversity, and not all music cultures are equally represented in the dataset. The model may not perform equally well on the wide variety of music genres that exist. The generated samples from the model will reflect the biases from the training data. Further work on this model should include methods for balanced and just representations of cultures, for example, by scaling the training data to be both diverse and inclusive. **Risks and harms:** Biases and limitations of the model may lead to generation of samples that may be considered biased, inappropriate, or offensive. We believe that providing the code to reproduce the research and train new models will make it possible to broaden the application to new and more representative data. **Use cases:** Users must be aware of the biases, limitations and risks of the model. MusicGen is a model developed for artificial intelligence research on controllable music generation. As such, it should not be used for downstream applications without further investigation and mitigation of risks.
{"license": "cc-by-nc-4.0", "tags": ["musicgen"], "inference": true}
karlwennerstrom/text-to-music
null
[ "transformers", "pytorch", "musicgen", "text-to-audio", "arxiv:2306.05284", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
null
2024-05-03T17:00:24+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # fine_tuned_cb_bert This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 4.2169 - Accuracy: 0.3636 - F1: 0.2430 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 400 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-------:|:----:|:---------------:|:--------:|:------:| | 0.7239 | 3.5714 | 50 | 1.2945 | 0.3182 | 0.1536 | | 0.3879 | 7.1429 | 100 | 1.6236 | 0.4545 | 0.4158 | | 0.1546 | 10.7143 | 150 | 3.1975 | 0.3636 | 0.2430 | | 0.0741 | 14.2857 | 200 | 2.9703 | 0.4545 | 0.3895 | | 0.0323 | 17.8571 | 250 | 3.8104 | 0.3636 | 0.2430 | | 0.0073 | 21.4286 | 300 | 4.0583 | 0.3636 | 0.2430 | | 0.0037 | 25.0 | 350 | 4.3166 | 0.3636 | 0.2430 | | 0.0032 | 28.5714 | 400 | 4.2169 | 0.3636 | 0.2430 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.3.0 - Datasets 2.19.0 - Tokenizers 0.19.1
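For quick use, the fine-tuned classifier can be queried through the text-classification pipeline; a hedged sketch (the input sentence is illustrative, and the card does not document the label mapping):

```python
# Hedged sketch: query the fine-tuned classifier via the pipeline API.
# The example input is illustrative; label names depend on the (undocumented)
# id2label mapping saved with the model.
from transformers import pipeline

clf = pipeline("text-classification", model="lenatr99/fine_tuned_cb_bert")
print(clf("It rained all day, so the match was cancelled."))
```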
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "base_model": "google-bert/bert-base-uncased", "model-index": [{"name": "fine_tuned_cb_bert", "results": []}]}
lenatr99/fine_tuned_cb_bert
null
[ "transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-03T17:00:27+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_tf_1-seqsight_65536_512_47M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_tf_1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_1) dataset. It achieves the following results on the evaluation set: - Loss: 0.3560 - F1 Score: 0.8466 - Accuracy: 0.847 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.5345 | 0.83 | 200 | 0.5325 | 0.7312 | 0.733 | | 0.4938 | 1.67 | 400 | 0.5185 | 0.7422 | 0.743 | | 0.4863 | 2.5 | 600 | 0.5107 | 0.7550 | 0.755 | | 0.4809 | 3.33 | 800 | 0.5049 | 0.7437 | 0.744 | | 0.4788 | 4.17 | 1000 | 0.5083 | 0.7518 | 0.754 | | 0.4687 | 5.0 | 1200 | 0.5023 | 0.7544 | 0.755 | | 0.4655 | 5.83 | 1400 | 0.4938 | 0.7450 | 0.745 | | 0.463 | 6.67 | 1600 | 0.4967 | 0.7490 | 0.749 | | 0.4618 | 7.5 | 1800 | 0.4922 | 0.7523 | 0.753 | | 0.4539 | 8.33 | 2000 | 0.4933 | 0.7569 | 0.757 | | 0.454 | 9.17 | 2200 | 0.4876 | 0.7560 | 0.756 | | 0.4526 | 10.0 | 2400 | 0.4948 | 0.7604 | 0.761 | | 0.4519 | 10.83 | 2600 | 0.4926 | 0.7617 | 0.763 | | 0.4475 | 11.67 | 2800 | 0.4907 | 0.7559 | 0.756 | | 0.4382 | 12.5 | 3000 | 0.4924 | 0.7630 | 0.763 | | 0.4491 | 13.33 | 3200 | 0.4939 | 0.7457 | 0.746 | | 0.4398 | 14.17 | 3400 | 0.4853 | 0.7516 | 0.752 | | 0.4363 | 15.0 | 3600 | 0.4910 | 0.7672 | 0.768 | | 0.4353 | 15.83 | 3800 | 0.4913 | 0.7627 | 0.763 | | 0.4364 | 16.67 | 4000 | 0.4920 | 0.7656 | 0.766 | | 0.4324 | 17.5 | 4200 | 0.4928 | 0.7567 | 0.757 | | 0.4252 | 18.33 | 4400 | 0.5010 | 0.7638 | 0.764 | | 0.4366 | 19.17 | 4600 | 0.4923 | 0.7638 | 0.764 | | 0.4309 | 20.0 | 4800 | 0.4919 | 0.7610 | 0.761 | | 0.428 | 20.83 | 5000 | 0.4988 | 0.7630 | 0.763 | | 0.4249 | 21.67 | 5200 | 0.4914 | 0.7670 | 0.767 | | 0.421 | 22.5 | 5400 | 0.4998 | 0.7599 | 0.76 | | 0.4217 | 23.33 | 5600 | 0.4969 | 0.7646 | 0.765 | | 0.4248 | 24.17 | 5800 | 0.4990 | 0.7588 | 0.759 | | 0.4222 | 25.0 | 6000 | 0.4928 | 0.7630 | 0.763 | | 0.4194 | 25.83 | 6200 | 0.4907 | 0.7620 | 0.762 | | 0.4159 | 26.67 | 6400 | 0.4950 | 0.7659 | 0.766 | | 0.4183 | 27.5 | 6600 | 0.4966 | 0.7680 | 0.768 | | 0.4134 | 28.33 | 6800 | 0.4951 | 0.7659 | 0.766 | | 0.4152 | 29.17 | 7000 | 0.4956 | 0.7620 | 0.762 | | 0.4143 | 30.0 | 7200 | 0.4943 | 0.7518 | 0.752 | | 0.4141 | 30.83 | 7400 | 0.4967 | 0.7599 | 0.76 | | 0.4063 | 31.67 | 7600 | 0.5028 | 0.7579 | 0.758 | | 0.4144 | 32.5 | 7800 | 0.4986 | 0.7610 | 0.761 | | 0.4087 | 33.33 | 8000 | 0.4979 | 0.7629 | 0.763 | | 0.4125 | 34.17 | 8200 | 0.4999 | 0.7650 | 0.765 | | 0.4084 | 35.0 | 8400 | 0.4981 | 0.7640 | 0.764 | | 0.411 | 35.83 | 8600 | 0.4975 | 0.7580 | 0.758 | | 0.4117 | 36.67 | 8800 | 0.4977 | 0.7570 | 0.757 | | 0.4042 | 37.5 | 9000 | 0.5037 | 0.7567 | 0.757 | | 0.4046 | 38.33 | 9200 | 0.5019 | 
0.7620 | 0.762 | | 0.407 | 39.17 | 9400 | 0.5006 | 0.7650 | 0.765 | | 0.404 | 40.0 | 9600 | 0.5043 | 0.7599 | 0.76 | | 0.4041 | 40.83 | 9800 | 0.5028 | 0.7620 | 0.762 | | 0.4037 | 41.67 | 10000 | 0.5027 | 0.7580 | 0.758 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
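Because this is a LoRA adapter, it can also be folded into the base weights for adapter-free deployment; a hedged sketch (the model class and binary label count are assumptions, as in the earlier sketches):

```python
# Hedged sketch: merge the LoRA deltas into the base model so it can be used
# without the peft runtime. Model class and num_labels are assumptions.
from peft import PeftModel
from transformers import AutoModelForSequenceClassification

base = AutoModelForSequenceClassification.from_pretrained(
    "mahdibaghbanzadeh/seqsight_65536_512_47M", num_labels=2
)
peft_model = PeftModel.from_pretrained(
    base, "mahdibaghbanzadeh/GUE_tf_1-seqsight_65536_512_47M-L32_f"
)
merged = peft_model.merge_and_unload()  # fold the LoRA weights into the base
merged.save_pretrained("gue_tf_1_l32_merged")
```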
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_tf_1-seqsight_65536_512_47M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_tf_1-seqsight_65536_512_47M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_65536_512_47M", "region:us" ]
null
2024-05-03T17:00:42+00:00
null
transformers
# Uploaded model - **Developed by:** animaRegem - **License:** apache-2.0 - **Finetuned from model:** unsloth/gemma-7b-bnb-4bit This Gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
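The card gives no usage snippet; one hedged way to load the adapter for generation is PEFT's `AutoPeftModelForCausalLM` (this assumes the repo stores a LoRA adapter over the 4-bit base, as the card implies, and that `bitsandbytes` is installed for the 4-bit weights):

```python
# Hedged sketch: load the LoRA adapter over the 4-bit Gemma base for generation.
# Assumes the repo is a PEFT adapter and bitsandbytes is available; the prompt
# is illustrative.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained(
    "animaRegem/gemma-7b-lora-0_1-malayalam", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("unsloth/gemma-7b-bnb-4bit")

inputs = tokenizer("Write a sentence in Malayalam:", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```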
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "gemma", "trl"], "base_model": "unsloth/gemma-7b-bnb-4bit"}
animaRegem/gemma-7b-lora-0_1-malayalam
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "gemma", "trl", "en", "base_model:unsloth/gemma-7b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-03T17:01:22+00:00
null
transformers
{"library_name": "transformers", "tags": ["unsloth"]}
animaRegem/gemma-2b-lora-0_1-malayalam-tokenizer
null
[ "transformers", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-03T17:01:36+00:00
null
null
{}
ttc0000/mistral_HFTrainer_instruct02_Sample1_lora_r64_a128_optim32bit_No2
null
[ "safetensors", "region:us" ]
null
2024-05-03T17:01:36+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# GUE_tf_4-seqsight_65536_512_47M-L1_f

This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_tf_4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_4) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3592
- F1 Score: 0.8408
- Accuracy: 0.841

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000

### Training results

| Training Loss | Epoch | Step  | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5524        | 1.34  | 200   | 0.5012          | 0.7490   | 0.749    |
| 0.4884        | 2.68  | 400   | 0.4884          | 0.7503   | 0.751    |
| 0.4809        | 4.03  | 600   | 0.4846          | 0.7535   | 0.754    |
| 0.4723        | 5.37  | 800   | 0.4818          | 0.7550   | 0.755    |
| 0.4605        | 6.71  | 1000  | 0.4785          | 0.7562   | 0.757    |
| 0.4623        | 8.05  | 1200  | 0.4734          | 0.7670   | 0.767    |
| 0.458         | 9.4   | 1400  | 0.4741          | 0.7566   | 0.757    |
| 0.457         | 10.74 | 1600  | 0.4798          | 0.7559   | 0.757    |
| 0.4518        | 12.08 | 1800  | 0.4766          | 0.7597   | 0.761    |
| 0.4501        | 13.42 | 2000  | 0.4673          | 0.7566   | 0.757    |
| 0.4479        | 14.77 | 2200  | 0.4684          | 0.7640   | 0.764    |
| 0.4487        | 16.11 | 2400  | 0.4664          | 0.7636   | 0.764    |
| 0.4443        | 17.45 | 2600  | 0.4687          | 0.7640   | 0.764    |
| 0.4431        | 18.79 | 2800  | 0.4678          | 0.7610   | 0.761    |
| 0.4454        | 20.13 | 3000  | 0.4639          | 0.7580   | 0.758    |
| 0.4384        | 21.48 | 3200  | 0.4688          | 0.7618   | 0.762    |
| 0.4413        | 22.82 | 3400  | 0.4657          | 0.7669   | 0.767    |
| 0.4389        | 24.16 | 3600  | 0.4631          | 0.7620   | 0.762    |
| 0.4391        | 25.5  | 3800  | 0.4676          | 0.7645   | 0.765    |
| 0.4374        | 26.85 | 4000  | 0.4624          | 0.7710   | 0.771    |
| 0.436         | 28.19 | 4200  | 0.4631          | 0.7660   | 0.766    |
| 0.434         | 29.53 | 4400  | 0.4614          | 0.7630   | 0.763    |
| 0.4349        | 30.87 | 4600  | 0.4602          | 0.7679   | 0.768    |
| 0.4348        | 32.21 | 4800  | 0.4602          | 0.7670   | 0.767    |
| 0.43          | 33.56 | 5000  | 0.4626          | 0.7647   | 0.765    |
| 0.4317        | 34.9  | 5200  | 0.4601          | 0.7700   | 0.77     |
| 0.4345        | 36.24 | 5400  | 0.4570          | 0.7680   | 0.768    |
| 0.4285        | 37.58 | 5600  | 0.4581          | 0.7670   | 0.767    |
| 0.4292        | 38.93 | 5800  | 0.4563          | 0.7650   | 0.765    |
| 0.4294        | 40.27 | 6000  | 0.4574          | 0.7650   | 0.765    |
| 0.4272        | 41.61 | 6200  | 0.4580          | 0.7678   | 0.768    |
| 0.4283        | 42.95 | 6400  | 0.4558          | 0.7670   | 0.767    |
| 0.4296        | 44.3  | 6600  | 0.4553          | 0.7690   | 0.769    |
| 0.4236        | 45.64 | 6800  | 0.4552          | 0.7700   | 0.77     |
| 0.4276        | 46.98 | 7000  | 0.4557          | 0.7670   | 0.767    |
| 0.4287        | 48.32 | 7200  | 0.4534          | 0.7670   | 0.767    |
| 0.4249        | 49.66 | 7400  | 0.4563          | 0.7678   | 0.768    |
| 0.4235        | 51.01 | 7600  | 0.4532          | 0.7640   | 0.764    |
| 0.4265        | 52.35 | 7800  | 0.4539          | 0.7630   | 0.763    |
| 0.4211        | 53.69 | 8000  | 0.4534          | 0.7720   | 0.772    |
| 0.4253        | 55.03 | 8200  | 0.4546          | 0.7770   | 0.777    |
| 0.4232        | 56.38 | 8400  | 0.4547          | 0.7710   | 0.771    |
| 0.4248        | 57.72 | 8600  | 0.4541          | 0.7697   | 0.77     |
| 0.4218        | 59.06 | 8800  | 0.4536          | 0.7710   | 0.771    |
| 0.4235        | 60.4  | 9000  | 0.4524          | 0.7710   | 0.771    |
| 0.4232        | 61.74 | 9200  | 0.4526          | 0.7699   | 0.77     |
| 0.4238        | 63.09 | 9400  | 0.4524          | 0.7710   | 0.771    |
| 0.4265        | 64.43 | 9600  | 0.4520          | 0.7730   | 0.773    |
| 0.4192        | 65.77 | 9800  | 0.4526          | 0.7710   | 0.771    |
| 0.4209        | 67.11 | 10000 | 0.4525          | 0.7710   | 0.771    |

### Framework versions

- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_tf_4-seqsight_65536_512_47M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_tf_4-seqsight_65536_512_47M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_65536_512_47M", "region:us" ]
null
2024-05-03T17:03:11+00:00
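The seqsight card above fully specifies its `Trainer` hyperparameters. As a rough illustration, here is a minimal sketch of how those numbers could be wired into `transformers.TrainingArguments`; the classification head, label count, and the dataset's column and split names are assumptions, since the card does not state them (transformers 4.38, the version the card lists, still uses the `evaluation_strategy` argument name):

```python
# Hypothetical reconstruction of the card's Trainer run; column/split names,
# num_labels, and the sequence-classification head are assumptions.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tok = AutoTokenizer.from_pretrained("mahdibaghbanzadeh/seqsight_65536_512_47M")
ds = load_dataset("mahdibaghbanzadeh/GUE_tf_4")  # dataset named in the card
ds = ds.map(lambda b: tok(b["sequence"], truncation=True), batched=True)  # "sequence" column assumed

model = AutoModelForSequenceClassification.from_pretrained(
    "mahdibaghbanzadeh/seqsight_65536_512_47M", num_labels=2)  # binary task assumed

args = TrainingArguments(
    output_dir="GUE_tf_4-seqsight_65536_512_47M-L1_f",
    learning_rate=5e-4,               # learning_rate: 0.0005
    per_device_train_batch_size=128,  # train_batch_size: 128
    per_device_eval_batch_size=128,   # eval_batch_size: 128
    seed=42,
    max_steps=10_000,                 # training_steps: 10000
    lr_scheduler_type="linear",
    adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-8,
    evaluation_strategy="steps", eval_steps=200,  # matches the 200-step table cadence
)

trainer = Trainer(model=model, args=args, tokenizer=tok,
                  train_dataset=ds["train"], eval_dataset=ds["test"])
trainer.train()
```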
null
transformers
# Uploaded model

- **Developed by:** xsa-dev
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "gguf"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
xsa-dev/hugs_llama3_technique_ft_16bit_GGUF_1
null
[ "transformers", "text-generation-inference", "unsloth", "llama", "gguf", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-03T17:03:18+00:00
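For cards like the one above that only say a model "was trained 2x faster with Unsloth" and TRL, the usual recipe (following Unsloth's public examples) looks roughly like the sketch below. The LoRA rank, sequence length, step count, and the stand-in dataset are all illustrative assumptions, not values taken from the card:

```python
# Hypothetical Unsloth + TRL fine-tune; every hyperparameter here is illustrative.
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",  # base checkpoint named in the card
    max_seq_length=2048,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(      # attach LoRA adapters
    model, r=16, lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

dataset = load_dataset("imdb", split="train")  # stand-in corpus with a "text" column

trainer = SFTTrainer(
    model=model, tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",                 # assumes a pre-formatted text column
    max_seq_length=2048,
    args=TrainingArguments(output_dir="outputs", max_steps=60,
                           per_device_train_batch_size=2,
                           gradient_accumulation_steps=4,
                           learning_rate=2e-4, logging_steps=1),
)
trainer.train()
```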
null
null
{}
yungplin/lasttest1
null
[ "region:us" ]
null
2024-05-03T17:03:25+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# GUE_tf_4-seqsight_65536_512_47M-L32_f

This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_tf_4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_4) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3946
- F1 Score: 0.8378
- Accuracy: 0.838

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000

### Training results

| Training Loss | Epoch | Step  | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5197        | 1.34  | 200   | 0.4804          | 0.7559   | 0.756    |
| 0.4632        | 2.68  | 400   | 0.4707          | 0.7573   | 0.758    |
| 0.451         | 4.03  | 600   | 0.4674          | 0.7706   | 0.771    |
| 0.4386        | 5.37  | 800   | 0.4713          | 0.7608   | 0.761    |
| 0.4245        | 6.71  | 1000  | 0.4641          | 0.7704   | 0.771    |
| 0.4206        | 8.05  | 1200  | 0.4561          | 0.7650   | 0.765    |
| 0.4135        | 9.4   | 1400  | 0.4505          | 0.7729   | 0.773    |
| 0.4101        | 10.74 | 1600  | 0.4429          | 0.7760   | 0.776    |
| 0.398         | 12.08 | 1800  | 0.4503          | 0.7834   | 0.785    |
| 0.3924        | 13.42 | 2000  | 0.4314          | 0.7789   | 0.779    |
| 0.3862        | 14.77 | 2200  | 0.4378          | 0.7790   | 0.779    |
| 0.3818        | 16.11 | 2400  | 0.4344          | 0.7856   | 0.786    |
| 0.37          | 17.45 | 2600  | 0.4382          | 0.7819   | 0.782    |
| 0.3673        | 18.79 | 2800  | 0.4382          | 0.7930   | 0.793    |
| 0.3668        | 20.13 | 3000  | 0.4375          | 0.7919   | 0.792    |
| 0.355         | 21.48 | 3200  | 0.4364          | 0.8042   | 0.805    |
| 0.3526        | 22.82 | 3400  | 0.4336          | 0.8015   | 0.802    |
| 0.3472        | 24.16 | 3600  | 0.4297          | 0.8036   | 0.804    |
| 0.3397        | 25.5  | 3800  | 0.4356          | 0.8021   | 0.803    |
| 0.3336        | 26.85 | 4000  | 0.4270          | 0.8070   | 0.807    |
| 0.3311        | 28.19 | 4200  | 0.4383          | 0.8111   | 0.812    |
| 0.3216        | 29.53 | 4400  | 0.4312          | 0.8140   | 0.814    |
| 0.3223        | 30.87 | 4600  | 0.4287          | 0.8110   | 0.811    |
| 0.3171        | 32.21 | 4800  | 0.4274          | 0.8198   | 0.82     |
| 0.3087        | 33.56 | 5000  | 0.4340          | 0.8119   | 0.812    |
| 0.3112        | 34.9  | 5200  | 0.4324          | 0.8200   | 0.82     |
| 0.3074        | 36.24 | 5400  | 0.4328          | 0.8227   | 0.823    |
| 0.3009        | 37.58 | 5600  | 0.4299          | 0.8179   | 0.818    |
| 0.295         | 38.93 | 5800  | 0.4297          | 0.8229   | 0.823    |
| 0.2955        | 40.27 | 6000  | 0.4356          | 0.8257   | 0.826    |
| 0.291         | 41.61 | 6200  | 0.4261          | 0.8248   | 0.825    |
| 0.2879        | 42.95 | 6400  | 0.4289          | 0.8180   | 0.818    |
| 0.2859        | 44.3  | 6600  | 0.4275          | 0.8246   | 0.825    |
| 0.2799        | 45.64 | 6800  | 0.4301          | 0.8209   | 0.821    |
| 0.2806        | 46.98 | 7000  | 0.4298          | 0.8258   | 0.826    |
| 0.28          | 48.32 | 7200  | 0.4359          | 0.8283   | 0.829    |
| 0.2787        | 49.66 | 7400  | 0.4247          | 0.8276   | 0.828    |
| 0.2715        | 51.01 | 7600  | 0.4292          | 0.8298   | 0.83     |
| 0.2738        | 52.35 | 7800  | 0.4339          | 0.8294   | 0.83     |
| 0.2676        | 53.69 | 8000  | 0.4320          | 0.8257   | 0.826    |
| 0.2698        | 55.03 | 8200  | 0.4308          | 0.8289   | 0.829    |
| 0.2661        | 56.38 | 8400  | 0.4333          | 0.8297   | 0.83     |
| 0.2659        | 57.72 | 8600  | 0.4364          | 0.8286   | 0.829    |
| 0.265         | 59.06 | 8800  | 0.4285          | 0.8267   | 0.827    |
| 0.2613        | 60.4  | 9000  | 0.4340          | 0.8297   | 0.83     |
| 0.2622        | 61.74 | 9200  | 0.4372          | 0.8294   | 0.83     |
| 0.259         | 63.09 | 9400  | 0.4359          | 0.8346   | 0.835    |
| 0.2587        | 64.43 | 9600  | 0.4384          | 0.8324   | 0.833    |
| 0.2568        | 65.77 | 9800  | 0.4364          | 0.8326   | 0.833    |
| 0.2581        | 67.11 | 10000 | 0.4376          | 0.8325   | 0.833    |

### Framework versions

- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_tf_4-seqsight_65536_512_47M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_tf_4-seqsight_65536_512_47M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_65536_512_47M", "region:us" ]
null
2024-05-03T17:03:56+00:00
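These seqsight records ship PEFT adapters (PEFT 0.9.0 per the framework list), but the cards never state the adapter configuration, and what the L1/L8/L32 suffixes encode is not documented. A generic, hypothetical sketch of attaching a LoRA adapter for sequence classification with peft-style APIs might look like:

```python
# Hypothetical LoRA setup; r/alpha/dropout and target_modules are illustrative,
# not taken from the cards. For custom architectures like seqsight, the module
# names must be set to match the actual layer names of the base model.
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForSequenceClassification

base = AutoModelForSequenceClassification.from_pretrained(
    "mahdibaghbanzadeh/seqsight_65536_512_47M", num_labels=2)  # head assumed

lora_cfg = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8, lora_alpha=16, lora_dropout=0.1,
    target_modules=["query", "value"],  # placeholder names, adjust per architecture
)
model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()  # only the adapter weights should be trainable
```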
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# GUE_tf_4-seqsight_65536_512_47M-L8_f

This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_tf_4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_4) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3586
- F1 Score: 0.8414
- Accuracy: 0.842

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000

### Training results

| Training Loss | Epoch | Step  | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5313        | 1.34  | 200   | 0.4867          | 0.7530   | 0.753    |
| 0.4729        | 2.68  | 400   | 0.4783          | 0.7539   | 0.755    |
| 0.4632        | 4.03  | 600   | 0.4764          | 0.7586   | 0.76     |
| 0.4539        | 5.37  | 800   | 0.4722          | 0.7579   | 0.758    |
| 0.4421        | 6.71  | 1000  | 0.4692          | 0.7671   | 0.768    |
| 0.4418        | 8.05  | 1200  | 0.4633          | 0.7627   | 0.763    |
| 0.4376        | 9.4   | 1400  | 0.4623          | 0.7610   | 0.761    |
| 0.437         | 10.74 | 1600  | 0.4580          | 0.7719   | 0.772    |
| 0.4274        | 12.08 | 1800  | 0.4650          | 0.7677   | 0.769    |
| 0.4248        | 13.42 | 2000  | 0.4510          | 0.7720   | 0.772    |
| 0.4224        | 14.77 | 2200  | 0.4550          | 0.7700   | 0.77     |
| 0.4205        | 16.11 | 2400  | 0.4479          | 0.7729   | 0.773    |
| 0.4143        | 17.45 | 2600  | 0.4532          | 0.7680   | 0.768    |
| 0.413         | 18.79 | 2800  | 0.4500          | 0.7770   | 0.777    |
| 0.4137        | 20.13 | 3000  | 0.4524          | 0.7658   | 0.766    |
| 0.4041        | 21.48 | 3200  | 0.4516          | 0.7626   | 0.763    |
| 0.4082        | 22.82 | 3400  | 0.4464          | 0.7708   | 0.771    |
| 0.4037        | 24.16 | 3600  | 0.4444          | 0.7718   | 0.772    |
| 0.4025        | 25.5  | 3800  | 0.4515          | 0.7690   | 0.77     |
| 0.3983        | 26.85 | 4000  | 0.4446          | 0.7769   | 0.777    |
| 0.3976        | 28.19 | 4200  | 0.4387          | 0.7738   | 0.774    |
| 0.3931        | 29.53 | 4400  | 0.4395          | 0.7800   | 0.78     |
| 0.3931        | 30.87 | 4600  | 0.4362          | 0.7789   | 0.779    |
| 0.393         | 32.21 | 4800  | 0.4352          | 0.7820   | 0.782    |
| 0.3884        | 33.56 | 5000  | 0.4389          | 0.7770   | 0.777    |
| 0.3885        | 34.9  | 5200  | 0.4355          | 0.7770   | 0.777    |
| 0.3895        | 36.24 | 5400  | 0.4320          | 0.7809   | 0.781    |
| 0.382         | 37.58 | 5600  | 0.4337          | 0.7840   | 0.784    |
| 0.3804        | 38.93 | 5800  | 0.4337          | 0.7840   | 0.784    |
| 0.3816        | 40.27 | 6000  | 0.4326          | 0.7879   | 0.788    |
| 0.3756        | 41.61 | 6200  | 0.4336          | 0.7950   | 0.795    |
| 0.3769        | 42.95 | 6400  | 0.4329          | 0.7850   | 0.785    |
| 0.3767        | 44.3  | 6600  | 0.4299          | 0.7939   | 0.794    |
| 0.3706        | 45.64 | 6800  | 0.4318          | 0.7890   | 0.789    |
| 0.3749        | 46.98 | 7000  | 0.4320          | 0.7910   | 0.791    |
| 0.3776        | 48.32 | 7200  | 0.4268          | 0.7909   | 0.791    |
| 0.3712        | 49.66 | 7400  | 0.4277          | 0.7920   | 0.792    |
| 0.3688        | 51.01 | 7600  | 0.4292          | 0.7930   | 0.793    |
| 0.3726        | 52.35 | 7800  | 0.4302          | 0.7919   | 0.792    |
| 0.367         | 53.69 | 8000  | 0.4283          | 0.7950   | 0.795    |
| 0.3693        | 55.03 | 8200  | 0.4328          | 0.7920   | 0.792    |
| 0.3686        | 56.38 | 8400  | 0.4288          | 0.7940   | 0.794    |
| 0.3668        | 57.72 | 8600  | 0.4300          | 0.7958   | 0.796    |
| 0.3645        | 59.06 | 8800  | 0.4292          | 0.7930   | 0.793    |
| 0.3665        | 60.4  | 9000  | 0.4279          | 0.7900   | 0.79     |
| 0.3669        | 61.74 | 9200  | 0.4286          | 0.7909   | 0.791    |
| 0.3658        | 63.09 | 9400  | 0.4284          | 0.7920   | 0.792    |
| 0.3654        | 64.43 | 9600  | 0.4283          | 0.7929   | 0.793    |
| 0.3628        | 65.77 | 9800  | 0.4286          | 0.7920   | 0.792    |
| 0.3624        | 67.11 | 10000 | 0.4286          | 0.7910   | 0.791    |

### Framework versions

- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_tf_4-seqsight_65536_512_47M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_tf_4-seqsight_65536_512_47M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_65536_512_47M", "region:us" ]
null
2024-05-03T17:03:56+00:00
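Each of these cards reports paired F1 Score and Accuracy columns at every 200-step evaluation. A plausible `compute_metrics` hook producing those two numbers is sketched below; the cards do not say whether binary or macro F1 was used, so the averaging mode here is an assumption:

```python
# Hypothetical metrics hook for the F1/Accuracy columns; the F1 averaging
# mode ("macro") is an assumption not stated in the cards.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)  # predicted class = highest logit
    return {"f1": f1_score(labels, preds, average="macro"),
            "accuracy": accuracy_score(labels, preds)}
```

Passed as `Trainer(compute_metrics=compute_metrics, ...)`, a hook like this would emit the two metric columns shown in the training-results tables.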
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# GUE_tf_3-seqsight_65536_512_47M-L1_f

This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_tf_3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5577
- F1 Score: 0.7107
- Accuracy: 0.712

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000

### Training results

| Training Loss | Epoch | Step  | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6292        | 0.93  | 200   | 0.5841          | 0.6911   | 0.691    |
| 0.6031        | 1.87  | 400   | 0.5730          | 0.6984   | 0.699    |
| 0.598         | 2.8   | 600   | 0.5653          | 0.7042   | 0.708    |
| 0.5917        | 3.74  | 800   | 0.5626          | 0.7055   | 0.706    |
| 0.5892        | 4.67  | 1000  | 0.5581          | 0.7149   | 0.717    |
| 0.5869        | 5.61  | 1200  | 0.5559          | 0.7156   | 0.717    |
| 0.583         | 6.54  | 1400  | 0.5525          | 0.7224   | 0.725    |
| 0.5847        | 7.48  | 1600  | 0.5568          | 0.7131   | 0.713    |
| 0.5835        | 8.41  | 1800  | 0.5518          | 0.7112   | 0.712    |
| 0.5863        | 9.35  | 2000  | 0.5531          | 0.7186   | 0.719    |
| 0.5804        | 10.28 | 2200  | 0.5613          | 0.6986   | 0.699    |
| 0.5786        | 11.21 | 2400  | 0.5500          | 0.7256   | 0.727    |
| 0.5795        | 12.15 | 2600  | 0.5485          | 0.7174   | 0.719    |
| 0.5781        | 13.08 | 2800  | 0.5472          | 0.7237   | 0.726    |
| 0.577         | 14.02 | 3000  | 0.5497          | 0.7154   | 0.716    |
| 0.5776        | 14.95 | 3200  | 0.5473          | 0.7127   | 0.714    |
| 0.5774        | 15.89 | 3400  | 0.5464          | 0.7134   | 0.715    |
| 0.5741        | 16.82 | 3600  | 0.5471          | 0.7119   | 0.713    |
| 0.5733        | 17.76 | 3800  | 0.5490          | 0.7141   | 0.715    |
| 0.5749        | 18.69 | 4000  | 0.5510          | 0.7167   | 0.717    |
| 0.5727        | 19.63 | 4200  | 0.5438          | 0.7212   | 0.724    |
| 0.5754        | 20.56 | 4400  | 0.5446          | 0.7156   | 0.717    |
| 0.5712        | 21.5  | 4600  | 0.5517          | 0.7121   | 0.712    |
| 0.5709        | 22.43 | 4800  | 0.5448          | 0.7250   | 0.726    |
| 0.5744        | 23.36 | 5000  | 0.5475          | 0.7176   | 0.718    |
| 0.5717        | 24.3  | 5200  | 0.5508          | 0.7131   | 0.713    |
| 0.5699        | 25.23 | 5400  | 0.5450          | 0.7226   | 0.724    |
| 0.5734        | 26.17 | 5600  | 0.5457          | 0.7183   | 0.719    |
| 0.5695        | 27.1  | 5800  | 0.5439          | 0.7183   | 0.72     |
| 0.569         | 28.04 | 6000  | 0.5439          | 0.7221   | 0.723    |
| 0.568         | 28.97 | 6200  | 0.5522          | 0.7059   | 0.706    |
| 0.572         | 29.91 | 6400  | 0.5458          | 0.7225   | 0.723    |
| 0.5703        | 30.84 | 6600  | 0.5456          | 0.7164   | 0.717    |
| 0.5681        | 31.78 | 6800  | 0.5452          | 0.7238   | 0.724    |
| 0.5679        | 32.71 | 7000  | 0.5425          | 0.7241   | 0.725    |
| 0.572         | 33.64 | 7200  | 0.5433          | 0.7218   | 0.723    |
| 0.5652        | 34.58 | 7400  | 0.5510          | 0.7109   | 0.711    |
| 0.5702        | 35.51 | 7600  | 0.5463          | 0.7180   | 0.718    |
| 0.5678        | 36.45 | 7800  | 0.5453          | 0.7268   | 0.727    |
| 0.5686        | 37.38 | 8000  | 0.5444          | 0.7207   | 0.721    |
| 0.5625        | 38.32 | 8200  | 0.5423          | 0.7175   | 0.719    |
| 0.5671        | 39.25 | 8400  | 0.5440          | 0.7212   | 0.722    |
| 0.5668        | 40.19 | 8600  | 0.5440          | 0.7233   | 0.724    |
| 0.5653        | 41.12 | 8800  | 0.5445          | 0.7244   | 0.725    |
| 0.567         | 42.06 | 9000  | 0.5445          | 0.7285   | 0.729    |
| 0.566         | 42.99 | 9200  | 0.5456          | 0.7228   | 0.723    |
| 0.5676        | 43.93 | 9400  | 0.5465          | 0.7229   | 0.723    |
| 0.5667        | 44.86 | 9600  | 0.5445          | 0.7276   | 0.728    |
| 0.5662        | 45.79 | 9800  | 0.5445          | 0.7286   | 0.729    |
| 0.5634        | 46.73 | 10000 | 0.5447          | 0.7266   | 0.727    |

### Framework versions

- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_tf_3-seqsight_65536_512_47M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_tf_3-seqsight_65536_512_47M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_65536_512_47M", "region:us" ]
null
2024-05-03T17:04:22+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# GUE_tf_3-seqsight_65536_512_47M-L8_f

This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_tf_3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5490
- F1 Score: 0.6962
- Accuracy: 0.698

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000

### Training results

| Training Loss | Epoch | Step  | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6208        | 0.93  | 200   | 0.5737          | 0.6929   | 0.693    |
| 0.5963        | 1.87  | 400   | 0.5696          | 0.6930   | 0.693    |
| 0.5907        | 2.8   | 600   | 0.5576          | 0.7162   | 0.718    |
| 0.5842        | 3.74  | 800   | 0.5619          | 0.7001   | 0.7      |
| 0.5822        | 4.67  | 1000  | 0.5538          | 0.7155   | 0.716    |
| 0.5791        | 5.61  | 1200  | 0.5470          | 0.7250   | 0.727    |
| 0.5749        | 6.54  | 1400  | 0.5498          | 0.7267   | 0.728    |
| 0.5747        | 7.48  | 1600  | 0.5501          | 0.7191   | 0.719    |
| 0.573         | 8.41  | 1800  | 0.5464          | 0.7147   | 0.715    |
| 0.5762        | 9.35  | 2000  | 0.5457          | 0.7265   | 0.728    |
| 0.5689        | 10.28 | 2200  | 0.5498          | 0.7169   | 0.717    |
| 0.5662        | 11.21 | 2400  | 0.5440          | 0.7234   | 0.725    |
| 0.5668        | 12.15 | 2600  | 0.5410          | 0.7185   | 0.721    |
| 0.5634        | 13.08 | 2800  | 0.5422          | 0.7176   | 0.722    |
| 0.5631        | 14.02 | 3000  | 0.5416          | 0.7290   | 0.73     |
| 0.5618        | 14.95 | 3200  | 0.5383          | 0.7208   | 0.724    |
| 0.5617        | 15.89 | 3400  | 0.5381          | 0.7291   | 0.731    |
| 0.5597        | 16.82 | 3600  | 0.5400          | 0.7295   | 0.731    |
| 0.5567        | 17.76 | 3800  | 0.5420          | 0.7249   | 0.727    |
| 0.558         | 18.69 | 4000  | 0.5463          | 0.7289   | 0.729    |
| 0.5563        | 19.63 | 4200  | 0.5375          | 0.7251   | 0.728    |
| 0.5584        | 20.56 | 4400  | 0.5381          | 0.7264   | 0.728    |
| 0.5523        | 21.5  | 4600  | 0.5479          | 0.7140   | 0.714    |
| 0.5526        | 22.43 | 4800  | 0.5387          | 0.7275   | 0.729    |
| 0.5567        | 23.36 | 5000  | 0.5453          | 0.7251   | 0.725    |
| 0.551         | 24.3  | 5200  | 0.5539          | 0.7054   | 0.706    |
| 0.5498        | 25.23 | 5400  | 0.5404          | 0.7268   | 0.729    |
| 0.5545        | 26.17 | 5600  | 0.5407          | 0.7299   | 0.731    |
| 0.5489        | 27.1  | 5800  | 0.5393          | 0.7272   | 0.728    |
| 0.5478        | 28.04 | 6000  | 0.5395          | 0.7292   | 0.73     |
| 0.5469        | 28.97 | 6200  | 0.5465          | 0.7191   | 0.719    |
| 0.5509        | 29.91 | 6400  | 0.5414          | 0.7290   | 0.73     |
| 0.5488        | 30.84 | 6600  | 0.5385          | 0.7241   | 0.725    |
| 0.5459        | 31.78 | 6800  | 0.5413          | 0.7247   | 0.725    |
| 0.5463        | 32.71 | 7000  | 0.5390          | 0.7283   | 0.729    |
| 0.5501        | 33.64 | 7200  | 0.5389          | 0.7248   | 0.726    |
| 0.5427        | 34.58 | 7400  | 0.5485          | 0.7079   | 0.708    |
| 0.5464        | 35.51 | 7600  | 0.5422          | 0.7220   | 0.722    |
| 0.5448        | 36.45 | 7800  | 0.5403          | 0.7304   | 0.731    |
| 0.5453        | 37.38 | 8000  | 0.5399          | 0.7252   | 0.726    |
| 0.5374        | 38.32 | 8200  | 0.5403          | 0.7270   | 0.728    |
| 0.5424        | 39.25 | 8400  | 0.5400          | 0.7282   | 0.729    |
| 0.5439        | 40.19 | 8600  | 0.5402          | 0.7264   | 0.727    |
| 0.541         | 41.12 | 8800  | 0.5409          | 0.7264   | 0.727    |
| 0.5428        | 42.06 | 9000  | 0.5407          | 0.7272   | 0.728    |
| 0.5414        | 42.99 | 9200  | 0.5419          | 0.7248   | 0.725    |
| 0.5432        | 43.93 | 9400  | 0.5418          | 0.7238   | 0.724    |
| 0.5424        | 44.86 | 9600  | 0.5401          | 0.7255   | 0.726    |
| 0.5406        | 45.79 | 9800  | 0.5407          | 0.7264   | 0.727    |
| 0.5384        | 46.73 | 10000 | 0.5410          | 0.7245   | 0.725    |

### Framework versions

- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_tf_3-seqsight_65536_512_47M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_tf_3-seqsight_65536_512_47M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_65536_512_47M", "region:us" ]
null
2024-05-03T17:05:07+00:00
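The next record wraps GritLM-7B as a bitsandbytes 4-bit quantization. Independent of that particular re-upload, loading a 7B checkpoint in 4-bit with transformers generally follows the pattern below; the quantization type, compute dtype, and device map are illustrative choices, and the prequantized repo may differ in details:

```python
# Hypothetical 4-bit load of the original GritLM-7B checkpoint;
# nf4/bfloat16/device_map are illustrative settings, not from the record.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained("GritLM/GritLM-7B")
model = AutoModelForCausalLM.from_pretrained(
    "GritLM/GritLM-7B",
    quantization_config=bnb_config,
    device_map="auto",  # requires the accelerate package
)
```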
text-generation
transformers
Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)

GritLM-7B - bnb 4bits

- Model creator: https://huggingface.co/GritLM/
- Original model: https://huggingface.co/GritLM/GritLM-7B/

Original model description:

--- pipeline_tag: text-generation inference: true license: apache-2.0 datasets: - GritLM/tulu2 tags: - mteb model-index: - name: GritLM-7B results: - task: type: Classification dataset: type: mteb/amazon_counterfactual name: MTEB AmazonCounterfactualClassification (en) config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 81.17910447761194 - type: ap value: 46.26260671758199 - type: f1 value: 75.44565719934167 - task: type: Classification dataset: type: mteb/amazon_polarity name: MTEB AmazonPolarityClassification config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 96.5161 - type: ap value: 94.79131981460425 - type: f1 value: 96.51506148413065 - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (en) config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 57.806000000000004 - type: f1 value: 56.78350156257903 - task: type: Retrieval dataset: type: arguana name: MTEB ArguAna config: default split: test revision: None metrics: - type: map_at_1 value: 38.478 - type: map_at_10 value: 54.955 - type: map_at_100 value: 54.955 - type: map_at_1000 value: 54.955 - type: map_at_3 value: 50.888999999999996 - type: map_at_5 value: 53.349999999999994 - type: mrr_at_1 value: 39.757999999999996 - type: mrr_at_10 value: 55.449000000000005 - type: mrr_at_100 value: 55.449000000000005 - type: mrr_at_1000 value: 55.449000000000005 - type: mrr_at_3 value: 51.37500000000001 - type: mrr_at_5 value: 53.822 - type: ndcg_at_1 value: 38.478 - type: ndcg_at_10 value: 63.239999999999995 - type: ndcg_at_100 value: 63.239999999999995 - type: ndcg_at_1000 value: 63.239999999999995 - type: ndcg_at_3 value: 54.935 - type: ndcg_at_5 value: 59.379000000000005 - type: precision_at_1 value: 38.478 - type: precision_at_10 value: 8.933 - type: precision_at_100 value: 0.893 - type: precision_at_1000 value: 0.089 - type: precision_at_3 value: 22.214 - type: precision_at_5 value: 15.491 - type: recall_at_1 value: 38.478 - type: recall_at_10 value: 89.331 - type: recall_at_100 value: 89.331 - type: recall_at_1000 value: 89.331 - type: recall_at_3 value: 66.643 - type: recall_at_5 value: 77.45400000000001 - task: type: Clustering dataset: type: mteb/arxiv-clustering-p2p name: MTEB ArxivClusteringP2P config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 51.67144081472449 - task: type: Clustering dataset: type: mteb/arxiv-clustering-s2s name: MTEB ArxivClusteringS2S config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 48.11256154264126 - task: type: Reranking dataset: type: mteb/askubuntudupquestions-reranking name: MTEB AskUbuntuDupQuestions config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 67.33801955487878 - type: mrr value: 80.71549487754474 - task: type: STS dataset: type: mteb/biosses-sts name: MTEB BIOSSES config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: 
cos_sim_pearson value: 88.1935203751726 - type: cos_sim_spearman value: 86.35497970498659 - type: euclidean_pearson value: 85.46910708503744 - type: euclidean_spearman value: 85.13928935405485 - type: manhattan_pearson value: 85.68373836333303 - type: manhattan_spearman value: 85.40013867117746 - task: type: Classification dataset: type: mteb/banking77 name: MTEB Banking77Classification config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 88.46753246753248 - type: f1 value: 88.43006344981134 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-p2p name: MTEB BiorxivClusteringP2P config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 40.86793640310432 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-s2s name: MTEB BiorxivClusteringS2S config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 39.80291334130727 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackAndroidRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 38.421 - type: map_at_10 value: 52.349000000000004 - type: map_at_100 value: 52.349000000000004 - type: map_at_1000 value: 52.349000000000004 - type: map_at_3 value: 48.17 - type: map_at_5 value: 50.432 - type: mrr_at_1 value: 47.353 - type: mrr_at_10 value: 58.387 - type: mrr_at_100 value: 58.387 - type: mrr_at_1000 value: 58.387 - type: mrr_at_3 value: 56.199 - type: mrr_at_5 value: 57.487 - type: ndcg_at_1 value: 47.353 - type: ndcg_at_10 value: 59.202 - type: ndcg_at_100 value: 58.848 - type: ndcg_at_1000 value: 58.831999999999994 - type: ndcg_at_3 value: 54.112 - type: ndcg_at_5 value: 56.312 - type: precision_at_1 value: 47.353 - type: precision_at_10 value: 11.459 - type: precision_at_100 value: 1.146 - type: precision_at_1000 value: 0.11499999999999999 - type: precision_at_3 value: 26.133 - type: precision_at_5 value: 18.627 - type: recall_at_1 value: 38.421 - type: recall_at_10 value: 71.89 - type: recall_at_100 value: 71.89 - type: recall_at_1000 value: 71.89 - type: recall_at_3 value: 56.58 - type: recall_at_5 value: 63.125 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackEnglishRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 38.025999999999996 - type: map_at_10 value: 50.590999999999994 - type: map_at_100 value: 51.99700000000001 - type: map_at_1000 value: 52.11599999999999 - type: map_at_3 value: 47.435 - type: map_at_5 value: 49.236000000000004 - type: mrr_at_1 value: 48.28 - type: mrr_at_10 value: 56.814 - type: mrr_at_100 value: 57.446 - type: mrr_at_1000 value: 57.476000000000006 - type: mrr_at_3 value: 54.958 - type: mrr_at_5 value: 56.084999999999994 - type: ndcg_at_1 value: 48.28 - type: ndcg_at_10 value: 56.442 - type: ndcg_at_100 value: 60.651999999999994 - type: ndcg_at_1000 value: 62.187000000000005 - type: ndcg_at_3 value: 52.866 - type: ndcg_at_5 value: 54.515 - type: precision_at_1 value: 48.28 - type: precision_at_10 value: 10.586 - type: precision_at_100 value: 1.6310000000000002 - type: precision_at_1000 value: 0.20600000000000002 - type: precision_at_3 value: 25.945 - type: precision_at_5 value: 18.076 - type: recall_at_1 value: 38.025999999999996 - type: recall_at_10 value: 66.11399999999999 - type: recall_at_100 value: 83.339 - type: recall_at_1000 value: 92.413 - type: recall_at_3 value: 54.493 - type: recall_at_5 value: 
59.64699999999999 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackGamingRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 47.905 - type: map_at_10 value: 61.58 - type: map_at_100 value: 62.605 - type: map_at_1000 value: 62.637 - type: map_at_3 value: 58.074000000000005 - type: map_at_5 value: 60.260000000000005 - type: mrr_at_1 value: 54.42 - type: mrr_at_10 value: 64.847 - type: mrr_at_100 value: 65.403 - type: mrr_at_1000 value: 65.41900000000001 - type: mrr_at_3 value: 62.675000000000004 - type: mrr_at_5 value: 64.101 - type: ndcg_at_1 value: 54.42 - type: ndcg_at_10 value: 67.394 - type: ndcg_at_100 value: 70.846 - type: ndcg_at_1000 value: 71.403 - type: ndcg_at_3 value: 62.025 - type: ndcg_at_5 value: 65.032 - type: precision_at_1 value: 54.42 - type: precision_at_10 value: 10.646 - type: precision_at_100 value: 1.325 - type: precision_at_1000 value: 0.13999999999999999 - type: precision_at_3 value: 27.398 - type: precision_at_5 value: 18.796 - type: recall_at_1 value: 47.905 - type: recall_at_10 value: 80.84599999999999 - type: recall_at_100 value: 95.078 - type: recall_at_1000 value: 98.878 - type: recall_at_3 value: 67.05600000000001 - type: recall_at_5 value: 74.261 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackGisRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 30.745 - type: map_at_10 value: 41.021 - type: map_at_100 value: 41.021 - type: map_at_1000 value: 41.021 - type: map_at_3 value: 37.714999999999996 - type: map_at_5 value: 39.766 - type: mrr_at_1 value: 33.559 - type: mrr_at_10 value: 43.537 - type: mrr_at_100 value: 43.537 - type: mrr_at_1000 value: 43.537 - type: mrr_at_3 value: 40.546 - type: mrr_at_5 value: 42.439 - type: ndcg_at_1 value: 33.559 - type: ndcg_at_10 value: 46.781 - type: ndcg_at_100 value: 46.781 - type: ndcg_at_1000 value: 46.781 - type: ndcg_at_3 value: 40.516000000000005 - type: ndcg_at_5 value: 43.957 - type: precision_at_1 value: 33.559 - type: precision_at_10 value: 7.198 - type: precision_at_100 value: 0.72 - type: precision_at_1000 value: 0.07200000000000001 - type: precision_at_3 value: 17.1 - type: precision_at_5 value: 12.316 - type: recall_at_1 value: 30.745 - type: recall_at_10 value: 62.038000000000004 - type: recall_at_100 value: 62.038000000000004 - type: recall_at_1000 value: 62.038000000000004 - type: recall_at_3 value: 45.378 - type: recall_at_5 value: 53.580000000000005 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackMathematicaRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 19.637999999999998 - type: map_at_10 value: 31.05 - type: map_at_100 value: 31.05 - type: map_at_1000 value: 31.05 - type: map_at_3 value: 27.628000000000004 - type: map_at_5 value: 29.767 - type: mrr_at_1 value: 25.0 - type: mrr_at_10 value: 36.131 - type: mrr_at_100 value: 36.131 - type: mrr_at_1000 value: 36.131 - type: mrr_at_3 value: 33.333 - type: mrr_at_5 value: 35.143 - type: ndcg_at_1 value: 25.0 - type: ndcg_at_10 value: 37.478 - type: ndcg_at_100 value: 37.469 - type: ndcg_at_1000 value: 37.469 - type: ndcg_at_3 value: 31.757999999999996 - type: ndcg_at_5 value: 34.821999999999996 - type: precision_at_1 value: 25.0 - type: precision_at_10 value: 7.188999999999999 - type: precision_at_100 value: 0.719 - type: precision_at_1000 value: 0.07200000000000001 - type: precision_at_3 value: 15.837000000000002 - type: precision_at_5 value: 11.841 - type: 
recall_at_1 value: 19.637999999999998 - type: recall_at_10 value: 51.836000000000006 - type: recall_at_100 value: 51.836000000000006 - type: recall_at_1000 value: 51.836000000000006 - type: recall_at_3 value: 36.384 - type: recall_at_5 value: 43.964 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackPhysicsRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 34.884 - type: map_at_10 value: 47.88 - type: map_at_100 value: 47.88 - type: map_at_1000 value: 47.88 - type: map_at_3 value: 43.85 - type: map_at_5 value: 46.414 - type: mrr_at_1 value: 43.022 - type: mrr_at_10 value: 53.569 - type: mrr_at_100 value: 53.569 - type: mrr_at_1000 value: 53.569 - type: mrr_at_3 value: 51.075 - type: mrr_at_5 value: 52.725 - type: ndcg_at_1 value: 43.022 - type: ndcg_at_10 value: 54.461000000000006 - type: ndcg_at_100 value: 54.388000000000005 - type: ndcg_at_1000 value: 54.388000000000005 - type: ndcg_at_3 value: 48.864999999999995 - type: ndcg_at_5 value: 52.032000000000004 - type: precision_at_1 value: 43.022 - type: precision_at_10 value: 9.885 - type: precision_at_100 value: 0.988 - type: precision_at_1000 value: 0.099 - type: precision_at_3 value: 23.612 - type: precision_at_5 value: 16.997 - type: recall_at_1 value: 34.884 - type: recall_at_10 value: 68.12899999999999 - type: recall_at_100 value: 68.12899999999999 - type: recall_at_1000 value: 68.12899999999999 - type: recall_at_3 value: 52.428 - type: recall_at_5 value: 60.662000000000006 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackProgrammersRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 31.588 - type: map_at_10 value: 43.85 - type: map_at_100 value: 45.317 - type: map_at_1000 value: 45.408 - type: map_at_3 value: 39.73 - type: map_at_5 value: 42.122 - type: mrr_at_1 value: 38.927 - type: mrr_at_10 value: 49.582 - type: mrr_at_100 value: 50.39 - type: mrr_at_1000 value: 50.426 - type: mrr_at_3 value: 46.518 - type: mrr_at_5 value: 48.271 - type: ndcg_at_1 value: 38.927 - type: ndcg_at_10 value: 50.605999999999995 - type: ndcg_at_100 value: 56.22200000000001 - type: ndcg_at_1000 value: 57.724 - type: ndcg_at_3 value: 44.232 - type: ndcg_at_5 value: 47.233999999999995 - type: precision_at_1 value: 38.927 - type: precision_at_10 value: 9.429 - type: precision_at_100 value: 1.435 - type: precision_at_1000 value: 0.172 - type: precision_at_3 value: 21.271 - type: precision_at_5 value: 15.434000000000001 - type: recall_at_1 value: 31.588 - type: recall_at_10 value: 64.836 - type: recall_at_100 value: 88.066 - type: recall_at_1000 value: 97.748 - type: recall_at_3 value: 47.128 - type: recall_at_5 value: 54.954 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 31.956083333333336 - type: map_at_10 value: 43.33483333333333 - type: map_at_100 value: 44.64883333333333 - type: map_at_1000 value: 44.75 - type: map_at_3 value: 39.87741666666666 - type: map_at_5 value: 41.86766666666667 - type: mrr_at_1 value: 38.06341666666667 - type: mrr_at_10 value: 47.839666666666666 - type: mrr_at_100 value: 48.644000000000005 - type: mrr_at_1000 value: 48.68566666666667 - type: mrr_at_3 value: 45.26358333333334 - type: mrr_at_5 value: 46.790000000000006 - type: ndcg_at_1 value: 38.06341666666667 - type: ndcg_at_10 value: 49.419333333333334 - type: ndcg_at_100 value: 54.50166666666667 - type: ndcg_at_1000 value: 56.161166666666674 
- type: ndcg_at_3 value: 43.982416666666666 - type: ndcg_at_5 value: 46.638083333333334 - type: precision_at_1 value: 38.06341666666667 - type: precision_at_10 value: 8.70858333333333 - type: precision_at_100 value: 1.327 - type: precision_at_1000 value: 0.165 - type: precision_at_3 value: 20.37816666666667 - type: precision_at_5 value: 14.516333333333334 - type: recall_at_1 value: 31.956083333333336 - type: recall_at_10 value: 62.69458333333334 - type: recall_at_100 value: 84.46433333333334 - type: recall_at_1000 value: 95.58449999999999 - type: recall_at_3 value: 47.52016666666666 - type: recall_at_5 value: 54.36066666666666 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackStatsRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 28.912 - type: map_at_10 value: 38.291 - type: map_at_100 value: 39.44 - type: map_at_1000 value: 39.528 - type: map_at_3 value: 35.638 - type: map_at_5 value: 37.218 - type: mrr_at_1 value: 32.822 - type: mrr_at_10 value: 41.661 - type: mrr_at_100 value: 42.546 - type: mrr_at_1000 value: 42.603 - type: mrr_at_3 value: 39.238 - type: mrr_at_5 value: 40.726 - type: ndcg_at_1 value: 32.822 - type: ndcg_at_10 value: 43.373 - type: ndcg_at_100 value: 48.638 - type: ndcg_at_1000 value: 50.654999999999994 - type: ndcg_at_3 value: 38.643 - type: ndcg_at_5 value: 41.126000000000005 - type: precision_at_1 value: 32.822 - type: precision_at_10 value: 6.8709999999999996 - type: precision_at_100 value: 1.032 - type: precision_at_1000 value: 0.128 - type: precision_at_3 value: 16.82 - type: precision_at_5 value: 11.718 - type: recall_at_1 value: 28.912 - type: recall_at_10 value: 55.376999999999995 - type: recall_at_100 value: 79.066 - type: recall_at_1000 value: 93.664 - type: recall_at_3 value: 42.569 - type: recall_at_5 value: 48.719 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackTexRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 22.181 - type: map_at_10 value: 31.462 - type: map_at_100 value: 32.73 - type: map_at_1000 value: 32.848 - type: map_at_3 value: 28.57 - type: map_at_5 value: 30.182 - type: mrr_at_1 value: 27.185 - type: mrr_at_10 value: 35.846000000000004 - type: mrr_at_100 value: 36.811 - type: mrr_at_1000 value: 36.873 - type: mrr_at_3 value: 33.437 - type: mrr_at_5 value: 34.813 - type: ndcg_at_1 value: 27.185 - type: ndcg_at_10 value: 36.858000000000004 - type: ndcg_at_100 value: 42.501 - type: ndcg_at_1000 value: 44.945 - type: ndcg_at_3 value: 32.066 - type: ndcg_at_5 value: 34.29 - type: precision_at_1 value: 27.185 - type: precision_at_10 value: 6.752 - type: precision_at_100 value: 1.111 - type: precision_at_1000 value: 0.151 - type: precision_at_3 value: 15.290000000000001 - type: precision_at_5 value: 11.004999999999999 - type: recall_at_1 value: 22.181 - type: recall_at_10 value: 48.513 - type: recall_at_100 value: 73.418 - type: recall_at_1000 value: 90.306 - type: recall_at_3 value: 35.003 - type: recall_at_5 value: 40.876000000000005 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackUnixRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 33.934999999999995 - type: map_at_10 value: 44.727 - type: map_at_100 value: 44.727 - type: map_at_1000 value: 44.727 - type: map_at_3 value: 40.918 - type: map_at_5 value: 42.961 - type: mrr_at_1 value: 39.646 - type: mrr_at_10 value: 48.898 - type: mrr_at_100 value: 48.898 - type: mrr_at_1000 value: 48.898 - type: 
mrr_at_3 value: 45.896 - type: mrr_at_5 value: 47.514 - type: ndcg_at_1 value: 39.646 - type: ndcg_at_10 value: 50.817 - type: ndcg_at_100 value: 50.803 - type: ndcg_at_1000 value: 50.803 - type: ndcg_at_3 value: 44.507999999999996 - type: ndcg_at_5 value: 47.259 - type: precision_at_1 value: 39.646 - type: precision_at_10 value: 8.759 - type: precision_at_100 value: 0.876 - type: precision_at_1000 value: 0.08800000000000001 - type: precision_at_3 value: 20.274 - type: precision_at_5 value: 14.366000000000001 - type: recall_at_1 value: 33.934999999999995 - type: recall_at_10 value: 65.037 - type: recall_at_100 value: 65.037 - type: recall_at_1000 value: 65.037 - type: recall_at_3 value: 47.439 - type: recall_at_5 value: 54.567 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackWebmastersRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 32.058 - type: map_at_10 value: 43.137 - type: map_at_100 value: 43.137 - type: map_at_1000 value: 43.137 - type: map_at_3 value: 39.882 - type: map_at_5 value: 41.379 - type: mrr_at_1 value: 38.933 - type: mrr_at_10 value: 48.344 - type: mrr_at_100 value: 48.344 - type: mrr_at_1000 value: 48.344 - type: mrr_at_3 value: 45.652 - type: mrr_at_5 value: 46.877 - type: ndcg_at_1 value: 38.933 - type: ndcg_at_10 value: 49.964 - type: ndcg_at_100 value: 49.242000000000004 - type: ndcg_at_1000 value: 49.222 - type: ndcg_at_3 value: 44.605 - type: ndcg_at_5 value: 46.501999999999995 - type: precision_at_1 value: 38.933 - type: precision_at_10 value: 9.427000000000001 - type: precision_at_100 value: 0.943 - type: precision_at_1000 value: 0.094 - type: precision_at_3 value: 20.685000000000002 - type: precision_at_5 value: 14.585 - type: recall_at_1 value: 32.058 - type: recall_at_10 value: 63.074 - type: recall_at_100 value: 63.074 - type: recall_at_1000 value: 63.074 - type: recall_at_3 value: 47.509 - type: recall_at_5 value: 52.455 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackWordpressRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 26.029000000000003 - type: map_at_10 value: 34.646 - type: map_at_100 value: 34.646 - type: map_at_1000 value: 34.646 - type: map_at_3 value: 31.456 - type: map_at_5 value: 33.138 - type: mrr_at_1 value: 28.281 - type: mrr_at_10 value: 36.905 - type: mrr_at_100 value: 36.905 - type: mrr_at_1000 value: 36.905 - type: mrr_at_3 value: 34.011 - type: mrr_at_5 value: 35.638 - type: ndcg_at_1 value: 28.281 - type: ndcg_at_10 value: 40.159 - type: ndcg_at_100 value: 40.159 - type: ndcg_at_1000 value: 40.159 - type: ndcg_at_3 value: 33.995 - type: ndcg_at_5 value: 36.836999999999996 - type: precision_at_1 value: 28.281 - type: precision_at_10 value: 6.358999999999999 - type: precision_at_100 value: 0.636 - type: precision_at_1000 value: 0.064 - type: precision_at_3 value: 14.233 - type: precision_at_5 value: 10.314 - type: recall_at_1 value: 26.029000000000003 - type: recall_at_10 value: 55.08 - type: recall_at_100 value: 55.08 - type: recall_at_1000 value: 55.08 - type: recall_at_3 value: 38.487 - type: recall_at_5 value: 45.308 - task: type: Retrieval dataset: type: climate-fever name: MTEB ClimateFEVER config: default split: test revision: None metrics: - type: map_at_1 value: 12.842999999999998 - type: map_at_10 value: 22.101000000000003 - type: map_at_100 value: 24.319 - type: map_at_1000 value: 24.51 - type: map_at_3 value: 18.372 - type: map_at_5 value: 20.323 - type: mrr_at_1 value: 27.948 - type: 
mrr_at_10 value: 40.321 - type: mrr_at_100 value: 41.262 - type: mrr_at_1000 value: 41.297 - type: mrr_at_3 value: 36.558 - type: mrr_at_5 value: 38.824999999999996 - type: ndcg_at_1 value: 27.948 - type: ndcg_at_10 value: 30.906 - type: ndcg_at_100 value: 38.986 - type: ndcg_at_1000 value: 42.136 - type: ndcg_at_3 value: 24.911 - type: ndcg_at_5 value: 27.168999999999997 - type: precision_at_1 value: 27.948 - type: precision_at_10 value: 9.798 - type: precision_at_100 value: 1.8399999999999999 - type: precision_at_1000 value: 0.243 - type: precision_at_3 value: 18.328 - type: precision_at_5 value: 14.502 - type: recall_at_1 value: 12.842999999999998 - type: recall_at_10 value: 37.245 - type: recall_at_100 value: 64.769 - type: recall_at_1000 value: 82.055 - type: recall_at_3 value: 23.159 - type: recall_at_5 value: 29.113 - task: type: Retrieval dataset: type: dbpedia-entity name: MTEB DBPedia config: default split: test revision: None metrics: - type: map_at_1 value: 8.934000000000001 - type: map_at_10 value: 21.915000000000003 - type: map_at_100 value: 21.915000000000003 - type: map_at_1000 value: 21.915000000000003 - type: map_at_3 value: 14.623 - type: map_at_5 value: 17.841 - type: mrr_at_1 value: 71.25 - type: mrr_at_10 value: 78.994 - type: mrr_at_100 value: 78.994 - type: mrr_at_1000 value: 78.994 - type: mrr_at_3 value: 77.208 - type: mrr_at_5 value: 78.55799999999999 - type: ndcg_at_1 value: 60.62499999999999 - type: ndcg_at_10 value: 46.604 - type: ndcg_at_100 value: 35.653 - type: ndcg_at_1000 value: 35.531 - type: ndcg_at_3 value: 50.605 - type: ndcg_at_5 value: 48.730000000000004 - type: precision_at_1 value: 71.25 - type: precision_at_10 value: 37.75 - type: precision_at_100 value: 3.775 - type: precision_at_1000 value: 0.377 - type: precision_at_3 value: 54.417 - type: precision_at_5 value: 48.15 - type: recall_at_1 value: 8.934000000000001 - type: recall_at_10 value: 28.471000000000004 - type: recall_at_100 value: 28.471000000000004 - type: recall_at_1000 value: 28.471000000000004 - type: recall_at_3 value: 16.019 - type: recall_at_5 value: 21.410999999999998 - task: type: Classification dataset: type: mteb/emotion name: MTEB EmotionClassification config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 52.81 - type: f1 value: 47.987573380720114 - task: type: Retrieval dataset: type: fever name: MTEB FEVER config: default split: test revision: None metrics: - type: map_at_1 value: 66.81899999999999 - type: map_at_10 value: 78.034 - type: map_at_100 value: 78.034 - type: map_at_1000 value: 78.034 - type: map_at_3 value: 76.43100000000001 - type: map_at_5 value: 77.515 - type: mrr_at_1 value: 71.542 - type: mrr_at_10 value: 81.638 - type: mrr_at_100 value: 81.638 - type: mrr_at_1000 value: 81.638 - type: mrr_at_3 value: 80.403 - type: mrr_at_5 value: 81.256 - type: ndcg_at_1 value: 71.542 - type: ndcg_at_10 value: 82.742 - type: ndcg_at_100 value: 82.741 - type: ndcg_at_1000 value: 82.741 - type: ndcg_at_3 value: 80.039 - type: ndcg_at_5 value: 81.695 - type: precision_at_1 value: 71.542 - type: precision_at_10 value: 10.387 - type: precision_at_100 value: 1.039 - type: precision_at_1000 value: 0.104 - type: precision_at_3 value: 31.447999999999997 - type: precision_at_5 value: 19.91 - type: recall_at_1 value: 66.81899999999999 - type: recall_at_10 value: 93.372 - type: recall_at_100 value: 93.372 - type: recall_at_1000 value: 93.372 - type: recall_at_3 value: 86.33 - type: recall_at_5 value: 90.347 - task: type: 
Retrieval dataset: type: fiqa name: MTEB FiQA2018 config: default split: test revision: None metrics: - type: map_at_1 value: 31.158 - type: map_at_10 value: 52.017 - type: map_at_100 value: 54.259 - type: map_at_1000 value: 54.367 - type: map_at_3 value: 45.738 - type: map_at_5 value: 49.283 - type: mrr_at_1 value: 57.87 - type: mrr_at_10 value: 66.215 - type: mrr_at_100 value: 66.735 - type: mrr_at_1000 value: 66.75 - type: mrr_at_3 value: 64.043 - type: mrr_at_5 value: 65.116 - type: ndcg_at_1 value: 57.87 - type: ndcg_at_10 value: 59.946999999999996 - type: ndcg_at_100 value: 66.31099999999999 - type: ndcg_at_1000 value: 67.75999999999999 - type: ndcg_at_3 value: 55.483000000000004 - type: ndcg_at_5 value: 56.891000000000005 - type: precision_at_1 value: 57.87 - type: precision_at_10 value: 16.497 - type: precision_at_100 value: 2.321 - type: precision_at_1000 value: 0.258 - type: precision_at_3 value: 37.14 - type: precision_at_5 value: 27.067999999999998 - type: recall_at_1 value: 31.158 - type: recall_at_10 value: 67.381 - type: recall_at_100 value: 89.464 - type: recall_at_1000 value: 97.989 - type: recall_at_3 value: 50.553000000000004 - type: recall_at_5 value: 57.824 - task: type: Retrieval dataset: type: hotpotqa name: MTEB HotpotQA config: default split: test revision: None metrics: - type: map_at_1 value: 42.073 - type: map_at_10 value: 72.418 - type: map_at_100 value: 73.175 - type: map_at_1000 value: 73.215 - type: map_at_3 value: 68.791 - type: map_at_5 value: 71.19 - type: mrr_at_1 value: 84.146 - type: mrr_at_10 value: 88.994 - type: mrr_at_100 value: 89.116 - type: mrr_at_1000 value: 89.12 - type: mrr_at_3 value: 88.373 - type: mrr_at_5 value: 88.82 - type: ndcg_at_1 value: 84.146 - type: ndcg_at_10 value: 79.404 - type: ndcg_at_100 value: 81.83200000000001 - type: ndcg_at_1000 value: 82.524 - type: ndcg_at_3 value: 74.595 - type: ndcg_at_5 value: 77.474 - type: precision_at_1 value: 84.146 - type: precision_at_10 value: 16.753999999999998 - type: precision_at_100 value: 1.8599999999999999 - type: precision_at_1000 value: 0.19499999999999998 - type: precision_at_3 value: 48.854 - type: precision_at_5 value: 31.579 - type: recall_at_1 value: 42.073 - type: recall_at_10 value: 83.768 - type: recall_at_100 value: 93.018 - type: recall_at_1000 value: 97.481 - type: recall_at_3 value: 73.282 - type: recall_at_5 value: 78.947 - task: type: Classification dataset: type: mteb/imdb name: MTEB ImdbClassification config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 94.9968 - type: ap value: 92.93892195862824 - type: f1 value: 94.99327998213761 - task: type: Retrieval dataset: type: msmarco name: MTEB MSMARCO config: default split: dev revision: None metrics: - type: map_at_1 value: 21.698 - type: map_at_10 value: 34.585 - type: map_at_100 value: 35.782000000000004 - type: map_at_1000 value: 35.825 - type: map_at_3 value: 30.397999999999996 - type: map_at_5 value: 32.72 - type: mrr_at_1 value: 22.192 - type: mrr_at_10 value: 35.085 - type: mrr_at_100 value: 36.218 - type: mrr_at_1000 value: 36.256 - type: mrr_at_3 value: 30.986000000000004 - type: mrr_at_5 value: 33.268 - type: ndcg_at_1 value: 22.192 - type: ndcg_at_10 value: 41.957 - type: ndcg_at_100 value: 47.658 - type: ndcg_at_1000 value: 48.697 - type: ndcg_at_3 value: 33.433 - type: ndcg_at_5 value: 37.551 - type: precision_at_1 value: 22.192 - type: precision_at_10 value: 6.781 - type: precision_at_100 value: 0.963 - type: precision_at_1000 value: 0.105 - type: 
precision_at_3 value: 14.365 - type: precision_at_5 value: 10.713000000000001 - type: recall_at_1 value: 21.698 - type: recall_at_10 value: 64.79 - type: recall_at_100 value: 91.071 - type: recall_at_1000 value: 98.883 - type: recall_at_3 value: 41.611 - type: recall_at_5 value: 51.459999999999994 - task: type: Classification dataset: type: mteb/mtop_domain name: MTEB MTOPDomainClassification (en) config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 96.15823073415413 - type: f1 value: 96.00362034963248 - task: type: Classification dataset: type: mteb/mtop_intent name: MTEB MTOPIntentClassification (en) config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 87.12722298221614 - type: f1 value: 70.46888967516227 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (en) config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 80.77673167451245 - type: f1 value: 77.60202561132175 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (en) config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 82.09145931405514 - type: f1 value: 81.7701921473406 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-p2p name: MTEB MedrxivClusteringP2P config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 36.52153488185864 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-s2s name: MTEB MedrxivClusteringS2S config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 36.80090398444147 - task: type: Reranking dataset: type: mteb/mind_small name: MTEB MindSmallReranking config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 31.807141746058605 - type: mrr value: 32.85025611455029 - task: type: Retrieval dataset: type: nfcorpus name: MTEB NFCorpus config: default split: test revision: None metrics: - type: map_at_1 value: 6.920999999999999 - type: map_at_10 value: 16.049 - type: map_at_100 value: 16.049 - type: map_at_1000 value: 16.049 - type: map_at_3 value: 11.865 - type: map_at_5 value: 13.657 - type: mrr_at_1 value: 53.87 - type: mrr_at_10 value: 62.291 - type: mrr_at_100 value: 62.291 - type: mrr_at_1000 value: 62.291 - type: mrr_at_3 value: 60.681 - type: mrr_at_5 value: 61.61 - type: ndcg_at_1 value: 51.23799999999999 - type: ndcg_at_10 value: 40.892 - type: ndcg_at_100 value: 26.951999999999998 - type: ndcg_at_1000 value: 26.474999999999998 - type: ndcg_at_3 value: 46.821 - type: ndcg_at_5 value: 44.333 - type: precision_at_1 value: 53.251000000000005 - type: precision_at_10 value: 30.124000000000002 - type: precision_at_100 value: 3.012 - type: precision_at_1000 value: 0.301 - type: precision_at_3 value: 43.55 - type: precision_at_5 value: 38.266 - type: recall_at_1 value: 6.920999999999999 - type: recall_at_10 value: 20.852 - type: recall_at_100 value: 20.852 - type: recall_at_1000 value: 20.852 - type: recall_at_3 value: 13.628000000000002 - type: recall_at_5 value: 16.273 - task: type: Retrieval dataset: type: nq name: MTEB NQ config: default split: test revision: None metrics: - type: map_at_1 value: 46.827999999999996 - type: map_at_10 value: 63.434000000000005 - type: map_at_100 value: 
63.434000000000005 - type: map_at_1000 value: 63.434000000000005 - type: map_at_3 value: 59.794000000000004 - type: map_at_5 value: 62.08 - type: mrr_at_1 value: 52.288999999999994 - type: mrr_at_10 value: 65.95 - type: mrr_at_100 value: 65.95 - type: mrr_at_1000 value: 65.95 - type: mrr_at_3 value: 63.413 - type: mrr_at_5 value: 65.08 - type: ndcg_at_1 value: 52.288999999999994 - type: ndcg_at_10 value: 70.301 - type: ndcg_at_100 value: 70.301 - type: ndcg_at_1000 value: 70.301 - type: ndcg_at_3 value: 63.979 - type: ndcg_at_5 value: 67.582 - type: precision_at_1 value: 52.288999999999994 - type: precision_at_10 value: 10.576 - type: precision_at_100 value: 1.058 - type: precision_at_1000 value: 0.106 - type: precision_at_3 value: 28.177000000000003 - type: precision_at_5 value: 19.073 - type: recall_at_1 value: 46.827999999999996 - type: recall_at_10 value: 88.236 - type: recall_at_100 value: 88.236 - type: recall_at_1000 value: 88.236 - type: recall_at_3 value: 72.371 - type: recall_at_5 value: 80.56 - task: type: Retrieval dataset: type: quora name: MTEB QuoraRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 71.652 - type: map_at_10 value: 85.953 - type: map_at_100 value: 85.953 - type: map_at_1000 value: 85.953 - type: map_at_3 value: 83.05399999999999 - type: map_at_5 value: 84.89 - type: mrr_at_1 value: 82.42 - type: mrr_at_10 value: 88.473 - type: mrr_at_100 value: 88.473 - type: mrr_at_1000 value: 88.473 - type: mrr_at_3 value: 87.592 - type: mrr_at_5 value: 88.211 - type: ndcg_at_1 value: 82.44 - type: ndcg_at_10 value: 89.467 - type: ndcg_at_100 value: 89.33 - type: ndcg_at_1000 value: 89.33 - type: ndcg_at_3 value: 86.822 - type: ndcg_at_5 value: 88.307 - type: precision_at_1 value: 82.44 - type: precision_at_10 value: 13.616 - type: precision_at_100 value: 1.362 - type: precision_at_1000 value: 0.136 - type: precision_at_3 value: 38.117000000000004 - type: precision_at_5 value: 25.05 - type: recall_at_1 value: 71.652 - type: recall_at_10 value: 96.224 - type: recall_at_100 value: 96.224 - type: recall_at_1000 value: 96.224 - type: recall_at_3 value: 88.571 - type: recall_at_5 value: 92.812 - task: type: Clustering dataset: type: mteb/reddit-clustering name: MTEB RedditClustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 61.295010338050474 - task: type: Clustering dataset: type: mteb/reddit-clustering-p2p name: MTEB RedditClusteringP2P config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 67.26380819328142 - task: type: Retrieval dataset: type: scidocs name: MTEB SCIDOCS config: default split: test revision: None metrics: - type: map_at_1 value: 5.683 - type: map_at_10 value: 14.924999999999999 - type: map_at_100 value: 17.532 - type: map_at_1000 value: 17.875 - type: map_at_3 value: 10.392 - type: map_at_5 value: 12.592 - type: mrr_at_1 value: 28.000000000000004 - type: mrr_at_10 value: 39.951 - type: mrr_at_100 value: 41.025 - type: mrr_at_1000 value: 41.056 - type: mrr_at_3 value: 36.317 - type: mrr_at_5 value: 38.412 - type: ndcg_at_1 value: 28.000000000000004 - type: ndcg_at_10 value: 24.410999999999998 - type: ndcg_at_100 value: 33.79 - type: ndcg_at_1000 value: 39.035 - type: ndcg_at_3 value: 22.845 - type: ndcg_at_5 value: 20.080000000000002 - type: precision_at_1 value: 28.000000000000004 - type: precision_at_10 value: 12.790000000000001 - type: precision_at_100 value: 2.633 - type: precision_at_1000 
value: 0.388 - type: precision_at_3 value: 21.367 - type: precision_at_5 value: 17.7 - type: recall_at_1 value: 5.683 - type: recall_at_10 value: 25.91 - type: recall_at_100 value: 53.443 - type: recall_at_1000 value: 78.73 - type: recall_at_3 value: 13.003 - type: recall_at_5 value: 17.932000000000002 - task: type: STS dataset: type: mteb/sickr-sts name: MTEB SICK-R config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 84.677978681023 - type: cos_sim_spearman value: 83.13093441058189 - type: euclidean_pearson value: 83.35535759341572 - type: euclidean_spearman value: 83.42583744219611 - type: manhattan_pearson value: 83.2243124045889 - type: manhattan_spearman value: 83.39801618652632 - task: type: STS dataset: type: mteb/sts12-sts name: MTEB STS12 config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 81.68960206569666 - type: cos_sim_spearman value: 77.3368966488535 - type: euclidean_pearson value: 77.62828980560303 - type: euclidean_spearman value: 76.77951481444651 - type: manhattan_pearson value: 77.88637240839041 - type: manhattan_spearman value: 77.22157841466188 - task: type: STS dataset: type: mteb/sts13-sts name: MTEB STS13 config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 84.18745821650724 - type: cos_sim_spearman value: 85.04423285574542 - type: euclidean_pearson value: 85.46604816931023 - type: euclidean_spearman value: 85.5230593932974 - type: manhattan_pearson value: 85.57912805986261 - type: manhattan_spearman value: 85.65955905111873 - task: type: STS dataset: type: mteb/sts14-sts name: MTEB STS14 config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 83.6715333300355 - type: cos_sim_spearman value: 82.9058522514908 - type: euclidean_pearson value: 83.9640357424214 - type: euclidean_spearman value: 83.60415457472637 - type: manhattan_pearson value: 84.05621005853469 - type: manhattan_spearman value: 83.87077724707746 - task: type: STS dataset: type: mteb/sts15-sts name: MTEB STS15 config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 87.82422928098886 - type: cos_sim_spearman value: 88.12660311894628 - type: euclidean_pearson value: 87.50974805056555 - type: euclidean_spearman value: 87.91957275596677 - type: manhattan_pearson value: 87.74119404878883 - type: manhattan_spearman value: 88.2808922165719 - task: type: STS dataset: type: mteb/sts16-sts name: MTEB STS16 config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 84.80605838552093 - type: cos_sim_spearman value: 86.24123388765678 - type: euclidean_pearson value: 85.32648347339814 - type: euclidean_spearman value: 85.60046671950158 - type: manhattan_pearson value: 85.53800168487811 - type: manhattan_spearman value: 85.89542420480763 - task: type: STS dataset: type: mteb/sts17-crosslingual-sts name: MTEB STS17 (en-en) config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 89.87540978988132 - type: cos_sim_spearman value: 90.12715295099461 - type: euclidean_pearson value: 91.61085993525275 - type: euclidean_spearman value: 91.31835942311758 - type: manhattan_pearson value: 91.57500202032934 - type: manhattan_spearman value: 91.1790925526635 - task: type: STS dataset: type: 
mteb/sts22-crosslingual-sts name: MTEB STS22 (en) config: en split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_pearson value: 69.87136205329556 - type: cos_sim_spearman value: 68.6253154635078 - type: euclidean_pearson value: 68.91536015034222 - type: euclidean_spearman value: 67.63744649352542 - type: manhattan_pearson value: 69.2000713045275 - type: manhattan_spearman value: 68.16002901587316 - task: type: STS dataset: type: mteb/stsbenchmark-sts name: MTEB STSBenchmark config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 85.21849551039082 - type: cos_sim_spearman value: 85.6392959372461 - type: euclidean_pearson value: 85.92050852609488 - type: euclidean_spearman value: 85.97205649009734 - type: manhattan_pearson value: 86.1031154802254 - type: manhattan_spearman value: 86.26791155517466 - task: type: Reranking dataset: type: mteb/scidocs-reranking name: MTEB SciDocsRR config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 86.83953958636627 - type: mrr value: 96.71167612344082 - task: type: Retrieval dataset: type: scifact name: MTEB SciFact config: default split: test revision: None metrics: - type: map_at_1 value: 64.994 - type: map_at_10 value: 74.763 - type: map_at_100 value: 75.127 - type: map_at_1000 value: 75.143 - type: map_at_3 value: 71.824 - type: map_at_5 value: 73.71 - type: mrr_at_1 value: 68.333 - type: mrr_at_10 value: 75.749 - type: mrr_at_100 value: 75.922 - type: mrr_at_1000 value: 75.938 - type: mrr_at_3 value: 73.556 - type: mrr_at_5 value: 74.739 - type: ndcg_at_1 value: 68.333 - type: ndcg_at_10 value: 79.174 - type: ndcg_at_100 value: 80.41 - type: ndcg_at_1000 value: 80.804 - type: ndcg_at_3 value: 74.361 - type: ndcg_at_5 value: 76.861 - type: precision_at_1 value: 68.333 - type: precision_at_10 value: 10.333 - type: precision_at_100 value: 1.0999999999999999 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 28.778 - type: precision_at_5 value: 19.067 - type: recall_at_1 value: 64.994 - type: recall_at_10 value: 91.822 - type: recall_at_100 value: 97.0 - type: recall_at_1000 value: 100.0 - type: recall_at_3 value: 78.878 - type: recall_at_5 value: 85.172 - task: type: PairClassification dataset: type: mteb/sprintduplicatequestions-pairclassification name: MTEB SprintDuplicateQuestions config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.72079207920792 - type: cos_sim_ap value: 93.00265215525152 - type: cos_sim_f1 value: 85.06596306068602 - type: cos_sim_precision value: 90.05586592178771 - type: cos_sim_recall value: 80.60000000000001 - type: dot_accuracy value: 99.66039603960397 - type: dot_ap value: 91.22371407479089 - type: dot_f1 value: 82.34693877551021 - type: dot_precision value: 84.0625 - type: dot_recall value: 80.7 - type: euclidean_accuracy value: 99.71881188118812 - type: euclidean_ap value: 92.88449963304728 - type: euclidean_f1 value: 85.19480519480518 - type: euclidean_precision value: 88.64864864864866 - type: euclidean_recall value: 82.0 - type: manhattan_accuracy value: 99.73267326732673 - type: manhattan_ap value: 93.23055393056883 - type: manhattan_f1 value: 85.88957055214725 - type: manhattan_precision value: 87.86610878661088 - type: manhattan_recall value: 84.0 - type: max_accuracy value: 99.73267326732673 - type: max_ap value: 93.23055393056883 - type: max_f1 value: 85.88957055214725 
- task: type: Clustering dataset: type: mteb/stackexchange-clustering name: MTEB StackExchangeClustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 77.3305735900358 - task: type: Clustering dataset: type: mteb/stackexchange-clustering-p2p name: MTEB StackExchangeClusteringP2P config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 41.32967136540674 - task: type: Reranking dataset: type: mteb/stackoverflowdupquestions-reranking name: MTEB StackOverflowDupQuestions config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 55.95514866379359 - type: mrr value: 56.95423245055598 - task: type: Summarization dataset: type: mteb/summeval name: MTEB SummEval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 30.783007208997144 - type: cos_sim_spearman value: 30.373444721540533 - type: dot_pearson value: 29.210604111143905 - type: dot_spearman value: 29.98809758085659 - task: type: Retrieval dataset: type: trec-covid name: MTEB TRECCOVID config: default split: test revision: None metrics: - type: map_at_1 value: 0.234 - type: map_at_10 value: 1.894 - type: map_at_100 value: 1.894 - type: map_at_1000 value: 1.894 - type: map_at_3 value: 0.636 - type: map_at_5 value: 1.0 - type: mrr_at_1 value: 88.0 - type: mrr_at_10 value: 93.667 - type: mrr_at_100 value: 93.667 - type: mrr_at_1000 value: 93.667 - type: mrr_at_3 value: 93.667 - type: mrr_at_5 value: 93.667 - type: ndcg_at_1 value: 85.0 - type: ndcg_at_10 value: 74.798 - type: ndcg_at_100 value: 16.462 - type: ndcg_at_1000 value: 7.0889999999999995 - type: ndcg_at_3 value: 80.754 - type: ndcg_at_5 value: 77.319 - type: precision_at_1 value: 88.0 - type: precision_at_10 value: 78.0 - type: precision_at_100 value: 7.8 - type: precision_at_1000 value: 0.7799999999999999 - type: precision_at_3 value: 83.333 - type: precision_at_5 value: 80.80000000000001 - type: recall_at_1 value: 0.234 - type: recall_at_10 value: 2.093 - type: recall_at_100 value: 2.093 - type: recall_at_1000 value: 2.093 - type: recall_at_3 value: 0.662 - type: recall_at_5 value: 1.0739999999999998 - task: type: Retrieval dataset: type: webis-touche2020 name: MTEB Touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 2.703 - type: map_at_10 value: 10.866000000000001 - type: map_at_100 value: 10.866000000000001 - type: map_at_1000 value: 10.866000000000001 - type: map_at_3 value: 5.909 - type: map_at_5 value: 7.35 - type: mrr_at_1 value: 36.735 - type: mrr_at_10 value: 53.583000000000006 - type: mrr_at_100 value: 53.583000000000006 - type: mrr_at_1000 value: 53.583000000000006 - type: mrr_at_3 value: 49.32 - type: mrr_at_5 value: 51.769 - type: ndcg_at_1 value: 34.694 - type: ndcg_at_10 value: 27.926000000000002 - type: ndcg_at_100 value: 22.701 - type: ndcg_at_1000 value: 22.701 - type: ndcg_at_3 value: 32.073 - type: ndcg_at_5 value: 28.327999999999996 - type: precision_at_1 value: 36.735 - type: precision_at_10 value: 24.694 - type: precision_at_100 value: 2.469 - type: precision_at_1000 value: 0.247 - type: precision_at_3 value: 31.973000000000003 - type: precision_at_5 value: 26.939 - type: recall_at_1 value: 2.703 - type: recall_at_10 value: 17.702 - type: recall_at_100 value: 17.702 - type: recall_at_1000 value: 17.702 - type: recall_at_3 value: 7.208 - type: recall_at_5 value: 9.748999999999999 - task: type: 
Classification dataset: type: mteb/toxic_conversations_50k name: MTEB ToxicConversationsClassification config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 70.79960000000001 - type: ap value: 15.467565415565815 - type: f1 value: 55.28639823443618 - task: type: Classification dataset: type: mteb/tweet_sentiment_extraction name: MTEB TweetSentimentExtractionClassification config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 64.7792869269949 - type: f1 value: 65.08597154774318 - task: type: Clustering dataset: type: mteb/twentynewsgroups-clustering name: MTEB TwentyNewsgroupsClustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 55.70352297774293 - task: type: PairClassification dataset: type: mteb/twittersemeval2015-pairclassification name: MTEB TwitterSemEval2015 config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 88.27561542588067 - type: cos_sim_ap value: 81.08262141256193 - type: cos_sim_f1 value: 73.82341501361338 - type: cos_sim_precision value: 72.5720112159062 - type: cos_sim_recall value: 75.11873350923483 - type: dot_accuracy value: 86.66030875603504 - type: dot_ap value: 76.6052349228621 - type: dot_f1 value: 70.13897280966768 - type: dot_precision value: 64.70457079152732 - type: dot_recall value: 76.56992084432717 - type: euclidean_accuracy value: 88.37098408535495 - type: euclidean_ap value: 81.12515230092113 - type: euclidean_f1 value: 74.10338225909379 - type: euclidean_precision value: 71.76761433868974 - type: euclidean_recall value: 76.59630606860158 - type: manhattan_accuracy value: 88.34118137926924 - type: manhattan_ap value: 80.95751834536561 - type: manhattan_f1 value: 73.9119496855346 - type: manhattan_precision value: 70.625 - type: manhattan_recall value: 77.5197889182058 - type: max_accuracy value: 88.37098408535495 - type: max_ap value: 81.12515230092113 - type: max_f1 value: 74.10338225909379 - task: type: PairClassification dataset: type: mteb/twitterurlcorpus-pairclassification name: MTEB TwitterURLCorpus config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 89.79896767182831 - type: cos_sim_ap value: 87.40071784061065 - type: cos_sim_f1 value: 79.87753144712087 - type: cos_sim_precision value: 76.67304015296367 - type: cos_sim_recall value: 83.3615645210964 - type: dot_accuracy value: 88.95486474948578 - type: dot_ap value: 86.00227979119943 - type: dot_f1 value: 78.54601474525914 - type: dot_precision value: 75.00525394045535 - type: dot_recall value: 82.43763473975977 - type: euclidean_accuracy value: 89.7892653393876 - type: euclidean_ap value: 87.42174706480819 - type: euclidean_f1 value: 80.07283321194465 - type: euclidean_precision value: 75.96738529574351 - type: euclidean_recall value: 84.6473668001232 - type: manhattan_accuracy value: 89.8474793340319 - type: manhattan_ap value: 87.47814292587448 - type: manhattan_f1 value: 80.15461150280949 - type: manhattan_precision value: 74.88798234468 - type: manhattan_recall value: 86.21804742839544 - type: max_accuracy value: 89.8474793340319 - type: max_ap value: 87.47814292587448 - type: max_f1 value: 80.15461150280949 --- # Model Summary > GritLM is a generative representational instruction tuned language model. 
It unifies text representation (embedding) and text generation into a single model, achieving state-of-the-art performance on both types of tasks. - **Repository:** [ContextualAI/gritlm](https://github.com/ContextualAI/gritlm) - **Paper:** https://arxiv.org/abs/2402.09906 - **Logs:** https://wandb.ai/muennighoff/gritlm/runs/0uui712t/overview - **Script:** https://github.com/ContextualAI/gritlm/blob/main/scripts/training/train_gritlm_7b.sh | Model | Description | |-------|-------------| | [GritLM 7B](https://hf.co/GritLM/GritLM-7B) | Mistral 7B finetuned using GRIT | | [GritLM 8x7B](https://hf.co/GritLM/GritLM-8x7B) | Mixtral 8x7B finetuned using GRIT | # Use Model usage is documented [here](https://github.com/ContextualAI/gritlm?tab=readme-ov-file#inference); a condensed sketch follows below. # Citation ```bibtex @misc{muennighoff2024generative, title={Generative Representational Instruction Tuning}, author={Niklas Muennighoff and Hongjin Su and Liang Wang and Nan Yang and Furu Wei and Tao Yu and Amanpreet Singh and Douwe Kiela}, year={2024}, eprint={2402.09906}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
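As orientation, here is a condensed sketch of the dual embedding/generation usage pattern from the gritlm repository linked above. It is not the authoritative interface: the `GritLM` wrapper class, the `<|embed|>` prompt format, and the keyword arguments shown follow the repository README and may change, so defer to the linked docs.

```python
# Condensed GritLM usage sketch (embedding + generation), based on the
# pattern in the gritlm repository README -- defer to it for exact APIs.
from gritlm import GritLM
from scipy.spatial.distance import cosine

model = GritLM("GritLM/GritLM-7B", torch_dtype="auto")

def gritlm_instruction(instruction: str) -> str:
    # Text to embed follows an <|embed|> marker, optionally preceded by a
    # natural-language instruction in the chat format.
    return "<|user|>\n" + instruction + "\n<|embed|>\n" if instruction else "<|embed|>\n"

# Embedding: queries typically carry an instruction, documents do not.
docs = ["GritLM unifies embedding and generation in one model."]
queries = ["Which model handles both embeddings and generation?"]
d_rep = model.encode(docs, instruction=gritlm_instruction(""))
q_rep = model.encode(queries, instruction=gritlm_instruction("Retrieve the relevant passage"))
print("cosine similarity:", 1 - cosine(q_rep[0], d_rep[0]))

# Generation: the same weights act as an ordinary chat model.
messages = [{"role": "user", "content": "Please write me a poem about GRIT."}]
encoded = model.tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(encoded, max_new_tokens=128, do_sample=False)
print(model.tokenizer.decode(out[0]))
```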
{}
RichardErkhov/GritLM_-_GritLM-7B-4bits
null
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "custom_code", "arxiv:2402.09906", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-05-03T17:05:23+00:00
image-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # fine-tuned-model This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.7582 - Accuracy: 0.7424 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.7206 | 0.9880 | 62 | 0.7582 | 0.7424 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
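Since the card above gives no usage snippet, here is a minimal, hedged inference sketch for the checkpoint. The repo id is taken from this entry; the image path is a placeholder, and the label set depends on the undocumented imagefolder training data.

```python
# Minimal inference sketch for carvalhaes/fine-tuned-model. The label names
# come from the (undocumented) imagefolder training data, so treat the
# output labels as dataset-specific; "example.jpg" is a placeholder path.
from transformers import pipeline

classifier = pipeline("image-classification", model="carvalhaes/fine-tuned-model")
for pred in classifier("example.jpg"):
    print(f"{pred['label']}: {pred['score']:.3f}")
```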
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "metrics": ["accuracy"], "base_model": "microsoft/swin-tiny-patch4-window7-224", "model-index": [{"name": "fine-tuned-model", "results": [{"task": {"type": "image-classification", "name": "Image Classification"}, "dataset": {"name": "imagefolder", "type": "imagefolder", "config": "default", "split": "train", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.7423864203694458, "name": "Accuracy"}]}]}]}
carvalhaes/fine-tuned-model
null
[ "transformers", "tensorboard", "safetensors", "swin", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:microsoft/swin-tiny-patch4-window7-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-03T17:06:09+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_tf_3-seqsight_65536_512_47M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_tf_3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_3) dataset. It achieves the following results on the evaluation set: - Loss: 0.5571 - F1 Score: 0.6983 - Accuracy: 0.701 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.6152 | 0.93 | 200 | 0.5646 | 0.7015 | 0.706 | | 0.5923 | 1.87 | 400 | 0.5661 | 0.6898 | 0.69 | | 0.5867 | 2.8 | 600 | 0.5547 | 0.7215 | 0.723 | | 0.5784 | 3.74 | 800 | 0.5597 | 0.7026 | 0.703 | | 0.5757 | 4.67 | 1000 | 0.5493 | 0.7194 | 0.72 | | 0.5707 | 5.61 | 1200 | 0.5421 | 0.7228 | 0.726 | | 0.5658 | 6.54 | 1400 | 0.5426 | 0.7299 | 0.731 | | 0.5638 | 7.48 | 1600 | 0.5426 | 0.7274 | 0.728 | | 0.5608 | 8.41 | 1800 | 0.5390 | 0.7224 | 0.723 | | 0.5628 | 9.35 | 2000 | 0.5391 | 0.7248 | 0.726 | | 0.5553 | 10.28 | 2200 | 0.5445 | 0.7101 | 0.71 | | 0.5525 | 11.21 | 2400 | 0.5418 | 0.7222 | 0.724 | | 0.5518 | 12.15 | 2600 | 0.5403 | 0.7232 | 0.726 | | 0.5459 | 13.08 | 2800 | 0.5447 | 0.7220 | 0.729 | | 0.5457 | 14.02 | 3000 | 0.5390 | 0.7207 | 0.723 | | 0.5439 | 14.95 | 3200 | 0.5381 | 0.7278 | 0.731 | | 0.5425 | 15.89 | 3400 | 0.5380 | 0.7297 | 0.732 | | 0.5397 | 16.82 | 3600 | 0.5406 | 0.7242 | 0.727 | | 0.5351 | 17.76 | 3800 | 0.5399 | 0.7233 | 0.726 | | 0.536 | 18.69 | 4000 | 0.5452 | 0.7218 | 0.722 | | 0.534 | 19.63 | 4200 | 0.5418 | 0.7201 | 0.722 | | 0.5342 | 20.56 | 4400 | 0.5423 | 0.7244 | 0.726 | | 0.5274 | 21.5 | 4600 | 0.5477 | 0.7100 | 0.71 | | 0.5269 | 22.43 | 4800 | 0.5466 | 0.7142 | 0.716 | | 0.5285 | 23.36 | 5000 | 0.5517 | 0.7051 | 0.705 | | 0.5224 | 24.3 | 5200 | 0.5521 | 0.6986 | 0.699 | | 0.5194 | 25.23 | 5400 | 0.5508 | 0.7193 | 0.722 | | 0.5245 | 26.17 | 5600 | 0.5442 | 0.7108 | 0.712 | | 0.5155 | 27.1 | 5800 | 0.5491 | 0.7044 | 0.705 | | 0.5161 | 28.04 | 6000 | 0.5447 | 0.7041 | 0.705 | | 0.5114 | 28.97 | 6200 | 0.5540 | 0.7019 | 0.702 | | 0.5161 | 29.91 | 6400 | 0.5514 | 0.7166 | 0.719 | | 0.5109 | 30.84 | 6600 | 0.5514 | 0.7116 | 0.714 | | 0.5064 | 31.78 | 6800 | 0.5529 | 0.7160 | 0.717 | | 0.509 | 32.71 | 7000 | 0.5523 | 0.7072 | 0.709 | | 0.5095 | 33.64 | 7200 | 0.5537 | 0.7158 | 0.717 | | 0.5019 | 34.58 | 7400 | 0.5588 | 0.6950 | 0.695 | | 0.5042 | 35.51 | 7600 | 0.5562 | 0.692 | 0.692 | | 0.5029 | 36.45 | 7800 | 0.5594 | 0.7062 | 0.707 | | 0.5029 | 37.38 | 8000 | 0.5603 | 0.6975 | 0.698 | | 0.4968 | 38.32 | 8200 | 0.5590 | 0.7049 | 0.706 | | 0.4992 | 39.25 | 8400 | 0.5634 | 0.7008 | 0.702 | | 0.4965 | 40.19 | 8600 | 0.5624 | 0.7002 | 0.701 | | 0.4974 | 41.12 | 8800 | 0.5622 | 0.7025 | 0.703 | | 0.4989 | 42.06 | 9000 | 0.5610 | 0.7072 | 0.708 | | 0.4962 | 42.99 | 9200 | 
0.5612 | 0.6988 | 0.699 | | 0.4983 | 43.93 | 9400 | 0.5612 | 0.7018 | 0.702 | | 0.4954 | 44.86 | 9600 | 0.5618 | 0.7024 | 0.703 | | 0.4947 | 45.79 | 9800 | 0.5622 | 0.7033 | 0.704 | | 0.4901 | 46.73 | 10000 | 0.5631 | 0.6995 | 0.7 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
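The card lists training details but no inference code. Below is a hedged sketch of loading the adapter on its seqsight base with PEFT; the base model's tokenizer behavior and the binary label count for GUE_tf_3 are assumptions rather than facts stated in the card.

```python
# Hedged sketch: apply the adapter above to its base model for sequence
# classification. num_labels=2 and the tokenizer call are assumptions;
# match them to the actual GUE_tf_3 preprocessing.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel

base_id = "mahdibaghbanzadeh/seqsight_65536_512_47M"
adapter_id = "mahdibaghbanzadeh/GUE_tf_3-seqsight_65536_512_47M-L32_f"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)
model = PeftModel.from_pretrained(base, adapter_id).eval()

inputs = tokenizer("ACGTACGTACGTACGT", return_tensors="pt")  # toy DNA sequence
with torch.no_grad():
    logits = model(**inputs).logits
print("predicted label:", logits.argmax(dim=-1).item())
```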
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_tf_3-seqsight_65536_512_47M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_tf_3-seqsight_65536_512_47M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_65536_512_47M", "region:us" ]
null
2024-05-03T17:06:32+00:00
null
null
{}
nielsrolf/finetuned-1
null
[ "region:us" ]
null
2024-05-03T17:07:51+00:00
null
null
{}
supalun/thailaw_full_finetune_gemma
null
[ "region:us" ]
null
2024-05-03T17:09:07+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_tf_2-seqsight_65536_512_47M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_tf_2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_2) dataset. It achieves the following results on the evaluation set: - Loss: 0.4711 - F1 Score: 0.7738 - Accuracy: 0.774 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.5857 | 1.34 | 200 | 0.5446 | 0.7070 | 0.709 | | 0.5463 | 2.68 | 400 | 0.5318 | 0.7309 | 0.731 | | 0.5395 | 4.03 | 600 | 0.5247 | 0.736 | 0.736 | | 0.5355 | 5.37 | 800 | 0.5259 | 0.7386 | 0.739 | | 0.5311 | 6.71 | 1000 | 0.5198 | 0.7450 | 0.746 | | 0.5268 | 8.05 | 1200 | 0.5181 | 0.7480 | 0.748 | | 0.5236 | 9.4 | 1400 | 0.5168 | 0.7421 | 0.743 | | 0.5227 | 10.74 | 1600 | 0.5135 | 0.7477 | 0.748 | | 0.5221 | 12.08 | 1800 | 0.5155 | 0.7539 | 0.754 | | 0.5201 | 13.42 | 2000 | 0.5100 | 0.7530 | 0.753 | | 0.5189 | 14.77 | 2200 | 0.5123 | 0.7507 | 0.751 | | 0.5132 | 16.11 | 2400 | 0.5106 | 0.7530 | 0.753 | | 0.5175 | 17.45 | 2600 | 0.5099 | 0.7506 | 0.751 | | 0.5124 | 18.79 | 2800 | 0.5082 | 0.7589 | 0.759 | | 0.5106 | 20.13 | 3000 | 0.5086 | 0.7589 | 0.759 | | 0.5117 | 21.48 | 3200 | 0.5107 | 0.7508 | 0.751 | | 0.5132 | 22.82 | 3400 | 0.5076 | 0.7541 | 0.755 | | 0.5099 | 24.16 | 3600 | 0.5068 | 0.7520 | 0.753 | | 0.5063 | 25.5 | 3800 | 0.5087 | 0.7474 | 0.749 | | 0.5105 | 26.85 | 4000 | 0.5084 | 0.7454 | 0.747 | | 0.5057 | 28.19 | 4200 | 0.5059 | 0.7545 | 0.755 | | 0.5064 | 29.53 | 4400 | 0.5066 | 0.7580 | 0.758 | | 0.5029 | 30.87 | 4600 | 0.5057 | 0.7548 | 0.755 | | 0.5057 | 32.21 | 4800 | 0.5065 | 0.7517 | 0.752 | | 0.507 | 33.56 | 5000 | 0.5040 | 0.7580 | 0.758 | | 0.5037 | 34.9 | 5200 | 0.5061 | 0.7559 | 0.756 | | 0.4995 | 36.24 | 5400 | 0.5060 | 0.7500 | 0.751 | | 0.5053 | 37.58 | 5600 | 0.5038 | 0.7556 | 0.756 | | 0.504 | 38.93 | 5800 | 0.5037 | 0.7535 | 0.754 | | 0.5014 | 40.27 | 6000 | 0.5029 | 0.7578 | 0.758 | | 0.4999 | 41.61 | 6200 | 0.5034 | 0.7555 | 0.756 | | 0.5055 | 42.95 | 6400 | 0.5043 | 0.7485 | 0.749 | | 0.5003 | 44.3 | 6600 | 0.5036 | 0.7550 | 0.755 | | 0.4994 | 45.64 | 6800 | 0.5039 | 0.7539 | 0.754 | | 0.4994 | 46.98 | 7000 | 0.5054 | 0.7457 | 0.746 | | 0.4982 | 48.32 | 7200 | 0.5044 | 0.7539 | 0.754 | | 0.4983 | 49.66 | 7400 | 0.5045 | 0.7507 | 0.751 | | 0.4981 | 51.01 | 7600 | 0.5038 | 0.7456 | 0.746 | | 0.4961 | 52.35 | 7800 | 0.5042 | 0.7477 | 0.748 | | 0.4979 | 53.69 | 8000 | 0.5052 | 0.7482 | 0.749 | | 0.4952 | 55.03 | 8200 | 0.5036 | 0.7457 | 0.746 | | 0.4982 | 56.38 | 8400 | 0.5028 | 0.7469 | 0.747 | | 0.497 | 57.72 | 8600 | 0.5038 | 0.7483 | 0.749 | | 0.4963 | 59.06 | 8800 | 0.5029 | 0.7483 | 0.749 | | 0.4952 | 60.4 | 9000 | 0.5030 | 0.7448 | 0.745 | | 0.4966 | 61.74 | 9200 
| 0.5034 | 0.7483 | 0.749 | | 0.5011 | 63.09 | 9400 | 0.5028 | 0.7475 | 0.748 | | 0.4959 | 64.43 | 9600 | 0.5032 | 0.7465 | 0.747 | | 0.4991 | 65.77 | 9800 | 0.5031 | 0.7466 | 0.747 | | 0.4941 | 67.11 | 10000 | 0.5033 | 0.7475 | 0.748 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
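For readers who want to reproduce runs like this one, the hyperparameter list above maps almost one-to-one onto the TrainingArguments of the Transformers 4.38 line used here. The sketch below mirrors the reported values; it is a reconstruction, not the author's original script, and the 200-step eval cadence is inferred from the results table.

```python
# TrainingArguments mirroring the hyperparameters reported above. This is a
# reconstruction from the card, not the original script; dataset and model
# setup are omitted.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="GUE_tf_2-seqsight_65536_512_47M-L1_f",
    learning_rate=5e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    max_steps=10_000,
    evaluation_strategy="steps",
    eval_steps=200,   # inferred: the table logs validation every 200 steps
    logging_steps=200,
)
```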
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_tf_2-seqsight_65536_512_47M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_tf_2-seqsight_65536_512_47M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_65536_512_47M", "region:us" ]
null
2024-05-03T17:09:11+00:00
null
null
{}
ThunderBeee/Octopus-v4-gguf-NexaAIDev
null
[ "region:us" ]
null
2024-05-03T17:09:18+00:00
null
null
{"language": ["en"], "tags": ["not-for-all-audiences", "racist", "nsfw"], "datasets": ["DuckyBlender/racist-inputoutput"]}
DuckyBlender/racist-phi3
null
[ "not-for-all-audiences", "racist", "nsfw", "en", "dataset:DuckyBlender/racist-inputoutput", "region:us" ]
null
2024-05-03T17:09:29+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
cilantro9246/hq6ceip
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-03T17:09:36+00:00
text2text-generation
transformers
{}
Aryan0310/flan-t5-small-finetuned-xsum
null
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-03T17:09:41+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_tf_2-seqsight_65536_512_47M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_tf_2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_2) dataset. It achieves the following results on the evaluation set: - Loss: 0.4753 - F1 Score: 0.7840 - Accuracy: 0.785 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.5645 | 1.34 | 200 | 0.5230 | 0.7288 | 0.73 | | 0.5304 | 2.68 | 400 | 0.5167 | 0.746 | 0.746 | | 0.5202 | 4.03 | 600 | 0.5055 | 0.7610 | 0.761 | | 0.5133 | 5.37 | 800 | 0.5065 | 0.7580 | 0.758 | | 0.5071 | 6.71 | 1000 | 0.5095 | 0.7491 | 0.75 | | 0.4993 | 8.05 | 1200 | 0.5024 | 0.7465 | 0.747 | | 0.4951 | 9.4 | 1400 | 0.5096 | 0.7463 | 0.748 | | 0.4902 | 10.74 | 1600 | 0.4929 | 0.752 | 0.752 | | 0.4884 | 12.08 | 1800 | 0.4936 | 0.7540 | 0.754 | | 0.4826 | 13.42 | 2000 | 0.4938 | 0.7550 | 0.755 | | 0.4799 | 14.77 | 2200 | 0.5030 | 0.7482 | 0.751 | | 0.4717 | 16.11 | 2400 | 0.5020 | 0.7490 | 0.749 | | 0.4703 | 17.45 | 2600 | 0.4984 | 0.7536 | 0.755 | | 0.462 | 18.79 | 2800 | 0.4910 | 0.7581 | 0.759 | | 0.4576 | 20.13 | 3000 | 0.4936 | 0.7671 | 0.768 | | 0.4564 | 21.48 | 3200 | 0.5030 | 0.7569 | 0.757 | | 0.4556 | 22.82 | 3400 | 0.4965 | 0.7550 | 0.755 | | 0.4503 | 24.16 | 3600 | 0.4917 | 0.7635 | 0.764 | | 0.4425 | 25.5 | 3800 | 0.5048 | 0.7516 | 0.752 | | 0.444 | 26.85 | 4000 | 0.4995 | 0.7573 | 0.758 | | 0.441 | 28.19 | 4200 | 0.4975 | 0.7599 | 0.76 | | 0.4366 | 29.53 | 4400 | 0.5035 | 0.7527 | 0.753 | | 0.431 | 30.87 | 4600 | 0.4948 | 0.7528 | 0.753 | | 0.4288 | 32.21 | 4800 | 0.5166 | 0.7485 | 0.749 | | 0.4289 | 33.56 | 5000 | 0.5092 | 0.7538 | 0.754 | | 0.4244 | 34.9 | 5200 | 0.5031 | 0.7500 | 0.75 | | 0.4203 | 36.24 | 5400 | 0.4992 | 0.7547 | 0.755 | | 0.4212 | 37.58 | 5600 | 0.4963 | 0.7619 | 0.762 | | 0.4151 | 38.93 | 5800 | 0.5031 | 0.7586 | 0.759 | | 0.4103 | 40.27 | 6000 | 0.5090 | 0.7517 | 0.752 | | 0.4087 | 41.61 | 6200 | 0.5000 | 0.7530 | 0.753 | | 0.413 | 42.95 | 6400 | 0.5046 | 0.7549 | 0.755 | | 0.4031 | 44.3 | 6600 | 0.5112 | 0.7500 | 0.75 | | 0.4049 | 45.64 | 6800 | 0.5135 | 0.7478 | 0.748 | | 0.4038 | 46.98 | 7000 | 0.5129 | 0.7549 | 0.755 | | 0.3993 | 48.32 | 7200 | 0.5133 | 0.7470 | 0.747 | | 0.3966 | 49.66 | 7400 | 0.5064 | 0.7550 | 0.755 | | 0.3959 | 51.01 | 7600 | 0.5116 | 0.7549 | 0.755 | | 0.3894 | 52.35 | 7800 | 0.5182 | 0.7580 | 0.758 | | 0.3944 | 53.69 | 8000 | 0.5128 | 0.7529 | 0.753 | | 0.386 | 55.03 | 8200 | 0.5210 | 0.7460 | 0.746 | | 0.388 | 56.38 | 8400 | 0.5143 | 0.7560 | 0.756 | | 0.3881 | 57.72 | 8600 | 0.5146 | 0.7540 | 0.754 | | 0.3851 | 59.06 | 8800 | 0.5129 | 0.7590 | 0.759 | | 0.3856 | 60.4 | 9000 | 0.5232 | 0.7550 | 0.755 | | 0.3835 | 61.74 | 9200 | 0.5139 
| 0.752 | 0.752 | | 0.3853 | 63.09 | 9400 | 0.5165 | 0.7510 | 0.751 | | 0.3805 | 64.43 | 9600 | 0.5156 | 0.7549 | 0.755 | | 0.3854 | 65.77 | 9800 | 0.5193 | 0.7550 | 0.755 | | 0.3776 | 67.11 | 10000 | 0.5180 | 0.756 | 0.756 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
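The results tables in these cards report an F1 score next to accuracy but do not state the averaging mode. A plausible compute_metrics callback, consistent with F1 tracking accuracy so closely on this balanced binary task, is sketched below; the macro averaging is an assumption.

```python
# Plausible metrics callback for the tables above; macro averaging is an
# assumption (the card does not state which F1 variant it reports).
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "f1": f1_score(labels, preds, average="macro"),
        "accuracy": accuracy_score(labels, preds),
    }
```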
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_tf_2-seqsight_65536_512_47M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_tf_2-seqsight_65536_512_47M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_65536_512_47M", "region:us" ]
null
2024-05-03T17:09:51+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_tf_2-seqsight_65536_512_47M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_tf_2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_2) dataset. It achieves the following results on the evaluation set: - Loss: 0.4667 - F1 Score: 0.7770 - Accuracy: 0.778 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.5716 | 1.34 | 200 | 0.5302 | 0.7204 | 0.722 | | 0.5353 | 2.68 | 400 | 0.5215 | 0.7350 | 0.735 | | 0.5271 | 4.03 | 600 | 0.5118 | 0.7479 | 0.748 | | 0.5221 | 5.37 | 800 | 0.5087 | 0.7499 | 0.75 | | 0.5185 | 6.71 | 1000 | 0.5089 | 0.7599 | 0.76 | | 0.5108 | 8.05 | 1200 | 0.5104 | 0.7449 | 0.745 | | 0.5082 | 9.4 | 1400 | 0.5107 | 0.7445 | 0.746 | | 0.5057 | 10.74 | 1600 | 0.5023 | 0.7518 | 0.752 | | 0.505 | 12.08 | 1800 | 0.5077 | 0.7450 | 0.745 | | 0.5005 | 13.42 | 2000 | 0.5044 | 0.7449 | 0.745 | | 0.4996 | 14.77 | 2200 | 0.5082 | 0.7424 | 0.744 | | 0.4936 | 16.11 | 2400 | 0.5090 | 0.7490 | 0.749 | | 0.4946 | 17.45 | 2600 | 0.5053 | 0.7499 | 0.751 | | 0.4885 | 18.79 | 2800 | 0.4999 | 0.7503 | 0.751 | | 0.4859 | 20.13 | 3000 | 0.4994 | 0.7555 | 0.756 | | 0.4861 | 21.48 | 3200 | 0.5075 | 0.7540 | 0.754 | | 0.4876 | 22.82 | 3400 | 0.5025 | 0.7569 | 0.757 | | 0.4833 | 24.16 | 3600 | 0.4986 | 0.7566 | 0.757 | | 0.4774 | 25.5 | 3800 | 0.5025 | 0.7534 | 0.754 | | 0.4819 | 26.85 | 4000 | 0.4993 | 0.7562 | 0.757 | | 0.4783 | 28.19 | 4200 | 0.4959 | 0.762 | 0.762 | | 0.4776 | 29.53 | 4400 | 0.5019 | 0.7580 | 0.758 | | 0.4741 | 30.87 | 4600 | 0.4985 | 0.7639 | 0.764 | | 0.4736 | 32.21 | 4800 | 0.5055 | 0.7564 | 0.757 | | 0.4752 | 33.56 | 5000 | 0.4988 | 0.7518 | 0.752 | | 0.4704 | 34.9 | 5200 | 0.5015 | 0.7589 | 0.759 | | 0.4689 | 36.24 | 5400 | 0.4975 | 0.7686 | 0.769 | | 0.4718 | 37.58 | 5600 | 0.4931 | 0.7547 | 0.755 | | 0.4679 | 38.93 | 5800 | 0.4966 | 0.7587 | 0.759 | | 0.4662 | 40.27 | 6000 | 0.4934 | 0.7608 | 0.761 | | 0.4645 | 41.61 | 6200 | 0.4942 | 0.7520 | 0.752 | | 0.4709 | 42.95 | 6400 | 0.4969 | 0.7609 | 0.761 | | 0.4622 | 44.3 | 6600 | 0.4993 | 0.7540 | 0.754 | | 0.4634 | 45.64 | 6800 | 0.4978 | 0.7520 | 0.752 | | 0.4634 | 46.98 | 7000 | 0.4974 | 0.75 | 0.75 | | 0.4618 | 48.32 | 7200 | 0.4976 | 0.7510 | 0.751 | | 0.4599 | 49.66 | 7400 | 0.4945 | 0.7498 | 0.75 | | 0.4604 | 51.01 | 7600 | 0.4957 | 0.7470 | 0.747 | | 0.4562 | 52.35 | 7800 | 0.4983 | 0.7568 | 0.757 | | 0.4611 | 53.69 | 8000 | 0.4957 | 0.7445 | 0.745 | | 0.4548 | 55.03 | 8200 | 0.4944 | 0.7449 | 0.745 | | 0.4581 | 56.38 | 8400 | 0.4942 | 0.7450 | 0.745 | | 0.4591 | 57.72 | 8600 | 0.4934 | 0.7466 | 0.747 | | 0.4543 | 59.06 | 8800 | 0.4927 | 0.7517 | 0.752 | | 0.4563 | 60.4 | 9000 | 0.4961 | 0.7530 | 0.753 | | 0.4566 | 61.74 | 9200 | 
0.4936 | 0.7478 | 0.748 | | 0.4584 | 63.09 | 9400 | 0.4943 | 0.7508 | 0.751 | | 0.4518 | 64.43 | 9600 | 0.4950 | 0.7487 | 0.749 | | 0.4596 | 65.77 | 9800 | 0.4949 | 0.7509 | 0.751 | | 0.452 | 67.11 | 10000 | 0.4949 | 0.7498 | 0.75 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
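As a shorter alternative to pairing the base model and adapter by hand, PEFT's Auto classes can resolve the base checkpoint from the adapter config. A hedged one-step loading sketch, with the label count again assumed:

```python
# One-step adapter loading via PEFT's Auto classes; the base model id is
# read from the adapter config. num_labels=2 is an assumption.
from peft import AutoPeftModelForSequenceClassification

model = AutoPeftModelForSequenceClassification.from_pretrained(
    "mahdibaghbanzadeh/GUE_tf_2-seqsight_65536_512_47M-L8_f", num_labels=2
)
```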
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_tf_2-seqsight_65536_512_47M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_tf_2-seqsight_65536_512_47M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_65536_512_47M", "region:us" ]
null
2024-05-03T17:09:51+00:00
null
null
{"license": "unknown"}
dayday12/Killer_Mike
null
[ "license:unknown", "region:us" ]
null
2024-05-03T17:10:14+00:00
null
null
## Llamacpp Quantizations of DuckyBlender/racist-phi3 Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2783">b2783</a> for quantization. Original model: https://huggingface.co/DuckyBlender/racist-phi3
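The card does not show how to run the quantized files. A hedged local-inference sketch with the llama-cpp-python bindings follows; the GGUF filename is a placeholder, since the card does not list which quantization variants were uploaded.

```python
# Hedged sketch: run a GGUF quantization locally via llama-cpp-python.
# The filename is a placeholder -- check the repo's file list for the
# actual quantization variants.
from llama_cpp import Llama

llm = Llama(model_path="racist-phi3.Q4_K_M.gguf", n_ctx=4096)
out = llm("Hello, how are you?", max_tokens=64)
print(out["choices"][0]["text"])
```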
{"language": ["en"], "tags": ["racist", "nsfw", "not-for-all-audiences"], "datasets": ["DuckyBlender/racist-inputoutput"]}
DuckyBlender/racist-phi3-GGUF
null
[ "racist", "nsfw", "not-for-all-audiences", "en", "dataset:DuckyBlender/racist-inputoutput", "region:us" ]
null
2024-05-03T17:10:18+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_virus_covid-seqsight_65536_512_47M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_virus_covid](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_virus_covid) dataset. It achieves the following results on the evaluation set: - Loss: 1.7826 - F1 Score: 0.3230 - Accuracy: 0.3274 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 2.185 | 0.35 | 200 | 2.1838 | 0.0863 | 0.1327 | | 2.1807 | 0.7 | 400 | 2.1804 | 0.0877 | 0.1363 | | 2.1748 | 1.05 | 600 | 2.1744 | 0.1189 | 0.1503 | | 2.1706 | 1.4 | 800 | 2.1665 | 0.0993 | 0.1460 | | 2.1628 | 1.75 | 1000 | 2.1550 | 0.1339 | 0.1621 | | 2.1571 | 2.09 | 1200 | 2.1498 | 0.1296 | 0.1679 | | 2.1487 | 2.44 | 1400 | 2.1426 | 0.1481 | 0.1654 | | 2.1395 | 2.79 | 1600 | 2.1151 | 0.1770 | 0.1963 | | 2.1143 | 3.14 | 1800 | 2.0605 | 0.1929 | 0.2197 | | 2.0738 | 3.49 | 2000 | 2.0280 | 0.2069 | 0.2267 | | 2.056 | 3.84 | 2200 | 1.9958 | 0.2303 | 0.2449 | | 2.0299 | 4.19 | 2400 | 1.9701 | 0.2333 | 0.2469 | | 2.0076 | 4.54 | 2600 | 1.9480 | 0.2426 | 0.2569 | | 2.0016 | 4.89 | 2800 | 1.9330 | 0.2555 | 0.2660 | | 1.9859 | 5.24 | 3000 | 1.9220 | 0.2567 | 0.2687 | | 1.9754 | 5.58 | 3200 | 1.9137 | 0.2599 | 0.2701 | | 1.9647 | 5.93 | 3400 | 1.8988 | 0.2645 | 0.2757 | | 1.9619 | 6.28 | 3600 | 1.8909 | 0.2744 | 0.2805 | | 1.9479 | 6.63 | 3800 | 1.8845 | 0.2699 | 0.2856 | | 1.9448 | 6.98 | 4000 | 1.8778 | 0.2759 | 0.2870 | | 1.9406 | 7.33 | 4200 | 1.8704 | 0.2794 | 0.2935 | | 1.9341 | 7.68 | 4400 | 1.8636 | 0.2925 | 0.2979 | | 1.9291 | 8.03 | 4600 | 1.8638 | 0.2861 | 0.2937 | | 1.9248 | 8.38 | 4800 | 1.8564 | 0.2829 | 0.2965 | | 1.9284 | 8.73 | 5000 | 1.8568 | 0.2824 | 0.2948 | | 1.9183 | 9.08 | 5200 | 1.8473 | 0.2914 | 0.2941 | | 1.9162 | 9.42 | 5400 | 1.8449 | 0.2834 | 0.3003 | | 1.9152 | 9.77 | 5600 | 1.8363 | 0.2969 | 0.3089 | | 1.9113 | 10.12 | 5800 | 1.8348 | 0.3011 | 0.3086 | | 1.9133 | 10.47 | 6000 | 1.8321 | 0.2902 | 0.2989 | | 1.9053 | 10.82 | 6200 | 1.8315 | 0.3019 | 0.3072 | | 1.8974 | 11.17 | 6400 | 1.8236 | 0.3025 | 0.3066 | | 1.9014 | 11.52 | 6600 | 1.8163 | 0.2985 | 0.3068 | | 1.898 | 11.87 | 6800 | 1.8117 | 0.3064 | 0.3160 | | 1.8863 | 12.22 | 7000 | 1.8083 | 0.3052 | 0.3127 | | 1.8874 | 12.57 | 7200 | 1.8044 | 0.3067 | 0.3119 | | 1.8863 | 12.91 | 7400 | 1.8006 | 0.3120 | 0.3189 | | 1.8767 | 13.26 | 7600 | 1.7952 | 0.3067 | 0.3126 | | 1.8833 | 13.61 | 7800 | 1.7948 | 0.3050 | 0.3098 | | 1.8797 | 13.96 | 8000 | 1.7895 | 0.3114 | 0.3176 | | 1.8645 | 14.31 | 8200 | 1.7869 | 0.3120 | 0.3194 | | 1.8744 | 14.66 | 8400 | 1.7856 | 0.3198 | 0.3239 | | 1.8649 | 15.01 | 8600 | 1.7839 | 0.3153 | 0.3206 | | 1.8736 | 15.36 | 8800 | 1.7824 | 0.3191 | 0.3225 | | 1.8607 | 15.71 | 9000 | 
1.7825 | 0.3132 | 0.3192 | | 1.8676 | 16.06 | 9200 | 1.7815 | 0.3143 | 0.3202 | | 1.8671 | 16.4 | 9400 | 1.7803 | 0.3181 | 0.3230 | | 1.8645 | 16.75 | 9600 | 1.7794 | 0.3183 | 0.3220 | | 1.8659 | 17.1 | 9800 | 1.7795 | 0.3168 | 0.3220 | | 1.8662 | 17.45 | 10000 | 1.7790 | 0.3174 | 0.3224 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
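If the adapter is to be deployed without a PEFT dependency, its weights can be folded into the base checkpoint. A hedged sketch: this assumes a LoRA-style adapter, which merge_and_unload supports, and guesses a class count that the card does not state.

```python
# Hedged deployment sketch: merge the adapter above into its base model.
# Assumes a LoRA-style adapter; num_labels=9 is a guess at the number of
# GUE_virus_covid classes, which the card does not document.
from transformers import AutoModelForSequenceClassification
from peft import PeftModel

base = AutoModelForSequenceClassification.from_pretrained(
    "mahdibaghbanzadeh/seqsight_65536_512_47M", num_labels=9
)
merged = PeftModel.from_pretrained(
    base, "mahdibaghbanzadeh/GUE_virus_covid-seqsight_65536_512_47M-L1_f"
).merge_and_unload()
merged.save_pretrained("covid-variant-classifier-merged")
```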
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_virus_covid-seqsight_65536_512_47M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_virus_covid-seqsight_65536_512_47M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_65536_512_47M", "region:us" ]
null
2024-05-03T17:10:19+00:00
text-generation
transformers
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) GritLM-7B - bnb 8bits - Model creator: https://huggingface.co/GritLM/ - Original model: https://huggingface.co/GritLM/GritLM-7B/ Original model description: --- pipeline_tag: text-generation inference: true license: apache-2.0 datasets: - GritLM/tulu2 tags: - mteb model-index: - name: GritLM-7B results: - task: type: Classification dataset: type: mteb/amazon_counterfactual name: MTEB AmazonCounterfactualClassification (en) config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 81.17910447761194 - type: ap value: 46.26260671758199 - type: f1 value: 75.44565719934167 - task: type: Classification dataset: type: mteb/amazon_polarity name: MTEB AmazonPolarityClassification config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 96.5161 - type: ap value: 94.79131981460425 - type: f1 value: 96.51506148413065 - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (en) config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 57.806000000000004 - type: f1 value: 56.78350156257903 - task: type: Retrieval dataset: type: arguana name: MTEB ArguAna config: default split: test revision: None metrics: - type: map_at_1 value: 38.478 - type: map_at_10 value: 54.955 - type: map_at_100 value: 54.955 - type: map_at_1000 value: 54.955 - type: map_at_3 value: 50.888999999999996 - type: map_at_5 value: 53.349999999999994 - type: mrr_at_1 value: 39.757999999999996 - type: mrr_at_10 value: 55.449000000000005 - type: mrr_at_100 value: 55.449000000000005 - type: mrr_at_1000 value: 55.449000000000005 - type: mrr_at_3 value: 51.37500000000001 - type: mrr_at_5 value: 53.822 - type: ndcg_at_1 value: 38.478 - type: ndcg_at_10 value: 63.239999999999995 - type: ndcg_at_100 value: 63.239999999999995 - type: ndcg_at_1000 value: 63.239999999999995 - type: ndcg_at_3 value: 54.935 - type: ndcg_at_5 value: 59.379000000000005 - type: precision_at_1 value: 38.478 - type: precision_at_10 value: 8.933 - type: precision_at_100 value: 0.893 - type: precision_at_1000 value: 0.089 - type: precision_at_3 value: 22.214 - type: precision_at_5 value: 15.491 - type: recall_at_1 value: 38.478 - type: recall_at_10 value: 89.331 - type: recall_at_100 value: 89.331 - type: recall_at_1000 value: 89.331 - type: recall_at_3 value: 66.643 - type: recall_at_5 value: 77.45400000000001 - task: type: Clustering dataset: type: mteb/arxiv-clustering-p2p name: MTEB ArxivClusteringP2P config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 51.67144081472449 - task: type: Clustering dataset: type: mteb/arxiv-clustering-s2s name: MTEB ArxivClusteringS2S config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 48.11256154264126 - task: type: Reranking dataset: type: mteb/askubuntudupquestions-reranking name: MTEB AskUbuntuDupQuestions config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 67.33801955487878 - type: mrr value: 80.71549487754474 - task: type: STS dataset: type: mteb/biosses-sts name: MTEB BIOSSES config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: 
cos_sim_pearson value: 88.1935203751726 - type: cos_sim_spearman value: 86.35497970498659 - type: euclidean_pearson value: 85.46910708503744 - type: euclidean_spearman value: 85.13928935405485 - type: manhattan_pearson value: 85.68373836333303 - type: manhattan_spearman value: 85.40013867117746 - task: type: Classification dataset: type: mteb/banking77 name: MTEB Banking77Classification config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 88.46753246753248 - type: f1 value: 88.43006344981134 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-p2p name: MTEB BiorxivClusteringP2P config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 40.86793640310432 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-s2s name: MTEB BiorxivClusteringS2S config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 39.80291334130727 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackAndroidRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 38.421 - type: map_at_10 value: 52.349000000000004 - type: map_at_100 value: 52.349000000000004 - type: map_at_1000 value: 52.349000000000004 - type: map_at_3 value: 48.17 - type: map_at_5 value: 50.432 - type: mrr_at_1 value: 47.353 - type: mrr_at_10 value: 58.387 - type: mrr_at_100 value: 58.387 - type: mrr_at_1000 value: 58.387 - type: mrr_at_3 value: 56.199 - type: mrr_at_5 value: 57.487 - type: ndcg_at_1 value: 47.353 - type: ndcg_at_10 value: 59.202 - type: ndcg_at_100 value: 58.848 - type: ndcg_at_1000 value: 58.831999999999994 - type: ndcg_at_3 value: 54.112 - type: ndcg_at_5 value: 56.312 - type: precision_at_1 value: 47.353 - type: precision_at_10 value: 11.459 - type: precision_at_100 value: 1.146 - type: precision_at_1000 value: 0.11499999999999999 - type: precision_at_3 value: 26.133 - type: precision_at_5 value: 18.627 - type: recall_at_1 value: 38.421 - type: recall_at_10 value: 71.89 - type: recall_at_100 value: 71.89 - type: recall_at_1000 value: 71.89 - type: recall_at_3 value: 56.58 - type: recall_at_5 value: 63.125 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackEnglishRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 38.025999999999996 - type: map_at_10 value: 50.590999999999994 - type: map_at_100 value: 51.99700000000001 - type: map_at_1000 value: 52.11599999999999 - type: map_at_3 value: 47.435 - type: map_at_5 value: 49.236000000000004 - type: mrr_at_1 value: 48.28 - type: mrr_at_10 value: 56.814 - type: mrr_at_100 value: 57.446 - type: mrr_at_1000 value: 57.476000000000006 - type: mrr_at_3 value: 54.958 - type: mrr_at_5 value: 56.084999999999994 - type: ndcg_at_1 value: 48.28 - type: ndcg_at_10 value: 56.442 - type: ndcg_at_100 value: 60.651999999999994 - type: ndcg_at_1000 value: 62.187000000000005 - type: ndcg_at_3 value: 52.866 - type: ndcg_at_5 value: 54.515 - type: precision_at_1 value: 48.28 - type: precision_at_10 value: 10.586 - type: precision_at_100 value: 1.6310000000000002 - type: precision_at_1000 value: 0.20600000000000002 - type: precision_at_3 value: 25.945 - type: precision_at_5 value: 18.076 - type: recall_at_1 value: 38.025999999999996 - type: recall_at_10 value: 66.11399999999999 - type: recall_at_100 value: 83.339 - type: recall_at_1000 value: 92.413 - type: recall_at_3 value: 54.493 - type: recall_at_5 value: 
59.64699999999999 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackGamingRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 47.905 - type: map_at_10 value: 61.58 - type: map_at_100 value: 62.605 - type: map_at_1000 value: 62.637 - type: map_at_3 value: 58.074000000000005 - type: map_at_5 value: 60.260000000000005 - type: mrr_at_1 value: 54.42 - type: mrr_at_10 value: 64.847 - type: mrr_at_100 value: 65.403 - type: mrr_at_1000 value: 65.41900000000001 - type: mrr_at_3 value: 62.675000000000004 - type: mrr_at_5 value: 64.101 - type: ndcg_at_1 value: 54.42 - type: ndcg_at_10 value: 67.394 - type: ndcg_at_100 value: 70.846 - type: ndcg_at_1000 value: 71.403 - type: ndcg_at_3 value: 62.025 - type: ndcg_at_5 value: 65.032 - type: precision_at_1 value: 54.42 - type: precision_at_10 value: 10.646 - type: precision_at_100 value: 1.325 - type: precision_at_1000 value: 0.13999999999999999 - type: precision_at_3 value: 27.398 - type: precision_at_5 value: 18.796 - type: recall_at_1 value: 47.905 - type: recall_at_10 value: 80.84599999999999 - type: recall_at_100 value: 95.078 - type: recall_at_1000 value: 98.878 - type: recall_at_3 value: 67.05600000000001 - type: recall_at_5 value: 74.261 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackGisRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 30.745 - type: map_at_10 value: 41.021 - type: map_at_100 value: 41.021 - type: map_at_1000 value: 41.021 - type: map_at_3 value: 37.714999999999996 - type: map_at_5 value: 39.766 - type: mrr_at_1 value: 33.559 - type: mrr_at_10 value: 43.537 - type: mrr_at_100 value: 43.537 - type: mrr_at_1000 value: 43.537 - type: mrr_at_3 value: 40.546 - type: mrr_at_5 value: 42.439 - type: ndcg_at_1 value: 33.559 - type: ndcg_at_10 value: 46.781 - type: ndcg_at_100 value: 46.781 - type: ndcg_at_1000 value: 46.781 - type: ndcg_at_3 value: 40.516000000000005 - type: ndcg_at_5 value: 43.957 - type: precision_at_1 value: 33.559 - type: precision_at_10 value: 7.198 - type: precision_at_100 value: 0.72 - type: precision_at_1000 value: 0.07200000000000001 - type: precision_at_3 value: 17.1 - type: precision_at_5 value: 12.316 - type: recall_at_1 value: 30.745 - type: recall_at_10 value: 62.038000000000004 - type: recall_at_100 value: 62.038000000000004 - type: recall_at_1000 value: 62.038000000000004 - type: recall_at_3 value: 45.378 - type: recall_at_5 value: 53.580000000000005 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackMathematicaRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 19.637999999999998 - type: map_at_10 value: 31.05 - type: map_at_100 value: 31.05 - type: map_at_1000 value: 31.05 - type: map_at_3 value: 27.628000000000004 - type: map_at_5 value: 29.767 - type: mrr_at_1 value: 25.0 - type: mrr_at_10 value: 36.131 - type: mrr_at_100 value: 36.131 - type: mrr_at_1000 value: 36.131 - type: mrr_at_3 value: 33.333 - type: mrr_at_5 value: 35.143 - type: ndcg_at_1 value: 25.0 - type: ndcg_at_10 value: 37.478 - type: ndcg_at_100 value: 37.469 - type: ndcg_at_1000 value: 37.469 - type: ndcg_at_3 value: 31.757999999999996 - type: ndcg_at_5 value: 34.821999999999996 - type: precision_at_1 value: 25.0 - type: precision_at_10 value: 7.188999999999999 - type: precision_at_100 value: 0.719 - type: precision_at_1000 value: 0.07200000000000001 - type: precision_at_3 value: 15.837000000000002 - type: precision_at_5 value: 11.841 - type: 
recall_at_1 value: 19.637999999999998 - type: recall_at_10 value: 51.836000000000006 - type: recall_at_100 value: 51.836000000000006 - type: recall_at_1000 value: 51.836000000000006 - type: recall_at_3 value: 36.384 - type: recall_at_5 value: 43.964 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackPhysicsRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 34.884 - type: map_at_10 value: 47.88 - type: map_at_100 value: 47.88 - type: map_at_1000 value: 47.88 - type: map_at_3 value: 43.85 - type: map_at_5 value: 46.414 - type: mrr_at_1 value: 43.022 - type: mrr_at_10 value: 53.569 - type: mrr_at_100 value: 53.569 - type: mrr_at_1000 value: 53.569 - type: mrr_at_3 value: 51.075 - type: mrr_at_5 value: 52.725 - type: ndcg_at_1 value: 43.022 - type: ndcg_at_10 value: 54.461000000000006 - type: ndcg_at_100 value: 54.388000000000005 - type: ndcg_at_1000 value: 54.388000000000005 - type: ndcg_at_3 value: 48.864999999999995 - type: ndcg_at_5 value: 52.032000000000004 - type: precision_at_1 value: 43.022 - type: precision_at_10 value: 9.885 - type: precision_at_100 value: 0.988 - type: precision_at_1000 value: 0.099 - type: precision_at_3 value: 23.612 - type: precision_at_5 value: 16.997 - type: recall_at_1 value: 34.884 - type: recall_at_10 value: 68.12899999999999 - type: recall_at_100 value: 68.12899999999999 - type: recall_at_1000 value: 68.12899999999999 - type: recall_at_3 value: 52.428 - type: recall_at_5 value: 60.662000000000006 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackProgrammersRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 31.588 - type: map_at_10 value: 43.85 - type: map_at_100 value: 45.317 - type: map_at_1000 value: 45.408 - type: map_at_3 value: 39.73 - type: map_at_5 value: 42.122 - type: mrr_at_1 value: 38.927 - type: mrr_at_10 value: 49.582 - type: mrr_at_100 value: 50.39 - type: mrr_at_1000 value: 50.426 - type: mrr_at_3 value: 46.518 - type: mrr_at_5 value: 48.271 - type: ndcg_at_1 value: 38.927 - type: ndcg_at_10 value: 50.605999999999995 - type: ndcg_at_100 value: 56.22200000000001 - type: ndcg_at_1000 value: 57.724 - type: ndcg_at_3 value: 44.232 - type: ndcg_at_5 value: 47.233999999999995 - type: precision_at_1 value: 38.927 - type: precision_at_10 value: 9.429 - type: precision_at_100 value: 1.435 - type: precision_at_1000 value: 0.172 - type: precision_at_3 value: 21.271 - type: precision_at_5 value: 15.434000000000001 - type: recall_at_1 value: 31.588 - type: recall_at_10 value: 64.836 - type: recall_at_100 value: 88.066 - type: recall_at_1000 value: 97.748 - type: recall_at_3 value: 47.128 - type: recall_at_5 value: 54.954 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 31.956083333333336 - type: map_at_10 value: 43.33483333333333 - type: map_at_100 value: 44.64883333333333 - type: map_at_1000 value: 44.75 - type: map_at_3 value: 39.87741666666666 - type: map_at_5 value: 41.86766666666667 - type: mrr_at_1 value: 38.06341666666667 - type: mrr_at_10 value: 47.839666666666666 - type: mrr_at_100 value: 48.644000000000005 - type: mrr_at_1000 value: 48.68566666666667 - type: mrr_at_3 value: 45.26358333333334 - type: mrr_at_5 value: 46.790000000000006 - type: ndcg_at_1 value: 38.06341666666667 - type: ndcg_at_10 value: 49.419333333333334 - type: ndcg_at_100 value: 54.50166666666667 - type: ndcg_at_1000 value: 56.161166666666674 
- type: ndcg_at_3 value: 43.982416666666666 - type: ndcg_at_5 value: 46.638083333333334 - type: precision_at_1 value: 38.06341666666667 - type: precision_at_10 value: 8.70858333333333 - type: precision_at_100 value: 1.327 - type: precision_at_1000 value: 0.165 - type: precision_at_3 value: 20.37816666666667 - type: precision_at_5 value: 14.516333333333334 - type: recall_at_1 value: 31.956083333333336 - type: recall_at_10 value: 62.69458333333334 - type: recall_at_100 value: 84.46433333333334 - type: recall_at_1000 value: 95.58449999999999 - type: recall_at_3 value: 47.52016666666666 - type: recall_at_5 value: 54.36066666666666 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackStatsRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 28.912 - type: map_at_10 value: 38.291 - type: map_at_100 value: 39.44 - type: map_at_1000 value: 39.528 - type: map_at_3 value: 35.638 - type: map_at_5 value: 37.218 - type: mrr_at_1 value: 32.822 - type: mrr_at_10 value: 41.661 - type: mrr_at_100 value: 42.546 - type: mrr_at_1000 value: 42.603 - type: mrr_at_3 value: 39.238 - type: mrr_at_5 value: 40.726 - type: ndcg_at_1 value: 32.822 - type: ndcg_at_10 value: 43.373 - type: ndcg_at_100 value: 48.638 - type: ndcg_at_1000 value: 50.654999999999994 - type: ndcg_at_3 value: 38.643 - type: ndcg_at_5 value: 41.126000000000005 - type: precision_at_1 value: 32.822 - type: precision_at_10 value: 6.8709999999999996 - type: precision_at_100 value: 1.032 - type: precision_at_1000 value: 0.128 - type: precision_at_3 value: 16.82 - type: precision_at_5 value: 11.718 - type: recall_at_1 value: 28.912 - type: recall_at_10 value: 55.376999999999995 - type: recall_at_100 value: 79.066 - type: recall_at_1000 value: 93.664 - type: recall_at_3 value: 42.569 - type: recall_at_5 value: 48.719 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackTexRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 22.181 - type: map_at_10 value: 31.462 - type: map_at_100 value: 32.73 - type: map_at_1000 value: 32.848 - type: map_at_3 value: 28.57 - type: map_at_5 value: 30.182 - type: mrr_at_1 value: 27.185 - type: mrr_at_10 value: 35.846000000000004 - type: mrr_at_100 value: 36.811 - type: mrr_at_1000 value: 36.873 - type: mrr_at_3 value: 33.437 - type: mrr_at_5 value: 34.813 - type: ndcg_at_1 value: 27.185 - type: ndcg_at_10 value: 36.858000000000004 - type: ndcg_at_100 value: 42.501 - type: ndcg_at_1000 value: 44.945 - type: ndcg_at_3 value: 32.066 - type: ndcg_at_5 value: 34.29 - type: precision_at_1 value: 27.185 - type: precision_at_10 value: 6.752 - type: precision_at_100 value: 1.111 - type: precision_at_1000 value: 0.151 - type: precision_at_3 value: 15.290000000000001 - type: precision_at_5 value: 11.004999999999999 - type: recall_at_1 value: 22.181 - type: recall_at_10 value: 48.513 - type: recall_at_100 value: 73.418 - type: recall_at_1000 value: 90.306 - type: recall_at_3 value: 35.003 - type: recall_at_5 value: 40.876000000000005 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackUnixRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 33.934999999999995 - type: map_at_10 value: 44.727 - type: map_at_100 value: 44.727 - type: map_at_1000 value: 44.727 - type: map_at_3 value: 40.918 - type: map_at_5 value: 42.961 - type: mrr_at_1 value: 39.646 - type: mrr_at_10 value: 48.898 - type: mrr_at_100 value: 48.898 - type: mrr_at_1000 value: 48.898 - type: 
mrr_at_3 value: 45.896 - type: mrr_at_5 value: 47.514 - type: ndcg_at_1 value: 39.646 - type: ndcg_at_10 value: 50.817 - type: ndcg_at_100 value: 50.803 - type: ndcg_at_1000 value: 50.803 - type: ndcg_at_3 value: 44.507999999999996 - type: ndcg_at_5 value: 47.259 - type: precision_at_1 value: 39.646 - type: precision_at_10 value: 8.759 - type: precision_at_100 value: 0.876 - type: precision_at_1000 value: 0.08800000000000001 - type: precision_at_3 value: 20.274 - type: precision_at_5 value: 14.366000000000001 - type: recall_at_1 value: 33.934999999999995 - type: recall_at_10 value: 65.037 - type: recall_at_100 value: 65.037 - type: recall_at_1000 value: 65.037 - type: recall_at_3 value: 47.439 - type: recall_at_5 value: 54.567 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackWebmastersRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 32.058 - type: map_at_10 value: 43.137 - type: map_at_100 value: 43.137 - type: map_at_1000 value: 43.137 - type: map_at_3 value: 39.882 - type: map_at_5 value: 41.379 - type: mrr_at_1 value: 38.933 - type: mrr_at_10 value: 48.344 - type: mrr_at_100 value: 48.344 - type: mrr_at_1000 value: 48.344 - type: mrr_at_3 value: 45.652 - type: mrr_at_5 value: 46.877 - type: ndcg_at_1 value: 38.933 - type: ndcg_at_10 value: 49.964 - type: ndcg_at_100 value: 49.242000000000004 - type: ndcg_at_1000 value: 49.222 - type: ndcg_at_3 value: 44.605 - type: ndcg_at_5 value: 46.501999999999995 - type: precision_at_1 value: 38.933 - type: precision_at_10 value: 9.427000000000001 - type: precision_at_100 value: 0.943 - type: precision_at_1000 value: 0.094 - type: precision_at_3 value: 20.685000000000002 - type: precision_at_5 value: 14.585 - type: recall_at_1 value: 32.058 - type: recall_at_10 value: 63.074 - type: recall_at_100 value: 63.074 - type: recall_at_1000 value: 63.074 - type: recall_at_3 value: 47.509 - type: recall_at_5 value: 52.455 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackWordpressRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 26.029000000000003 - type: map_at_10 value: 34.646 - type: map_at_100 value: 34.646 - type: map_at_1000 value: 34.646 - type: map_at_3 value: 31.456 - type: map_at_5 value: 33.138 - type: mrr_at_1 value: 28.281 - type: mrr_at_10 value: 36.905 - type: mrr_at_100 value: 36.905 - type: mrr_at_1000 value: 36.905 - type: mrr_at_3 value: 34.011 - type: mrr_at_5 value: 35.638 - type: ndcg_at_1 value: 28.281 - type: ndcg_at_10 value: 40.159 - type: ndcg_at_100 value: 40.159 - type: ndcg_at_1000 value: 40.159 - type: ndcg_at_3 value: 33.995 - type: ndcg_at_5 value: 36.836999999999996 - type: precision_at_1 value: 28.281 - type: precision_at_10 value: 6.358999999999999 - type: precision_at_100 value: 0.636 - type: precision_at_1000 value: 0.064 - type: precision_at_3 value: 14.233 - type: precision_at_5 value: 10.314 - type: recall_at_1 value: 26.029000000000003 - type: recall_at_10 value: 55.08 - type: recall_at_100 value: 55.08 - type: recall_at_1000 value: 55.08 - type: recall_at_3 value: 38.487 - type: recall_at_5 value: 45.308 - task: type: Retrieval dataset: type: climate-fever name: MTEB ClimateFEVER config: default split: test revision: None metrics: - type: map_at_1 value: 12.842999999999998 - type: map_at_10 value: 22.101000000000003 - type: map_at_100 value: 24.319 - type: map_at_1000 value: 24.51 - type: map_at_3 value: 18.372 - type: map_at_5 value: 20.323 - type: mrr_at_1 value: 27.948 - type: 
mrr_at_10 value: 40.321 - type: mrr_at_100 value: 41.262 - type: mrr_at_1000 value: 41.297 - type: mrr_at_3 value: 36.558 - type: mrr_at_5 value: 38.824999999999996 - type: ndcg_at_1 value: 27.948 - type: ndcg_at_10 value: 30.906 - type: ndcg_at_100 value: 38.986 - type: ndcg_at_1000 value: 42.136 - type: ndcg_at_3 value: 24.911 - type: ndcg_at_5 value: 27.168999999999997 - type: precision_at_1 value: 27.948 - type: precision_at_10 value: 9.798 - type: precision_at_100 value: 1.8399999999999999 - type: precision_at_1000 value: 0.243 - type: precision_at_3 value: 18.328 - type: precision_at_5 value: 14.502 - type: recall_at_1 value: 12.842999999999998 - type: recall_at_10 value: 37.245 - type: recall_at_100 value: 64.769 - type: recall_at_1000 value: 82.055 - type: recall_at_3 value: 23.159 - type: recall_at_5 value: 29.113 - task: type: Retrieval dataset: type: dbpedia-entity name: MTEB DBPedia config: default split: test revision: None metrics: - type: map_at_1 value: 8.934000000000001 - type: map_at_10 value: 21.915000000000003 - type: map_at_100 value: 21.915000000000003 - type: map_at_1000 value: 21.915000000000003 - type: map_at_3 value: 14.623 - type: map_at_5 value: 17.841 - type: mrr_at_1 value: 71.25 - type: mrr_at_10 value: 78.994 - type: mrr_at_100 value: 78.994 - type: mrr_at_1000 value: 78.994 - type: mrr_at_3 value: 77.208 - type: mrr_at_5 value: 78.55799999999999 - type: ndcg_at_1 value: 60.62499999999999 - type: ndcg_at_10 value: 46.604 - type: ndcg_at_100 value: 35.653 - type: ndcg_at_1000 value: 35.531 - type: ndcg_at_3 value: 50.605 - type: ndcg_at_5 value: 48.730000000000004 - type: precision_at_1 value: 71.25 - type: precision_at_10 value: 37.75 - type: precision_at_100 value: 3.775 - type: precision_at_1000 value: 0.377 - type: precision_at_3 value: 54.417 - type: precision_at_5 value: 48.15 - type: recall_at_1 value: 8.934000000000001 - type: recall_at_10 value: 28.471000000000004 - type: recall_at_100 value: 28.471000000000004 - type: recall_at_1000 value: 28.471000000000004 - type: recall_at_3 value: 16.019 - type: recall_at_5 value: 21.410999999999998 - task: type: Classification dataset: type: mteb/emotion name: MTEB EmotionClassification config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 52.81 - type: f1 value: 47.987573380720114 - task: type: Retrieval dataset: type: fever name: MTEB FEVER config: default split: test revision: None metrics: - type: map_at_1 value: 66.81899999999999 - type: map_at_10 value: 78.034 - type: map_at_100 value: 78.034 - type: map_at_1000 value: 78.034 - type: map_at_3 value: 76.43100000000001 - type: map_at_5 value: 77.515 - type: mrr_at_1 value: 71.542 - type: mrr_at_10 value: 81.638 - type: mrr_at_100 value: 81.638 - type: mrr_at_1000 value: 81.638 - type: mrr_at_3 value: 80.403 - type: mrr_at_5 value: 81.256 - type: ndcg_at_1 value: 71.542 - type: ndcg_at_10 value: 82.742 - type: ndcg_at_100 value: 82.741 - type: ndcg_at_1000 value: 82.741 - type: ndcg_at_3 value: 80.039 - type: ndcg_at_5 value: 81.695 - type: precision_at_1 value: 71.542 - type: precision_at_10 value: 10.387 - type: precision_at_100 value: 1.039 - type: precision_at_1000 value: 0.104 - type: precision_at_3 value: 31.447999999999997 - type: precision_at_5 value: 19.91 - type: recall_at_1 value: 66.81899999999999 - type: recall_at_10 value: 93.372 - type: recall_at_100 value: 93.372 - type: recall_at_1000 value: 93.372 - type: recall_at_3 value: 86.33 - type: recall_at_5 value: 90.347 - task: type: 
Retrieval dataset: type: fiqa name: MTEB FiQA2018 config: default split: test revision: None metrics: - type: map_at_1 value: 31.158 - type: map_at_10 value: 52.017 - type: map_at_100 value: 54.259 - type: map_at_1000 value: 54.367 - type: map_at_3 value: 45.738 - type: map_at_5 value: 49.283 - type: mrr_at_1 value: 57.87 - type: mrr_at_10 value: 66.215 - type: mrr_at_100 value: 66.735 - type: mrr_at_1000 value: 66.75 - type: mrr_at_3 value: 64.043 - type: mrr_at_5 value: 65.116 - type: ndcg_at_1 value: 57.87 - type: ndcg_at_10 value: 59.946999999999996 - type: ndcg_at_100 value: 66.31099999999999 - type: ndcg_at_1000 value: 67.75999999999999 - type: ndcg_at_3 value: 55.483000000000004 - type: ndcg_at_5 value: 56.891000000000005 - type: precision_at_1 value: 57.87 - type: precision_at_10 value: 16.497 - type: precision_at_100 value: 2.321 - type: precision_at_1000 value: 0.258 - type: precision_at_3 value: 37.14 - type: precision_at_5 value: 27.067999999999998 - type: recall_at_1 value: 31.158 - type: recall_at_10 value: 67.381 - type: recall_at_100 value: 89.464 - type: recall_at_1000 value: 97.989 - type: recall_at_3 value: 50.553000000000004 - type: recall_at_5 value: 57.824 - task: type: Retrieval dataset: type: hotpotqa name: MTEB HotpotQA config: default split: test revision: None metrics: - type: map_at_1 value: 42.073 - type: map_at_10 value: 72.418 - type: map_at_100 value: 73.175 - type: map_at_1000 value: 73.215 - type: map_at_3 value: 68.791 - type: map_at_5 value: 71.19 - type: mrr_at_1 value: 84.146 - type: mrr_at_10 value: 88.994 - type: mrr_at_100 value: 89.116 - type: mrr_at_1000 value: 89.12 - type: mrr_at_3 value: 88.373 - type: mrr_at_5 value: 88.82 - type: ndcg_at_1 value: 84.146 - type: ndcg_at_10 value: 79.404 - type: ndcg_at_100 value: 81.83200000000001 - type: ndcg_at_1000 value: 82.524 - type: ndcg_at_3 value: 74.595 - type: ndcg_at_5 value: 77.474 - type: precision_at_1 value: 84.146 - type: precision_at_10 value: 16.753999999999998 - type: precision_at_100 value: 1.8599999999999999 - type: precision_at_1000 value: 0.19499999999999998 - type: precision_at_3 value: 48.854 - type: precision_at_5 value: 31.579 - type: recall_at_1 value: 42.073 - type: recall_at_10 value: 83.768 - type: recall_at_100 value: 93.018 - type: recall_at_1000 value: 97.481 - type: recall_at_3 value: 73.282 - type: recall_at_5 value: 78.947 - task: type: Classification dataset: type: mteb/imdb name: MTEB ImdbClassification config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 94.9968 - type: ap value: 92.93892195862824 - type: f1 value: 94.99327998213761 - task: type: Retrieval dataset: type: msmarco name: MTEB MSMARCO config: default split: dev revision: None metrics: - type: map_at_1 value: 21.698 - type: map_at_10 value: 34.585 - type: map_at_100 value: 35.782000000000004 - type: map_at_1000 value: 35.825 - type: map_at_3 value: 30.397999999999996 - type: map_at_5 value: 32.72 - type: mrr_at_1 value: 22.192 - type: mrr_at_10 value: 35.085 - type: mrr_at_100 value: 36.218 - type: mrr_at_1000 value: 36.256 - type: mrr_at_3 value: 30.986000000000004 - type: mrr_at_5 value: 33.268 - type: ndcg_at_1 value: 22.192 - type: ndcg_at_10 value: 41.957 - type: ndcg_at_100 value: 47.658 - type: ndcg_at_1000 value: 48.697 - type: ndcg_at_3 value: 33.433 - type: ndcg_at_5 value: 37.551 - type: precision_at_1 value: 22.192 - type: precision_at_10 value: 6.781 - type: precision_at_100 value: 0.963 - type: precision_at_1000 value: 0.105 - type: 
precision_at_3 value: 14.365 - type: precision_at_5 value: 10.713000000000001 - type: recall_at_1 value: 21.698 - type: recall_at_10 value: 64.79 - type: recall_at_100 value: 91.071 - type: recall_at_1000 value: 98.883 - type: recall_at_3 value: 41.611 - type: recall_at_5 value: 51.459999999999994 - task: type: Classification dataset: type: mteb/mtop_domain name: MTEB MTOPDomainClassification (en) config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 96.15823073415413 - type: f1 value: 96.00362034963248 - task: type: Classification dataset: type: mteb/mtop_intent name: MTEB MTOPIntentClassification (en) config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 87.12722298221614 - type: f1 value: 70.46888967516227 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (en) config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 80.77673167451245 - type: f1 value: 77.60202561132175 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (en) config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 82.09145931405514 - type: f1 value: 81.7701921473406 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-p2p name: MTEB MedrxivClusteringP2P config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 36.52153488185864 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-s2s name: MTEB MedrxivClusteringS2S config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 36.80090398444147 - task: type: Reranking dataset: type: mteb/mind_small name: MTEB MindSmallReranking config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 31.807141746058605 - type: mrr value: 32.85025611455029 - task: type: Retrieval dataset: type: nfcorpus name: MTEB NFCorpus config: default split: test revision: None metrics: - type: map_at_1 value: 6.920999999999999 - type: map_at_10 value: 16.049 - type: map_at_100 value: 16.049 - type: map_at_1000 value: 16.049 - type: map_at_3 value: 11.865 - type: map_at_5 value: 13.657 - type: mrr_at_1 value: 53.87 - type: mrr_at_10 value: 62.291 - type: mrr_at_100 value: 62.291 - type: mrr_at_1000 value: 62.291 - type: mrr_at_3 value: 60.681 - type: mrr_at_5 value: 61.61 - type: ndcg_at_1 value: 51.23799999999999 - type: ndcg_at_10 value: 40.892 - type: ndcg_at_100 value: 26.951999999999998 - type: ndcg_at_1000 value: 26.474999999999998 - type: ndcg_at_3 value: 46.821 - type: ndcg_at_5 value: 44.333 - type: precision_at_1 value: 53.251000000000005 - type: precision_at_10 value: 30.124000000000002 - type: precision_at_100 value: 3.012 - type: precision_at_1000 value: 0.301 - type: precision_at_3 value: 43.55 - type: precision_at_5 value: 38.266 - type: recall_at_1 value: 6.920999999999999 - type: recall_at_10 value: 20.852 - type: recall_at_100 value: 20.852 - type: recall_at_1000 value: 20.852 - type: recall_at_3 value: 13.628000000000002 - type: recall_at_5 value: 16.273 - task: type: Retrieval dataset: type: nq name: MTEB NQ config: default split: test revision: None metrics: - type: map_at_1 value: 46.827999999999996 - type: map_at_10 value: 63.434000000000005 - type: map_at_100 value: 
63.434000000000005 - type: map_at_1000 value: 63.434000000000005 - type: map_at_3 value: 59.794000000000004 - type: map_at_5 value: 62.08 - type: mrr_at_1 value: 52.288999999999994 - type: mrr_at_10 value: 65.95 - type: mrr_at_100 value: 65.95 - type: mrr_at_1000 value: 65.95 - type: mrr_at_3 value: 63.413 - type: mrr_at_5 value: 65.08 - type: ndcg_at_1 value: 52.288999999999994 - type: ndcg_at_10 value: 70.301 - type: ndcg_at_100 value: 70.301 - type: ndcg_at_1000 value: 70.301 - type: ndcg_at_3 value: 63.979 - type: ndcg_at_5 value: 67.582 - type: precision_at_1 value: 52.288999999999994 - type: precision_at_10 value: 10.576 - type: precision_at_100 value: 1.058 - type: precision_at_1000 value: 0.106 - type: precision_at_3 value: 28.177000000000003 - type: precision_at_5 value: 19.073 - type: recall_at_1 value: 46.827999999999996 - type: recall_at_10 value: 88.236 - type: recall_at_100 value: 88.236 - type: recall_at_1000 value: 88.236 - type: recall_at_3 value: 72.371 - type: recall_at_5 value: 80.56 - task: type: Retrieval dataset: type: quora name: MTEB QuoraRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 71.652 - type: map_at_10 value: 85.953 - type: map_at_100 value: 85.953 - type: map_at_1000 value: 85.953 - type: map_at_3 value: 83.05399999999999 - type: map_at_5 value: 84.89 - type: mrr_at_1 value: 82.42 - type: mrr_at_10 value: 88.473 - type: mrr_at_100 value: 88.473 - type: mrr_at_1000 value: 88.473 - type: mrr_at_3 value: 87.592 - type: mrr_at_5 value: 88.211 - type: ndcg_at_1 value: 82.44 - type: ndcg_at_10 value: 89.467 - type: ndcg_at_100 value: 89.33 - type: ndcg_at_1000 value: 89.33 - type: ndcg_at_3 value: 86.822 - type: ndcg_at_5 value: 88.307 - type: precision_at_1 value: 82.44 - type: precision_at_10 value: 13.616 - type: precision_at_100 value: 1.362 - type: precision_at_1000 value: 0.136 - type: precision_at_3 value: 38.117000000000004 - type: precision_at_5 value: 25.05 - type: recall_at_1 value: 71.652 - type: recall_at_10 value: 96.224 - type: recall_at_100 value: 96.224 - type: recall_at_1000 value: 96.224 - type: recall_at_3 value: 88.571 - type: recall_at_5 value: 92.812 - task: type: Clustering dataset: type: mteb/reddit-clustering name: MTEB RedditClustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 61.295010338050474 - task: type: Clustering dataset: type: mteb/reddit-clustering-p2p name: MTEB RedditClusteringP2P config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 67.26380819328142 - task: type: Retrieval dataset: type: scidocs name: MTEB SCIDOCS config: default split: test revision: None metrics: - type: map_at_1 value: 5.683 - type: map_at_10 value: 14.924999999999999 - type: map_at_100 value: 17.532 - type: map_at_1000 value: 17.875 - type: map_at_3 value: 10.392 - type: map_at_5 value: 12.592 - type: mrr_at_1 value: 28.000000000000004 - type: mrr_at_10 value: 39.951 - type: mrr_at_100 value: 41.025 - type: mrr_at_1000 value: 41.056 - type: mrr_at_3 value: 36.317 - type: mrr_at_5 value: 38.412 - type: ndcg_at_1 value: 28.000000000000004 - type: ndcg_at_10 value: 24.410999999999998 - type: ndcg_at_100 value: 33.79 - type: ndcg_at_1000 value: 39.035 - type: ndcg_at_3 value: 22.845 - type: ndcg_at_5 value: 20.080000000000002 - type: precision_at_1 value: 28.000000000000004 - type: precision_at_10 value: 12.790000000000001 - type: precision_at_100 value: 2.633 - type: precision_at_1000 
value: 0.388 - type: precision_at_3 value: 21.367 - type: precision_at_5 value: 17.7 - type: recall_at_1 value: 5.683 - type: recall_at_10 value: 25.91 - type: recall_at_100 value: 53.443 - type: recall_at_1000 value: 78.73 - type: recall_at_3 value: 13.003 - type: recall_at_5 value: 17.932000000000002 - task: type: STS dataset: type: mteb/sickr-sts name: MTEB SICK-R config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 84.677978681023 - type: cos_sim_spearman value: 83.13093441058189 - type: euclidean_pearson value: 83.35535759341572 - type: euclidean_spearman value: 83.42583744219611 - type: manhattan_pearson value: 83.2243124045889 - type: manhattan_spearman value: 83.39801618652632 - task: type: STS dataset: type: mteb/sts12-sts name: MTEB STS12 config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 81.68960206569666 - type: cos_sim_spearman value: 77.3368966488535 - type: euclidean_pearson value: 77.62828980560303 - type: euclidean_spearman value: 76.77951481444651 - type: manhattan_pearson value: 77.88637240839041 - type: manhattan_spearman value: 77.22157841466188 - task: type: STS dataset: type: mteb/sts13-sts name: MTEB STS13 config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 84.18745821650724 - type: cos_sim_spearman value: 85.04423285574542 - type: euclidean_pearson value: 85.46604816931023 - type: euclidean_spearman value: 85.5230593932974 - type: manhattan_pearson value: 85.57912805986261 - type: manhattan_spearman value: 85.65955905111873 - task: type: STS dataset: type: mteb/sts14-sts name: MTEB STS14 config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 83.6715333300355 - type: cos_sim_spearman value: 82.9058522514908 - type: euclidean_pearson value: 83.9640357424214 - type: euclidean_spearman value: 83.60415457472637 - type: manhattan_pearson value: 84.05621005853469 - type: manhattan_spearman value: 83.87077724707746 - task: type: STS dataset: type: mteb/sts15-sts name: MTEB STS15 config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 87.82422928098886 - type: cos_sim_spearman value: 88.12660311894628 - type: euclidean_pearson value: 87.50974805056555 - type: euclidean_spearman value: 87.91957275596677 - type: manhattan_pearson value: 87.74119404878883 - type: manhattan_spearman value: 88.2808922165719 - task: type: STS dataset: type: mteb/sts16-sts name: MTEB STS16 config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 84.80605838552093 - type: cos_sim_spearman value: 86.24123388765678 - type: euclidean_pearson value: 85.32648347339814 - type: euclidean_spearman value: 85.60046671950158 - type: manhattan_pearson value: 85.53800168487811 - type: manhattan_spearman value: 85.89542420480763 - task: type: STS dataset: type: mteb/sts17-crosslingual-sts name: MTEB STS17 (en-en) config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 89.87540978988132 - type: cos_sim_spearman value: 90.12715295099461 - type: euclidean_pearson value: 91.61085993525275 - type: euclidean_spearman value: 91.31835942311758 - type: manhattan_pearson value: 91.57500202032934 - type: manhattan_spearman value: 91.1790925526635 - task: type: STS dataset: type: 
mteb/sts22-crosslingual-sts name: MTEB STS22 (en) config: en split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_pearson value: 69.87136205329556 - type: cos_sim_spearman value: 68.6253154635078 - type: euclidean_pearson value: 68.91536015034222 - type: euclidean_spearman value: 67.63744649352542 - type: manhattan_pearson value: 69.2000713045275 - type: manhattan_spearman value: 68.16002901587316 - task: type: STS dataset: type: mteb/stsbenchmark-sts name: MTEB STSBenchmark config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 85.21849551039082 - type: cos_sim_spearman value: 85.6392959372461 - type: euclidean_pearson value: 85.92050852609488 - type: euclidean_spearman value: 85.97205649009734 - type: manhattan_pearson value: 86.1031154802254 - type: manhattan_spearman value: 86.26791155517466 - task: type: Reranking dataset: type: mteb/scidocs-reranking name: MTEB SciDocsRR config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 86.83953958636627 - type: mrr value: 96.71167612344082 - task: type: Retrieval dataset: type: scifact name: MTEB SciFact config: default split: test revision: None metrics: - type: map_at_1 value: 64.994 - type: map_at_10 value: 74.763 - type: map_at_100 value: 75.127 - type: map_at_1000 value: 75.143 - type: map_at_3 value: 71.824 - type: map_at_5 value: 73.71 - type: mrr_at_1 value: 68.333 - type: mrr_at_10 value: 75.749 - type: mrr_at_100 value: 75.922 - type: mrr_at_1000 value: 75.938 - type: mrr_at_3 value: 73.556 - type: mrr_at_5 value: 74.739 - type: ndcg_at_1 value: 68.333 - type: ndcg_at_10 value: 79.174 - type: ndcg_at_100 value: 80.41 - type: ndcg_at_1000 value: 80.804 - type: ndcg_at_3 value: 74.361 - type: ndcg_at_5 value: 76.861 - type: precision_at_1 value: 68.333 - type: precision_at_10 value: 10.333 - type: precision_at_100 value: 1.0999999999999999 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 28.778 - type: precision_at_5 value: 19.067 - type: recall_at_1 value: 64.994 - type: recall_at_10 value: 91.822 - type: recall_at_100 value: 97.0 - type: recall_at_1000 value: 100.0 - type: recall_at_3 value: 78.878 - type: recall_at_5 value: 85.172 - task: type: PairClassification dataset: type: mteb/sprintduplicatequestions-pairclassification name: MTEB SprintDuplicateQuestions config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.72079207920792 - type: cos_sim_ap value: 93.00265215525152 - type: cos_sim_f1 value: 85.06596306068602 - type: cos_sim_precision value: 90.05586592178771 - type: cos_sim_recall value: 80.60000000000001 - type: dot_accuracy value: 99.66039603960397 - type: dot_ap value: 91.22371407479089 - type: dot_f1 value: 82.34693877551021 - type: dot_precision value: 84.0625 - type: dot_recall value: 80.7 - type: euclidean_accuracy value: 99.71881188118812 - type: euclidean_ap value: 92.88449963304728 - type: euclidean_f1 value: 85.19480519480518 - type: euclidean_precision value: 88.64864864864866 - type: euclidean_recall value: 82.0 - type: manhattan_accuracy value: 99.73267326732673 - type: manhattan_ap value: 93.23055393056883 - type: manhattan_f1 value: 85.88957055214725 - type: manhattan_precision value: 87.86610878661088 - type: manhattan_recall value: 84.0 - type: max_accuracy value: 99.73267326732673 - type: max_ap value: 93.23055393056883 - type: max_f1 value: 85.88957055214725 
- task: type: Clustering dataset: type: mteb/stackexchange-clustering name: MTEB StackExchangeClustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 77.3305735900358 - task: type: Clustering dataset: type: mteb/stackexchange-clustering-p2p name: MTEB StackExchangeClusteringP2P config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 41.32967136540674 - task: type: Reranking dataset: type: mteb/stackoverflowdupquestions-reranking name: MTEB StackOverflowDupQuestions config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 55.95514866379359 - type: mrr value: 56.95423245055598 - task: type: Summarization dataset: type: mteb/summeval name: MTEB SummEval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 30.783007208997144 - type: cos_sim_spearman value: 30.373444721540533 - type: dot_pearson value: 29.210604111143905 - type: dot_spearman value: 29.98809758085659 - task: type: Retrieval dataset: type: trec-covid name: MTEB TRECCOVID config: default split: test revision: None metrics: - type: map_at_1 value: 0.234 - type: map_at_10 value: 1.894 - type: map_at_100 value: 1.894 - type: map_at_1000 value: 1.894 - type: map_at_3 value: 0.636 - type: map_at_5 value: 1.0 - type: mrr_at_1 value: 88.0 - type: mrr_at_10 value: 93.667 - type: mrr_at_100 value: 93.667 - type: mrr_at_1000 value: 93.667 - type: mrr_at_3 value: 93.667 - type: mrr_at_5 value: 93.667 - type: ndcg_at_1 value: 85.0 - type: ndcg_at_10 value: 74.798 - type: ndcg_at_100 value: 16.462 - type: ndcg_at_1000 value: 7.0889999999999995 - type: ndcg_at_3 value: 80.754 - type: ndcg_at_5 value: 77.319 - type: precision_at_1 value: 88.0 - type: precision_at_10 value: 78.0 - type: precision_at_100 value: 7.8 - type: precision_at_1000 value: 0.7799999999999999 - type: precision_at_3 value: 83.333 - type: precision_at_5 value: 80.80000000000001 - type: recall_at_1 value: 0.234 - type: recall_at_10 value: 2.093 - type: recall_at_100 value: 2.093 - type: recall_at_1000 value: 2.093 - type: recall_at_3 value: 0.662 - type: recall_at_5 value: 1.0739999999999998 - task: type: Retrieval dataset: type: webis-touche2020 name: MTEB Touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 2.703 - type: map_at_10 value: 10.866000000000001 - type: map_at_100 value: 10.866000000000001 - type: map_at_1000 value: 10.866000000000001 - type: map_at_3 value: 5.909 - type: map_at_5 value: 7.35 - type: mrr_at_1 value: 36.735 - type: mrr_at_10 value: 53.583000000000006 - type: mrr_at_100 value: 53.583000000000006 - type: mrr_at_1000 value: 53.583000000000006 - type: mrr_at_3 value: 49.32 - type: mrr_at_5 value: 51.769 - type: ndcg_at_1 value: 34.694 - type: ndcg_at_10 value: 27.926000000000002 - type: ndcg_at_100 value: 22.701 - type: ndcg_at_1000 value: 22.701 - type: ndcg_at_3 value: 32.073 - type: ndcg_at_5 value: 28.327999999999996 - type: precision_at_1 value: 36.735 - type: precision_at_10 value: 24.694 - type: precision_at_100 value: 2.469 - type: precision_at_1000 value: 0.247 - type: precision_at_3 value: 31.973000000000003 - type: precision_at_5 value: 26.939 - type: recall_at_1 value: 2.703 - type: recall_at_10 value: 17.702 - type: recall_at_100 value: 17.702 - type: recall_at_1000 value: 17.702 - type: recall_at_3 value: 7.208 - type: recall_at_5 value: 9.748999999999999 - task: type: 
Classification dataset: type: mteb/toxic_conversations_50k name: MTEB ToxicConversationsClassification config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 70.79960000000001 - type: ap value: 15.467565415565815 - type: f1 value: 55.28639823443618 - task: type: Classification dataset: type: mteb/tweet_sentiment_extraction name: MTEB TweetSentimentExtractionClassification config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 64.7792869269949 - type: f1 value: 65.08597154774318 - task: type: Clustering dataset: type: mteb/twentynewsgroups-clustering name: MTEB TwentyNewsgroupsClustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 55.70352297774293 - task: type: PairClassification dataset: type: mteb/twittersemeval2015-pairclassification name: MTEB TwitterSemEval2015 config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 88.27561542588067 - type: cos_sim_ap value: 81.08262141256193 - type: cos_sim_f1 value: 73.82341501361338 - type: cos_sim_precision value: 72.5720112159062 - type: cos_sim_recall value: 75.11873350923483 - type: dot_accuracy value: 86.66030875603504 - type: dot_ap value: 76.6052349228621 - type: dot_f1 value: 70.13897280966768 - type: dot_precision value: 64.70457079152732 - type: dot_recall value: 76.56992084432717 - type: euclidean_accuracy value: 88.37098408535495 - type: euclidean_ap value: 81.12515230092113 - type: euclidean_f1 value: 74.10338225909379 - type: euclidean_precision value: 71.76761433868974 - type: euclidean_recall value: 76.59630606860158 - type: manhattan_accuracy value: 88.34118137926924 - type: manhattan_ap value: 80.95751834536561 - type: manhattan_f1 value: 73.9119496855346 - type: manhattan_precision value: 70.625 - type: manhattan_recall value: 77.5197889182058 - type: max_accuracy value: 88.37098408535495 - type: max_ap value: 81.12515230092113 - type: max_f1 value: 74.10338225909379 - task: type: PairClassification dataset: type: mteb/twitterurlcorpus-pairclassification name: MTEB TwitterURLCorpus config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 89.79896767182831 - type: cos_sim_ap value: 87.40071784061065 - type: cos_sim_f1 value: 79.87753144712087 - type: cos_sim_precision value: 76.67304015296367 - type: cos_sim_recall value: 83.3615645210964 - type: dot_accuracy value: 88.95486474948578 - type: dot_ap value: 86.00227979119943 - type: dot_f1 value: 78.54601474525914 - type: dot_precision value: 75.00525394045535 - type: dot_recall value: 82.43763473975977 - type: euclidean_accuracy value: 89.7892653393876 - type: euclidean_ap value: 87.42174706480819 - type: euclidean_f1 value: 80.07283321194465 - type: euclidean_precision value: 75.96738529574351 - type: euclidean_recall value: 84.6473668001232 - type: manhattan_accuracy value: 89.8474793340319 - type: manhattan_ap value: 87.47814292587448 - type: manhattan_f1 value: 80.15461150280949 - type: manhattan_precision value: 74.88798234468 - type: manhattan_recall value: 86.21804742839544 - type: max_accuracy value: 89.8474793340319 - type: max_ap value: 87.47814292587448 - type: max_f1 value: 80.15461150280949 --- # Model Summary > GritLM is a generative representational instruction tuned language model. 
It unifies text representation (embedding) and text generation into a single model achieving state-of-the-art performance on both types of tasks.

- **Repository:** [ContextualAI/gritlm](https://github.com/ContextualAI/gritlm)
- **Paper:** https://arxiv.org/abs/2402.09906
- **Logs:** https://wandb.ai/muennighoff/gritlm/runs/0uui712t/overview
- **Script:** https://github.com/ContextualAI/gritlm/blob/main/scripts/training/train_gritlm_7b.sh

| Model | Description |
|-------|-------------|
| [GritLM 7B](https://hf.co/GritLM/GritLM-7B) | Mistral 7B finetuned using GRIT |
| [GritLM 8x7B](https://hf.co/GritLM/GritLM-8x7B) | Mixtral 8x7B finetuned using GRIT |

# Use

The model usage is documented [here](https://github.com/ContextualAI/gritlm?tab=readme-ov-file#inference).

# Citation

```bibtex
@misc{muennighoff2024generative,
      title={Generative Representational Instruction Tuning},
      author={Niklas Muennighoff and Hongjin Su and Liang Wang and Nan Yang and Furu Wei and Tao Yu and Amanpreet Singh and Douwe Kiela},
      year={2024},
      eprint={2402.09906},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
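For quick reference, below is a minimal sketch of loading this 8-bit checkpoint through the `transformers` API. The repo is tagged `custom_code`, so `trust_remote_code=True` is included; the `<|user|>`/`<|assistant|>` prompt format follows the GritLM repository, and the prompt text itself is illustrative. The documentation linked above remains the canonical reference, including for embedding use.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# This quantized mirror, not the original GritLM/GritLM-7B repository.
model_id = "RichardErkhov/GritLM_-_GritLM-7B-8bits"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",       # requires `accelerate`
    trust_remote_code=True,  # repo is tagged custom_code
)

# Chat-style prompt format used by GritLM for generation.
prompt = "<|user|>\nWrite a haiku about sentence embeddings.\n<|assistant|>\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```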
{}
RichardErkhov/GritLM_-_GritLM-7B-8bits
null
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "custom_code", "arxiv:2402.09906", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "8-bit", "region:us" ]
null
2024-05-03T17:10:22+00:00
null
null
{"license": "apache-2.0"}
tghurair/TM_BERT
null
[ "license:apache-2.0", "region:us" ]
null
2024-05-03T17:10:26+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# GUE_virus_covid-seqsight_65536_512_47M-L32_f

This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_virus_covid](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_virus_covid) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2499
- F1 Score: 0.5474
- Accuracy: 0.5351

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000

### Training results

| Training Loss | Epoch | Step  | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 2.1844        | 0.35  | 200   | 2.1790          | 0.0846   | 0.1394   |
| 2.1709        | 0.7   | 400   | 2.1527          | 0.1291   | 0.1645   |
| 2.1059        | 1.05  | 600   | 2.0039          | 0.2082   | 0.2292   |
| 1.9838        | 1.4   | 800   | 1.8829          | 0.2394   | 0.2690   |
| 1.8962        | 1.75  | 1000  | 1.8143          | 0.2801   | 0.2901   |
| 1.8521        | 2.09  | 1200  | 1.7949          | 0.2914   | 0.3109   |
| 1.8118        | 2.44  | 1400  | 1.7158          | 0.3344   | 0.3469   |
| 1.7736        | 2.79  | 1600  | 1.6691          | 0.3469   | 0.3676   |
| 1.7313        | 3.14  | 1800  | 1.6238          | 0.3670   | 0.3842   |
| 1.6996        | 3.49  | 2000  | 1.6052          | 0.3787   | 0.3851   |
| 1.6784        | 3.84  | 2200  | 1.5802          | 0.3885   | 0.3936   |
| 1.6454        | 4.19  | 2400  | 1.5568          | 0.3987   | 0.3977   |
| 1.6235        | 4.54  | 2600  | 1.5292          | 0.4096   | 0.4159   |
| 1.6131        | 4.89  | 2800  | 1.5245          | 0.4141   | 0.4209   |
| 1.5953        | 5.24  | 3000  | 1.4982          | 0.4287   | 0.4345   |
| 1.5705        | 5.58  | 3200  | 1.4806          | 0.4525   | 0.4462   |
| 1.5505        | 5.93  | 3400  | 1.4619          | 0.4395   | 0.4443   |
| 1.5402        | 6.28  | 3600  | 1.4492          | 0.4668   | 0.4506   |
| 1.5208        | 6.63  | 3800  | 1.4306          | 0.4571   | 0.4609   |
| 1.5061        | 6.98  | 4000  | 1.4279          | 0.4644   | 0.4623   |
| 1.4925        | 7.33  | 4200  | 1.4147          | 0.4805   | 0.4701   |
| 1.4772        | 7.68  | 4400  | 1.4055          | 0.4787   | 0.4696   |
| 1.4782        | 8.03  | 4600  | 1.3983          | 0.4738   | 0.4700   |
| 1.4524        | 8.38  | 4800  | 1.3893          | 0.4867   | 0.4829   |
| 1.4546        | 8.73  | 5000  | 1.3800          | 0.4816   | 0.4738   |
| 1.4394        | 9.08  | 5200  | 1.3782          | 0.4942   | 0.4775   |
| 1.4326        | 9.42  | 5400  | 1.3631          | 0.4857   | 0.4853   |
| 1.4264        | 9.77  | 5600  | 1.3457          | 0.4992   | 0.4932   |
| 1.4145        | 10.12 | 5800  | 1.3439          | 0.5071   | 0.4976   |
| 1.4115        | 10.47 | 6000  | 1.3366          | 0.5073   | 0.4972   |
| 1.3942        | 10.82 | 6200  | 1.3286          | 0.5113   | 0.4964   |
| 1.3797        | 11.17 | 6400  | 1.3205          | 0.5109   | 0.5029   |
| 1.3778        | 11.52 | 6600  | 1.3173          | 0.5186   | 0.5041   |
| 1.3805        | 11.87 | 6800  | 1.3090          | 0.5161   | 0.5040   |
| 1.3645        | 12.22 | 7000  | 1.3017          | 0.5267   | 0.5171   |
| 1.3628        | 12.57 | 7200  | 1.3015          | 0.5149   | 0.5061   |
| 1.3597        | 12.91 | 7400  | 1.2982          | 0.5236   | 0.5075   |
| 1.3554        | 13.26 | 7600  | 1.2894          | 0.5229   | 0.5130   |
| 1.3392        | 13.61 | 7800  | 1.2850          | 0.5326   | 0.5183   |
| 1.3441        | 13.96 | 8000  | 1.2806          | 0.5313   | 0.5182   |
| 1.3317        | 14.31 | 8200  | 1.2782          | 0.5332   | 0.5193   |
| 1.3369        | 14.66 | 8400  | 1.2731          | 0.5326   | 0.5220   |
| 1.3337        | 15.01 | 8600  | 1.2732          | 0.5297   | 0.5226   |
| 1.3336        | 15.36 | 8800  | 1.2696          | 0.5409   | 0.5279   |
| 1.3161        | 15.71 | 9000  | 1.2714          | 0.5357   | 0.5248   |
| 1.3329        | 16.06 | 9200  | 1.2696          | 0.5347   | 0.5242   |
| 1.3261        | 16.4  | 9400  | 1.2665          | 0.5363   | 0.5287   |
| 1.3228        | 16.75 | 9600  | 1.2668          | 0.5374   | 0.5252   |
| 1.3258        | 17.1  | 9800  | 1.2662          | 0.5395   | 0.5280   |
| 1.3289        | 17.45 | 10000 | 1.2655          | 0.5392   | 0.5277   |

### Framework versions

- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
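The card stops at training details, so the following is a minimal usage sketch. It assumes the adapter follows the standard PEFT layout and that the seqsight base model loads through a sequence-classification head with `trust_remote_code=True` (the base models are custom); the DNA sequence shown is illustrative, not from the GUE benchmark.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_65536_512_47M"
adapter_id = "mahdibaghbanzadeh/GUE_virus_covid-seqsight_65536_512_47M-L32_f"

# Load the custom base model, then attach the fine-tuned adapter weights.
tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForSequenceClassification.from_pretrained(base_id, trust_remote_code=True)
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()

inputs = tokenizer("ATGCGTACGTTAGCATCG", return_tensors="pt")  # hypothetical sequence
with torch.no_grad():
    predicted_class = model(**inputs).logits.argmax(dim=-1)
```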
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_virus_covid-seqsight_65536_512_47M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_virus_covid-seqsight_65536_512_47M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_65536_512_47M", "region:us" ]
null
2024-05-03T17:10:57+00:00
token-classification
spacy
| Feature | Description |
| --- | --- |
| **Name** | `en_pipeline` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.7.4,<3.8.0` |
| **Default Pipeline** | `transformer`, `ner` |
| **Components** | `transformer`, `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | [n/a]() |

### Label Scheme

<details>
<summary>View label scheme (11 labels for 1 components)</summary>

| Component | Labels |
| --- | --- |
| **`ner`** | `College Name`, `Companies worked at`, `Degree`, `Designation`, `Email Address`, `Graduation Year`, `Location`, `Name`, `Skills`, `UNKNOWN`, `Years of Experience` |

</details>

### Accuracy

| Type | Score |
| --- | --- |
| `ENTS_F` | 57.19 |
| `ENTS_P` | 60.75 |
| `ENTS_R` | 54.02 |
| `TRANSFORMER_LOSS` | 480458.92 |
| `NER_LOSS` | 1538225.13 |
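The card gives no usage snippet; a minimal sketch follows, assuming the pipeline has been packaged and installed so that `en_pipeline` resolves as a package name (otherwise pass the path to the pipeline directory). The sample sentence is illustrative.

```python
import spacy

# Load the installed pipeline package (or a path to the pipeline directory).
nlp = spacy.load("en_pipeline")

doc = nlp("Jane Doe, Senior Data Scientist at Acme Corp, B.Sc. Computer Science, graduated 2019.")
for ent in doc.ents:
    # Entities use the resume-oriented label scheme listed above,
    # e.g. Name, Designation, Companies worked at, Degree, Graduation Year.
    print(ent.text, "->", ent.label_)
```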
{"language": ["en"], "tags": ["spacy", "token-classification"]}
prof144/en_pipeline
null
[ "spacy", "token-classification", "en", "model-index", "region:us" ]
null
2024-05-03T17:10:59+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# GUE_virus_covid-seqsight_65536_512_47M-L8_f

This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_virus_covid](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_virus_covid) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4679
- F1 Score: 0.4607
- Accuracy: 0.4564

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000

### Training results

| Training Loss | Epoch | Step  | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 2.1848        | 0.35  | 200   | 2.1827          | 0.0810   | 0.1389   |
| 2.1771        | 0.7   | 400   | 2.1699          | 0.1086   | 0.1417   |
| 2.1622        | 1.05  | 600   | 2.1455          | 0.1413   | 0.1689   |
| 2.1255        | 1.4   | 800   | 2.0468          | 0.1962   | 0.2244   |
| 2.0333        | 1.75  | 1000  | 1.9384          | 0.2227   | 0.2599   |
| 1.9701        | 2.09  | 1200  | 1.8946          | 0.2381   | 0.2709   |
| 1.9252        | 2.44  | 1400  | 1.8326          | 0.2954   | 0.3059   |
| 1.8922        | 2.79  | 1600  | 1.7987          | 0.2895   | 0.3096   |
| 1.8635        | 3.14  | 1800  | 1.7673          | 0.2891   | 0.3158   |
| 1.8315        | 3.49  | 2000  | 1.7372          | 0.3193   | 0.3337   |
| 1.8216        | 3.84  | 2200  | 1.7192          | 0.3306   | 0.3421   |
| 1.7967        | 4.19  | 2400  | 1.6913          | 0.3757   | 0.3702   |
| 1.7827        | 4.54  | 2600  | 1.6755          | 0.3676   | 0.3740   |
| 1.7731        | 4.89  | 2800  | 1.6627          | 0.3750   | 0.3800   |
| 1.7636        | 5.24  | 3000  | 1.6477          | 0.3810   | 0.3869   |
| 1.7467        | 5.58  | 3200  | 1.6318          | 0.3942   | 0.3981   |
| 1.7298        | 5.93  | 3400  | 1.6335          | 0.3720   | 0.3806   |
| 1.7237        | 6.28  | 3600  | 1.6197          | 0.3891   | 0.3901   |
| 1.7099        | 6.63  | 3800  | 1.5950          | 0.4002   | 0.4086   |
| 1.6962        | 6.98  | 4000  | 1.5889          | 0.4084   | 0.4094   |
| 1.6871        | 7.33  | 4200  | 1.5824          | 0.4067   | 0.4116   |
| 1.6794        | 7.68  | 4400  | 1.5680          | 0.4223   | 0.4211   |
| 1.6816        | 8.03  | 4600  | 1.5706          | 0.4142   | 0.4126   |
| 1.6575        | 8.38  | 4800  | 1.5548          | 0.4110   | 0.4181   |
| 1.6688        | 8.73  | 5000  | 1.5507          | 0.4238   | 0.4271   |
| 1.6537        | 9.08  | 5200  | 1.5434          | 0.4284   | 0.4202   |
| 1.6549        | 9.42  | 5400  | 1.5424          | 0.4228   | 0.4244   |
| 1.6383        | 9.77  | 5600  | 1.5232          | 0.4264   | 0.4319   |
| 1.6347        | 10.12 | 5800  | 1.5260          | 0.4333   | 0.4294   |
| 1.6299        | 10.47 | 6000  | 1.5217          | 0.4366   | 0.4297   |
| 1.6276        | 10.82 | 6200  | 1.5146          | 0.4402   | 0.4307   |
| 1.6149        | 11.17 | 6400  | 1.5198          | 0.4366   | 0.4309   |
| 1.6118        | 11.52 | 6600  | 1.5046          | 0.4404   | 0.4319   |
| 1.6157        | 11.87 | 6800  | 1.5022          | 0.4437   | 0.4384   |
| 1.6018        | 12.22 | 7000  | 1.4951          | 0.4450   | 0.4370   |
| 1.5977        | 12.57 | 7200  | 1.4887          | 0.4440   | 0.4403   |
| 1.5986        | 12.91 | 7400  | 1.4909          | 0.4491   | 0.4399   |
| 1.5961        | 13.26 | 7600  | 1.4830          | 0.4442   | 0.4374   |
| 1.5912        | 13.61 | 7800  | 1.4843          | 0.4468   | 0.4355   |
| 1.585         | 13.96 | 8000  | 1.4802          | 0.4520   | 0.4471   |
| 1.5771        | 14.31 | 8200  | 1.4751          | 0.4550   | 0.4488   |
| 1.584         | 14.66 | 8400  | 1.4684          | 0.4564   | 0.4475   |
| 1.5823        | 15.01 | 8600  | 1.4734          | 0.4526   | 0.4475   |
| 1.593         | 15.36 | 8800  | 1.4694          | 0.4581   | 0.4493   |
| 1.5742        | 15.71 | 9000  | 1.4690          | 0.4541   | 0.4465   |
| 1.5807        | 16.06 | 9200  | 1.4676          | 0.4575   | 0.4491   |
| 1.5771        | 16.4  | 9400  | 1.4680          | 0.4541   | 0.4472   |
| 1.5728        | 16.75 | 9600  | 1.4663          | 0.4590   | 0.4504   |
| 1.5805        | 17.1  | 9800  | 1.4656          | 0.4607   | 0.4529   |
| 1.5809        | 17.45 | 10000 | 1.4646          | 0.4602   | 0.4528   |

### Framework versions

- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
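The hyperparameters listed above map onto a `transformers` `TrainingArguments` object roughly as follows. This is a sketch under the assumption that the standard `Trainer` was used; the actual training script is not published with this card, and the `output_dir` name is illustrative.

```python
from transformers import TrainingArguments

# Sketch of the hyperparameters listed above; values are copied from the card.
args = TrainingArguments(
    output_dir="GUE_virus_covid-seqsight_65536_512_47M-L8_f",  # illustrative
    learning_rate=5e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    max_steps=10_000,  # "training_steps: 10000"
)
```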
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_virus_covid-seqsight_65536_512_47M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_virus_covid-seqsight_65536_512_47M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_65536_512_47M", "region:us" ]
null
2024-05-03T17:11:02+00:00
null
null
{}
Litzy619/Phi0503TEST
null
[ "region:us" ]
null
2024-05-03T17:11:09+00:00
null
null
{}
Vsk190703/api
null
[ "region:us" ]
null
2024-05-03T17:11:19+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# GUE_prom_prom_300_tata-seqsight_4096_512_15M-L1_f

This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_tata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_tata) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4633
- F1 Score: 0.8019
- Accuracy: 0.8026

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000

### Training results

| Training Loss | Epoch  | Step  | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6143        | 5.13   | 200   | 0.5429          | 0.7210   | 0.7308   |
| 0.4939        | 10.26  | 400   | 0.4784          | 0.7652   | 0.7651   |
| 0.4597        | 15.38  | 600   | 0.4630          | 0.7815   | 0.7814   |
| 0.4424        | 20.51  | 800   | 0.4467          | 0.7942   | 0.7945   |
| 0.429         | 25.64  | 1000  | 0.4458          | 0.7912   | 0.7912   |
| 0.4193        | 30.77  | 1200  | 0.4435          | 0.8027   | 0.8026   |
| 0.4116        | 35.9   | 1400  | 0.4424          | 0.7992   | 0.7993   |
| 0.4052        | 41.03  | 1600  | 0.4467          | 0.7970   | 0.7977   |
| 0.4001        | 46.15  | 1800  | 0.4446          | 0.7945   | 0.7945   |
| 0.3919        | 51.28  | 2000  | 0.4406          | 0.8009   | 0.8010   |
| 0.3874        | 56.41  | 2200  | 0.4495          | 0.8066   | 0.8075   |
| 0.3804        | 61.54  | 2400  | 0.4465          | 0.8024   | 0.8026   |
| 0.3733        | 66.67  | 2600  | 0.4572          | 0.8039   | 0.8042   |
| 0.3732        | 71.79  | 2800  | 0.4552          | 0.8054   | 0.8059   |
| 0.3721        | 76.92  | 3000  | 0.4549          | 0.7799   | 0.7798   |
| 0.3669        | 82.05  | 3200  | 0.4633          | 0.7893   | 0.7896   |
| 0.3633        | 87.18  | 3400  | 0.4594          | 0.7881   | 0.7879   |
| 0.3587        | 92.31  | 3600  | 0.4601          | 0.7993   | 0.7993   |
| 0.3569        | 97.44  | 3800  | 0.4608          | 0.7961   | 0.7961   |
| 0.3474        | 102.56 | 4000  | 0.4729          | 0.7912   | 0.7912   |
| 0.3523        | 107.69 | 4200  | 0.4651          | 0.7929   | 0.7928   |
| 0.3502        | 112.82 | 4400  | 0.4641          | 0.7896   | 0.7896   |
| 0.3427        | 117.95 | 4600  | 0.4727          | 0.7896   | 0.7896   |
| 0.3428        | 123.08 | 4800  | 0.4731          | 0.7946   | 0.7945   |
| 0.3407        | 128.21 | 5000  | 0.4764          | 0.7927   | 0.7928   |
| 0.3418        | 133.33 | 5200  | 0.4797          | 0.7893   | 0.7896   |
| 0.3346        | 138.46 | 5400  | 0.4938          | 0.7925   | 0.7928   |
| 0.3348        | 143.59 | 5600  | 0.4862          | 0.7957   | 0.7961   |
| 0.3364        | 148.72 | 5800  | 0.4881          | 0.7908   | 0.7912   |
| 0.3329        | 153.85 | 6000  | 0.4877          | 0.7860   | 0.7863   |
| 0.3306        | 158.97 | 6200  | 0.4849          | 0.7878   | 0.7879   |
| 0.3292        | 164.1  | 6400  | 0.4915          | 0.7939   | 0.7945   |
| 0.3262        | 169.23 | 6600  | 0.4810          | 0.7863   | 0.7863   |
| 0.3294        | 174.36 | 6800  | 0.4848          | 0.7911   | 0.7912   |
| 0.3258        | 179.49 | 7000  | 0.4976          | 0.7908   | 0.7912   |
| 0.3258        | 184.62 | 7200  | 0.5007          | 0.7986   | 0.7993   |
| 0.3236        | 189.74 | 7400  | 0.4985          | 0.7878   | 0.7879   |
| 0.3199        | 194.87 | 7600  | 0.5001          | 0.7878   | 0.7879   |
| 0.3197        | 200.0  | 7800  | 0.5024          | 0.7876   | 0.7879   |
| 0.3227        | 205.13 | 8000  | 0.4944          | 0.7877   | 0.7879   |
| 0.3174        | 210.26 | 8200  | 0.4960          | 0.7863   | 0.7863   |
| 0.3199        | 215.38 | 8400  | 0.4989          | 0.7862   | 0.7863   |
| 0.3156        | 220.51 | 8600  | 0.5035          | 0.7893   | 0.7896   |
| 0.3171        | 225.64 | 8800  | 0.5018          | 0.7879   | 0.7879   |
| 0.3179        | 230.77 | 9000  | 0.5001          | 0.7895   | 0.7896   |
| 0.3152        | 235.9  | 9200  | 0.4989          | 0.7895   | 0.7896   |
| 0.3189        | 241.03 | 9400  | 0.5018          | 0.7911   | 0.7912   |
| 0.3144        | 246.15 | 9600  | 0.5024          | 0.7895   | 0.7896   |
| 0.3203        | 251.28 | 9800  | 0.5003          | 0.7895   | 0.7896   |
| 0.3167        | 256.41 | 10000 | 0.5005          | 0.7895   | 0.7896   |

### Framework versions

- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_prom_prom_300_tata-seqsight_4096_512_15M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_prom_prom_300_tata-seqsight_4096_512_15M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_4096_512_15M", "region:us" ]
null
2024-05-03T17:11:27+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_prom_prom_300_tata-seqsight_4096_512_15M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_tata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_tata) dataset. It achieves the following results on the evaluation set: - Loss: 0.4525 - F1 Score: 0.8104 - Accuracy: 0.8108 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.5575 | 5.13 | 200 | 0.4922 | 0.7656 | 0.7700 | | 0.4488 | 10.26 | 400 | 0.4478 | 0.7979 | 0.7977 | | 0.4178 | 15.38 | 600 | 0.4400 | 0.8073 | 0.8075 | | 0.399 | 20.51 | 800 | 0.4380 | 0.7994 | 0.7993 | | 0.3815 | 25.64 | 1000 | 0.4497 | 0.8044 | 0.8042 | | 0.3664 | 30.77 | 1200 | 0.4466 | 0.8008 | 0.8010 | | 0.3513 | 35.9 | 1400 | 0.4605 | 0.8009 | 0.8010 | | 0.3398 | 41.03 | 1600 | 0.4883 | 0.8029 | 0.8042 | | 0.3316 | 46.15 | 1800 | 0.4697 | 0.7992 | 0.7993 | | 0.3172 | 51.28 | 2000 | 0.4807 | 0.7990 | 0.7993 | | 0.3078 | 56.41 | 2200 | 0.4928 | 0.8010 | 0.8010 | | 0.2977 | 61.54 | 2400 | 0.4936 | 0.8027 | 0.8026 | | 0.2837 | 66.67 | 2600 | 0.5377 | 0.7967 | 0.7977 | | 0.28 | 71.79 | 2800 | 0.5258 | 0.7924 | 0.7928 | | 0.2724 | 76.92 | 3000 | 0.5418 | 0.7943 | 0.7945 | | 0.2668 | 82.05 | 3200 | 0.5509 | 0.7865 | 0.7879 | | 0.256 | 87.18 | 3400 | 0.5541 | 0.8010 | 0.8010 | | 0.2487 | 92.31 | 3600 | 0.5716 | 0.7987 | 0.7993 | | 0.2461 | 97.44 | 3800 | 0.5703 | 0.7832 | 0.7847 | | 0.2357 | 102.56 | 4000 | 0.5745 | 0.7926 | 0.7928 | | 0.2345 | 107.69 | 4200 | 0.5881 | 0.7893 | 0.7896 | | 0.2332 | 112.82 | 4400 | 0.5964 | 0.7787 | 0.7798 | | 0.2222 | 117.95 | 4600 | 0.6121 | 0.7961 | 0.7961 | | 0.2141 | 123.08 | 4800 | 0.6155 | 0.7897 | 0.7896 | | 0.2133 | 128.21 | 5000 | 0.6218 | 0.7945 | 0.7945 | | 0.2121 | 133.33 | 5200 | 0.6485 | 0.7872 | 0.7879 | | 0.2051 | 138.46 | 5400 | 0.6307 | 0.7910 | 0.7912 | | 0.1996 | 143.59 | 5600 | 0.6425 | 0.7929 | 0.7928 | | 0.1976 | 148.72 | 5800 | 0.6696 | 0.7994 | 0.7993 | | 0.1967 | 153.85 | 6000 | 0.6575 | 0.7873 | 0.7879 | | 0.1901 | 158.97 | 6200 | 0.6697 | 0.7816 | 0.7814 | | 0.1896 | 164.1 | 6400 | 0.6617 | 0.7943 | 0.7945 | | 0.1824 | 169.23 | 6600 | 0.6753 | 0.7977 | 0.7977 | | 0.1858 | 174.36 | 6800 | 0.6642 | 0.7959 | 0.7961 | | 0.1762 | 179.49 | 7000 | 0.6973 | 0.7942 | 0.7945 | | 0.1769 | 184.62 | 7200 | 0.7137 | 0.7921 | 0.7928 | | 0.1769 | 189.74 | 7400 | 0.7157 | 0.7911 | 0.7912 | | 0.1709 | 194.87 | 7600 | 0.7214 | 0.7878 | 0.7879 | | 0.1749 | 200.0 | 7800 | 0.7159 | 0.7894 | 0.7896 | | 0.1717 | 205.13 | 8000 | 0.7236 | 0.7863 | 0.7863 | | 0.1698 | 210.26 | 8200 | 0.7168 | 0.7911 | 0.7912 | | 0.1669 | 215.38 | 8400 | 0.7280 | 0.7862 | 0.7863 | | 0.1685 | 220.51 | 8600 | 0.7279 | 0.7843 | 0.7847 | | 0.1626 | 
225.64 | 8800 | 0.7365 | 0.7895 | 0.7896 | | 0.1678 | 230.77 | 9000 | 0.7328 | 0.7895 | 0.7896 | | 0.1628 | 235.9 | 9200 | 0.7431 | 0.7912 | 0.7912 | | 0.1676 | 241.03 | 9400 | 0.7286 | 0.7877 | 0.7879 | | 0.1602 | 246.15 | 9600 | 0.7438 | 0.7844 | 0.7847 | | 0.1668 | 251.28 | 9800 | 0.7388 | 0.7894 | 0.7896 | | 0.1609 | 256.41 | 10000 | 0.7400 | 0.7894 | 0.7896 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
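Below is a minimal inference sketch for this adapter, assuming the seqsight base model is compatible with `AutoModelForSequenceClassification` and that the promoter task is binary (`num_labels=2`); the exact preprocessing (e.g. k-mer tokenization) should be verified against the base repository.

```python
# Sketch: attach this LoRA adapter to its base model and run one prediction.
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel

base_id = "mahdibaghbanzadeh/seqsight_4096_512_15M"
adapter_id = "mahdibaghbanzadeh/GUE_prom_prom_300_tata-seqsight_4096_512_15M-L8_f"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForSequenceClassification.from_pretrained(
    base_id, num_labels=2, trust_remote_code=True  # binary labels: assumption
)
model = PeftModel.from_pretrained(base, adapter_id)  # load the adapter weights
model.eval()

inputs = tokenizer("ACGTACGTACGT", return_tensors="pt")
logits = model(**inputs).logits  # shape (1, 2) under the binary-label assumption
```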
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_prom_prom_300_tata-seqsight_4096_512_15M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_prom_prom_300_tata-seqsight_4096_512_15M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_4096_512_15M", "region:us" ]
null
2024-05-03T17:11:52+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_prom_prom_300_tata-seqsight_4096_512_15M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_tata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_tata) dataset. It achieves the following results on the evaluation set: - Loss: 0.4529 - F1 Score: 0.8189 - Accuracy: 0.8189 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.525 | 5.13 | 200 | 0.4598 | 0.7888 | 0.7912 | | 0.4258 | 10.26 | 400 | 0.4665 | 0.7872 | 0.7879 | | 0.3858 | 15.38 | 600 | 0.4548 | 0.7944 | 0.7945 | | 0.3567 | 20.51 | 800 | 0.4949 | 0.7881 | 0.7896 | | 0.3289 | 25.64 | 1000 | 0.4710 | 0.7994 | 0.7993 | | 0.3034 | 30.77 | 1200 | 0.4920 | 0.7770 | 0.7781 | | 0.2771 | 35.9 | 1400 | 0.5520 | 0.7913 | 0.7912 | | 0.2585 | 41.03 | 1600 | 0.5391 | 0.7790 | 0.7798 | | 0.2376 | 46.15 | 1800 | 0.5667 | 0.7839 | 0.7847 | | 0.2163 | 51.28 | 2000 | 0.6376 | 0.7881 | 0.7879 | | 0.2005 | 56.41 | 2200 | 0.6994 | 0.7927 | 0.7928 | | 0.1804 | 61.54 | 2400 | 0.7399 | 0.7848 | 0.7847 | | 0.1633 | 66.67 | 2600 | 0.8005 | 0.7694 | 0.7700 | | 0.1578 | 71.79 | 2800 | 0.8019 | 0.7723 | 0.7732 | | 0.1423 | 76.92 | 3000 | 0.8350 | 0.7480 | 0.7488 | | 0.1326 | 82.05 | 3200 | 0.7942 | 0.7535 | 0.7537 | | 0.1223 | 87.18 | 3400 | 0.9037 | 0.7633 | 0.7635 | | 0.1147 | 92.31 | 3600 | 0.9318 | 0.7583 | 0.7586 | | 0.1092 | 97.44 | 3800 | 0.9013 | 0.7675 | 0.7684 | | 0.1027 | 102.56 | 4000 | 0.9575 | 0.7649 | 0.7651 | | 0.0978 | 107.69 | 4200 | 0.9630 | 0.7778 | 0.7781 | | 0.0934 | 112.82 | 4400 | 0.9373 | 0.7747 | 0.7749 | | 0.0825 | 117.95 | 4600 | 1.0492 | 0.7668 | 0.7667 | | 0.083 | 123.08 | 4800 | 1.1142 | 0.7633 | 0.7635 | | 0.0819 | 128.21 | 5000 | 1.0054 | 0.7750 | 0.7749 | | 0.0807 | 133.33 | 5200 | 1.0625 | 0.7741 | 0.7749 | | 0.0732 | 138.46 | 5400 | 1.0712 | 0.7658 | 0.7667 | | 0.0694 | 143.59 | 5600 | 1.0530 | 0.7679 | 0.7684 | | 0.0686 | 148.72 | 5800 | 1.0695 | 0.7726 | 0.7732 | | 0.0683 | 153.85 | 6000 | 1.0801 | 0.7783 | 0.7781 | | 0.0603 | 158.97 | 6200 | 1.1614 | 0.7749 | 0.7749 | | 0.0629 | 164.1 | 6400 | 1.0910 | 0.7748 | 0.7749 | | 0.0597 | 169.23 | 6600 | 1.0800 | 0.7764 | 0.7765 | | 0.0612 | 174.36 | 6800 | 1.1113 | 0.7620 | 0.7618 | | 0.0553 | 179.49 | 7000 | 1.1382 | 0.7781 | 0.7781 | | 0.0561 | 184.62 | 7200 | 1.1173 | 0.7763 | 0.7765 | | 0.0553 | 189.74 | 7400 | 1.1237 | 0.7701 | 0.7700 | | 0.0503 | 194.87 | 7600 | 1.1780 | 0.7799 | 0.7798 | | 0.05 | 200.0 | 7800 | 1.2119 | 0.7668 | 0.7667 | | 0.0483 | 205.13 | 8000 | 1.2256 | 0.7733 | 0.7732 | | 0.0485 | 210.26 | 8200 | 1.2152 | 0.7797 | 0.7798 | | 0.0508 | 215.38 | 8400 | 1.1864 | 0.7779 | 0.7781 | | 0.0476 | 220.51 | 8600 | 1.2031 | 0.7857 | 0.7863 | | 0.0453 | 
225.64 | 8800 | 1.2366 | 0.7830 | 0.7830 | | 0.0451 | 230.77 | 9000 | 1.2441 | 0.7782 | 0.7781 | | 0.0443 | 235.9 | 9200 | 1.2473 | 0.7812 | 0.7814 | | 0.0473 | 241.03 | 9400 | 1.2117 | 0.7764 | 0.7765 | | 0.0442 | 246.15 | 9600 | 1.2430 | 0.7846 | 0.7847 | | 0.0428 | 251.28 | 9800 | 1.2550 | 0.7829 | 0.7830 | | 0.0434 | 256.41 | 10000 | 1.2525 | 0.7829 | 0.7830 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
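For readers reproducing the setup, the hyperparameters listed above map onto `transformers.TrainingArguments` roughly as in the sketch below; `output_dir` is a placeholder and the Adam settings shown are the stated values, not read from the training script.

```python
# Sketch: the card's hyperparameters expressed as TrainingArguments.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",                 # placeholder, not from the card
    learning_rate=5e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    lr_scheduler_type="linear",
    max_steps=10_000,                 # "training_steps: 10000"
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```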
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_prom_prom_300_tata-seqsight_4096_512_15M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_prom_prom_300_tata-seqsight_4096_512_15M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_4096_512_15M", "region:us" ]
null
2024-05-03T17:12:26+00:00
text-generation
transformers
{}
duydatnguyen/vi-poem-gpt-neo-generation
null
[ "transformers", "tensorboard", "safetensors", "gpt_neo", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-03T17:13:56+00:00
null
transformers
# Uploaded model - **Developed by:** jirawan-chro - **License:** apache-2.0 - **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
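A minimal inference sketch using Unsloth is shown below; `max_seq_length` is an assumed value (the card does not state one) and a CUDA device is required.

```python
# Sketch: load the LoRA adapter with Unsloth and generate a short completion.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="jirawan-chro/lora_model",  # resolves base model + LoRA weights
    max_seq_length=2048,                   # assumption
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)     # enable Unsloth's inference path

inputs = tokenizer(["Hello, my name is"], return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```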
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
jirawan-chro/lora_model
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-03T17:14:19+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_prom_prom_300_notata-seqsight_4096_512_15M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_notata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_notata) dataset. It achieves the following results on the evaluation set: - Loss: 0.1268 - F1 Score: 0.9512 - Accuracy: 0.9512 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.3546 | 0.6 | 200 | 0.1798 | 0.9257 | 0.9258 | | 0.1903 | 1.2 | 400 | 0.1564 | 0.9370 | 0.9371 | | 0.1734 | 1.81 | 600 | 0.1403 | 0.9463 | 0.9463 | | 0.1569 | 2.41 | 800 | 0.1374 | 0.9468 | 0.9469 | | 0.1515 | 3.01 | 1000 | 0.1296 | 0.9510 | 0.9510 | | 0.1465 | 3.61 | 1200 | 0.1263 | 0.9521 | 0.9521 | | 0.1441 | 4.22 | 1400 | 0.1236 | 0.9529 | 0.9529 | | 0.1372 | 4.82 | 1600 | 0.1209 | 0.9538 | 0.9538 | | 0.1365 | 5.42 | 1800 | 0.1239 | 0.9504 | 0.9504 | | 0.1312 | 6.02 | 2000 | 0.1234 | 0.9520 | 0.9520 | | 0.1302 | 6.63 | 2200 | 0.1171 | 0.9550 | 0.9550 | | 0.1309 | 7.23 | 2400 | 0.1162 | 0.9546 | 0.9546 | | 0.1263 | 7.83 | 2600 | 0.1171 | 0.9533 | 0.9533 | | 0.1285 | 8.43 | 2800 | 0.1189 | 0.9536 | 0.9536 | | 0.1298 | 9.04 | 3000 | 0.1164 | 0.9563 | 0.9563 | | 0.126 | 9.64 | 3200 | 0.1199 | 0.9533 | 0.9533 | | 0.1268 | 10.24 | 3400 | 0.1154 | 0.9585 | 0.9585 | | 0.1243 | 10.84 | 3600 | 0.1142 | 0.9566 | 0.9567 | | 0.1214 | 11.45 | 3800 | 0.1139 | 0.9555 | 0.9555 | | 0.1228 | 12.05 | 4000 | 0.1134 | 0.9576 | 0.9576 | | 0.1223 | 12.65 | 4200 | 0.1137 | 0.9557 | 0.9557 | | 0.1237 | 13.25 | 4400 | 0.1122 | 0.9557 | 0.9557 | | 0.1207 | 13.86 | 4600 | 0.1118 | 0.9563 | 0.9563 | | 0.1225 | 14.46 | 4800 | 0.1122 | 0.9572 | 0.9572 | | 0.1182 | 15.06 | 5000 | 0.1109 | 0.9580 | 0.9580 | | 0.1191 | 15.66 | 5200 | 0.1114 | 0.9565 | 0.9565 | | 0.1218 | 16.27 | 5400 | 0.1102 | 0.9580 | 0.9580 | | 0.1179 | 16.87 | 5600 | 0.1105 | 0.9561 | 0.9561 | | 0.1165 | 17.47 | 5800 | 0.1104 | 0.9563 | 0.9563 | | 0.1219 | 18.07 | 6000 | 0.1094 | 0.9580 | 0.9580 | | 0.1189 | 18.67 | 6200 | 0.1086 | 0.9583 | 0.9584 | | 0.1187 | 19.28 | 6400 | 0.1089 | 0.9576 | 0.9576 | | 0.1128 | 19.88 | 6600 | 0.1102 | 0.9583 | 0.9584 | | 0.1209 | 20.48 | 6800 | 0.1097 | 0.9580 | 0.9580 | | 0.115 | 21.08 | 7000 | 0.1088 | 0.9576 | 0.9576 | | 0.1127 | 21.69 | 7200 | 0.1103 | 0.9565 | 0.9565 | | 0.1147 | 22.29 | 7400 | 0.1116 | 0.9567 | 0.9567 | | 0.1183 | 22.89 | 7600 | 0.1086 | 0.9567 | 0.9567 | | 0.1137 | 23.49 | 7800 | 0.1083 | 0.9576 | 0.9576 | | 0.1158 | 24.1 | 8000 | 0.1084 | 0.9593 | 0.9593 | | 0.1133 | 24.7 | 8200 | 0.1080 | 0.9584 | 0.9584 | | 0.1132 | 25.3 | 8400 | 0.1082 | 0.9580 | 0.9580 | | 0.1129 | 25.9 | 8600 | 0.1081 | 0.9574 | 0.9574 | | 0.1149 | 26.51 | 8800 | 0.1079 | 0.9572 | 
0.9572 | | 0.1137 | 27.11 | 9000 | 0.1075 | 0.9578 | 0.9578 | | 0.1135 | 27.71 | 9200 | 0.1078 | 0.9593 | 0.9593 | | 0.1092 | 28.31 | 9400 | 0.1081 | 0.9589 | 0.9589 | | 0.1183 | 28.92 | 9600 | 0.1074 | 0.9576 | 0.9576 | | 0.11 | 29.52 | 9800 | 0.1076 | 0.9578 | 0.9578 | | 0.1162 | 30.12 | 10000 | 0.1075 | 0.9582 | 0.9582 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
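The F1 score and accuracy reported above can be reproduced from model predictions as sketched below; the labels and predictions are hypothetical, and the macro average is an assumption (the card does not state which averaging was used).

```python
# Sketch: compute the card's two metrics from (hypothetical) predictions.
from sklearn.metrics import accuracy_score, f1_score

y_true = [0, 1, 1, 0, 1]   # hypothetical ground-truth labels
y_pred = [0, 1, 0, 0, 1]   # hypothetical model predictions

print("accuracy:", accuracy_score(y_true, y_pred))
print("f1:", f1_score(y_true, y_pred, average="macro"))  # averaging: assumption
```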
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_prom_prom_300_notata-seqsight_4096_512_15M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_prom_prom_300_notata-seqsight_4096_512_15M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_4096_512_15M", "region:us" ]
null
2024-05-03T17:15:04+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_prom_prom_core_all-seqsight_4096_512_15M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_all](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_all) dataset. It achieves the following results on the evaluation set: - Loss: 0.4321 - F1 Score: 0.7993 - Accuracy: 0.7993 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.6099 | 0.54 | 200 | 0.5123 | 0.7516 | 0.7517 | | 0.5027 | 1.08 | 400 | 0.4814 | 0.7731 | 0.7735 | | 0.4759 | 1.62 | 600 | 0.4652 | 0.7825 | 0.7826 | | 0.466 | 2.16 | 800 | 0.4640 | 0.7823 | 0.7823 | | 0.4622 | 2.7 | 1000 | 0.4578 | 0.7863 | 0.7863 | | 0.459 | 3.24 | 1200 | 0.4556 | 0.7879 | 0.7880 | | 0.4549 | 3.78 | 1400 | 0.4535 | 0.7881 | 0.7882 | | 0.452 | 4.32 | 1600 | 0.4580 | 0.7896 | 0.7897 | | 0.4519 | 4.86 | 1800 | 0.4566 | 0.7884 | 0.7887 | | 0.4485 | 5.41 | 2000 | 0.4530 | 0.7923 | 0.7924 | | 0.444 | 5.95 | 2200 | 0.4512 | 0.7893 | 0.7894 | | 0.4465 | 6.49 | 2400 | 0.4478 | 0.7910 | 0.7910 | | 0.4425 | 7.03 | 2600 | 0.4493 | 0.7905 | 0.7909 | | 0.4443 | 7.57 | 2800 | 0.4481 | 0.7972 | 0.7973 | | 0.4371 | 8.11 | 3000 | 0.4472 | 0.7958 | 0.7959 | | 0.4398 | 8.65 | 3200 | 0.4437 | 0.7951 | 0.7951 | | 0.4411 | 9.19 | 3400 | 0.4442 | 0.7939 | 0.7939 | | 0.4368 | 9.73 | 3600 | 0.4501 | 0.7915 | 0.7919 | | 0.4404 | 10.27 | 3800 | 0.4432 | 0.7947 | 0.7948 | | 0.4346 | 10.81 | 4000 | 0.4449 | 0.7969 | 0.7970 | | 0.436 | 11.35 | 4200 | 0.4438 | 0.7951 | 0.7953 | | 0.435 | 11.89 | 4400 | 0.4437 | 0.7951 | 0.7953 | | 0.4315 | 12.43 | 4600 | 0.4426 | 0.7954 | 0.7954 | | 0.4327 | 12.97 | 4800 | 0.4431 | 0.7972 | 0.7973 | | 0.4332 | 13.51 | 5000 | 0.4484 | 0.7893 | 0.7900 | | 0.4322 | 14.05 | 5200 | 0.4408 | 0.7967 | 0.7968 | | 0.4306 | 14.59 | 5400 | 0.4414 | 0.7986 | 0.7986 | | 0.4301 | 15.14 | 5600 | 0.4410 | 0.7986 | 0.7986 | | 0.4322 | 15.68 | 5800 | 0.4412 | 0.7955 | 0.7956 | | 0.4238 | 16.22 | 6000 | 0.4422 | 0.7962 | 0.7963 | | 0.4305 | 16.76 | 6200 | 0.4392 | 0.7962 | 0.7963 | | 0.4333 | 17.3 | 6400 | 0.4398 | 0.7960 | 0.7961 | | 0.4277 | 17.84 | 6600 | 0.4423 | 0.7937 | 0.7939 | | 0.4271 | 18.38 | 6800 | 0.4429 | 0.7940 | 0.7943 | | 0.4266 | 18.92 | 7000 | 0.4394 | 0.7950 | 0.7951 | | 0.4217 | 19.46 | 7200 | 0.4408 | 0.7952 | 0.7953 | | 0.4336 | 20.0 | 7400 | 0.4388 | 0.7985 | 0.7985 | | 0.4299 | 20.54 | 7600 | 0.4405 | 0.7940 | 0.7941 | | 0.4257 | 21.08 | 7800 | 0.4399 | 0.7946 | 0.7948 | | 0.4269 | 21.62 | 8000 | 0.4372 | 0.7964 | 0.7965 | | 0.4254 | 22.16 | 8200 | 0.4375 | 0.7976 | 0.7976 | | 0.4316 | 22.7 | 8400 | 0.4386 | 0.7939 | 0.7941 | | 0.4249 | 23.24 | 8600 | 0.4363 | 0.7980 | 0.7980 | | 0.4243 | 23.78 | 8800 | 0.4377 | 0.7971 | 0.7971 | | 
0.42 | 24.32 | 9000 | 0.4383 | 0.7985 | 0.7985 | | 0.426 | 24.86 | 9200 | 0.4372 | 0.7973 | 0.7973 | | 0.4327 | 25.41 | 9400 | 0.4373 | 0.7966 | 0.7966 | | 0.4198 | 25.95 | 9600 | 0.4382 | 0.7973 | 0.7973 | | 0.4274 | 26.49 | 9800 | 0.4382 | 0.7957 | 0.7958 | | 0.4227 | 27.03 | 10000 | 0.4381 | 0.7966 | 0.7966 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
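For quick experimentation, the pipeline API may be able to resolve this adapter directly when `peft` is installed; whether that works for this checkpoint, and whether the base model needs `trust_remote_code`, are assumptions to verify.

```python
# Sketch: one-line classification of a candidate promoter sequence.
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="mahdibaghbanzadeh/GUE_prom_prom_core_all-seqsight_4096_512_15M-L1_f",
    trust_remote_code=True,  # may be required by the seqsight base model
)
print(clf("ACGT" * 75))  # placeholder 300-bp input sequence
```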
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_prom_prom_core_all-seqsight_4096_512_15M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_prom_prom_core_all-seqsight_4096_512_15M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_4096_512_15M", "region:us" ]
null
2024-05-03T17:15:44+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_prom_prom_core_all-seqsight_4096_512_15M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_all](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_all) dataset. It achieves the following results on the evaluation set: - Loss: 0.4073 - F1 Score: 0.8143 - Accuracy: 0.8144 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.532 | 0.54 | 200 | 0.4612 | 0.7853 | 0.7853 | | 0.458 | 1.08 | 400 | 0.4718 | 0.7885 | 0.7894 | | 0.4427 | 1.62 | 600 | 0.4458 | 0.7929 | 0.7929 | | 0.4349 | 2.16 | 800 | 0.4436 | 0.7970 | 0.7971 | | 0.4323 | 2.7 | 1000 | 0.4393 | 0.7956 | 0.7958 | | 0.4271 | 3.24 | 1200 | 0.4347 | 0.7961 | 0.7965 | | 0.4219 | 3.78 | 1400 | 0.4377 | 0.7934 | 0.7939 | | 0.4204 | 4.32 | 1600 | 0.4353 | 0.8010 | 0.8010 | | 0.4198 | 4.86 | 1800 | 0.4322 | 0.7973 | 0.7976 | | 0.4127 | 5.41 | 2000 | 0.4312 | 0.8001 | 0.8003 | | 0.4133 | 5.95 | 2200 | 0.4320 | 0.8046 | 0.8046 | | 0.4152 | 6.49 | 2400 | 0.4262 | 0.8016 | 0.8017 | | 0.4079 | 7.03 | 2600 | 0.4236 | 0.8015 | 0.8015 | | 0.4079 | 7.57 | 2800 | 0.4268 | 0.8023 | 0.8024 | | 0.404 | 8.11 | 3000 | 0.4295 | 0.8001 | 0.8003 | | 0.404 | 8.65 | 3200 | 0.4209 | 0.8054 | 0.8056 | | 0.4043 | 9.19 | 3400 | 0.4243 | 0.8071 | 0.8071 | | 0.4024 | 9.73 | 3600 | 0.4302 | 0.8033 | 0.8037 | | 0.4022 | 10.27 | 3800 | 0.4269 | 0.8034 | 0.8037 | | 0.4006 | 10.81 | 4000 | 0.4304 | 0.8042 | 0.8042 | | 0.3963 | 11.35 | 4200 | 0.4246 | 0.8036 | 0.8039 | | 0.3959 | 11.89 | 4400 | 0.4254 | 0.8037 | 0.8041 | | 0.3943 | 12.43 | 4600 | 0.4254 | 0.8029 | 0.8029 | | 0.3912 | 12.97 | 4800 | 0.4262 | 0.8036 | 0.8037 | | 0.3924 | 13.51 | 5000 | 0.4351 | 0.7990 | 0.8 | | 0.3908 | 14.05 | 5200 | 0.4232 | 0.8079 | 0.8079 | | 0.3875 | 14.59 | 5400 | 0.4218 | 0.8084 | 0.8084 | | 0.3879 | 15.14 | 5600 | 0.4291 | 0.8074 | 0.8074 | | 0.3881 | 15.68 | 5800 | 0.4278 | 0.8037 | 0.8041 | | 0.3809 | 16.22 | 6000 | 0.4286 | 0.8042 | 0.8044 | | 0.3888 | 16.76 | 6200 | 0.4171 | 0.8088 | 0.8090 | | 0.3879 | 17.3 | 6400 | 0.4229 | 0.8070 | 0.8073 | | 0.3836 | 17.84 | 6600 | 0.4255 | 0.8047 | 0.8049 | | 0.3787 | 18.38 | 6800 | 0.4352 | 0.7976 | 0.7986 | | 0.3789 | 18.92 | 7000 | 0.4214 | 0.8086 | 0.8088 | | 0.376 | 19.46 | 7200 | 0.4231 | 0.8084 | 0.8084 | | 0.3864 | 20.0 | 7400 | 0.4186 | 0.8082 | 0.8083 | | 0.3789 | 20.54 | 7600 | 0.4243 | 0.8051 | 0.8054 | | 0.3781 | 21.08 | 7800 | 0.4221 | 0.8074 | 0.8076 | | 0.3781 | 21.62 | 8000 | 0.4171 | 0.8080 | 0.8081 | | 0.3727 | 22.16 | 8200 | 0.4221 | 0.8067 | 0.8069 | | 0.3811 | 22.7 | 8400 | 0.4233 | 0.8069 | 0.8073 | | 0.3725 | 23.24 | 8600 | 0.4180 | 0.8096 | 0.8096 | | 0.3732 | 23.78 | 8800 | 0.4205 | 0.8070 | 0.8071 | | 
0.3704 | 24.32 | 9000 | 0.4216 | 0.8077 | 0.8078 | | 0.3744 | 24.86 | 9200 | 0.4196 | 0.8060 | 0.8061 | | 0.3814 | 25.41 | 9400 | 0.4205 | 0.8075 | 0.8076 | | 0.367 | 25.95 | 9600 | 0.4235 | 0.8074 | 0.8076 | | 0.372 | 26.49 | 9800 | 0.4239 | 0.8061 | 0.8063 | | 0.372 | 27.03 | 10000 | 0.4234 | 0.8076 | 0.8078 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
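The Adam optimizer and linear scheduler named above correspond to the standard torch/transformers construction sketched below; the stand-in model and zero warmup steps are assumptions (the card lists no warmup).

```python
# Sketch: the stated optimizer and linear LR schedule over 10,000 steps.
import torch
from transformers import get_scheduler

model = torch.nn.Linear(16, 2)  # stand-in for the actual PEFT model
optimizer = torch.optim.Adam(
    model.parameters(), lr=5e-4, betas=(0.9, 0.999), eps=1e-8
)
scheduler = get_scheduler(
    "linear", optimizer=optimizer, num_warmup_steps=0, num_training_steps=10_000
)
for _ in range(3):      # per training step:
    optimizer.step()
    scheduler.step()    # linearly decay the learning rate
```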
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_prom_prom_core_all-seqsight_4096_512_15M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_prom_prom_core_all-seqsight_4096_512_15M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_4096_512_15M", "region:us" ]
null
2024-05-03T17:15:44+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
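Since the card's "How to Get Started" section is empty, the following is a generic, assumed sketch for a conversational Llama checkpoint (per the repository tags), not author-provided code; it assumes the tokenizer ships a chat template.

```python
# Sketch: load the checkpoint and generate one chat turn.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "golf2248/u4t42vm"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(
    messages, return_tensors="pt", add_generation_prompt=True
)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```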
{"library_name": "transformers", "tags": []}
golf2248/u4t42vm
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-03T17:15:50+00:00
null
null
{}
balenwo/pulla
null
[ "region:us" ]
null
2024-05-03T17:15:59+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama3_on_scigen_fixedprompt_server This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 64 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 30 ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.40.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
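Note that the total train batch size of 256 above follows from the per-device batch size (4) times the gradient accumulation steps (64). A minimal sketch of applying this adapter to its base model follows; access to meta-llama/Meta-Llama-3-8B-Instruct is gated and must be granted first.

```python
# Sketch: attach the SFT LoRA adapter to the gated Llama-3 base model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(
    base, "moetezsa/Llama3_on_scigen_fixedprompt_server"
)
model.eval()
```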
{"license": "other", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "meta-llama/Meta-Llama-3-8B-Instruct", "model-index": [{"name": "Llama3_on_scigen_fixedprompt_server", "results": []}]}
moetezsa/Llama3_on_scigen_fixedprompt_server
null
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "license:other", "region:us" ]
null
2024-05-03T17:16:06+00:00
feature-extraction
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
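Given the repository tags (XLM-RoBERTa, feature extraction), a hedged sketch of extracting sentence embeddings is shown below; masked mean pooling is an assumption, as the card does not document a pooling strategy.

```python
# Sketch: encode a sentence and mean-pool the token embeddings.
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "stvhuang/rcr-run-5pqr6lwp-90396-master-0_20240402T105012-ep46"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id).eval()

batch = tokenizer(["a sample sentence"], return_tensors="pt", padding=True)
with torch.no_grad():
    hidden = model(**batch).last_hidden_state            # (1, seq_len, dim)
mask = batch["attention_mask"].unsqueeze(-1).float()     # (1, seq_len, 1)
embedding = (hidden * mask).sum(1) / mask.sum(1)         # masked mean pooling
```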
{"library_name": "transformers", "tags": []}
stvhuang/rcr-run-5pqr6lwp-90396-master-0_20240402T105012-ep46
null
[ "transformers", "safetensors", "xlm-roberta", "feature-extraction", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-03T17:16:13+00:00
null
null
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) GritLM-7B - GGUF - Model creator: https://huggingface.co/GritLM/ - Original model: https://huggingface.co/GritLM/GritLM-7B/ | Name | Quant method | Size | | ---- | ---- | ---- | | [GritLM-7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/GritLM_-_GritLM-7B-gguf/blob/main/GritLM-7B.Q2_K.gguf) | Q2_K | 2.53GB | | [GritLM-7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/GritLM_-_GritLM-7B-gguf/blob/main/GritLM-7B.IQ3_XS.gguf) | IQ3_XS | 2.81GB | | [GritLM-7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/GritLM_-_GritLM-7B-gguf/blob/main/GritLM-7B.IQ3_S.gguf) | IQ3_S | 2.96GB | | [GritLM-7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/GritLM_-_GritLM-7B-gguf/blob/main/GritLM-7B.Q3_K_S.gguf) | Q3_K_S | 2.95GB | | [GritLM-7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/GritLM_-_GritLM-7B-gguf/blob/main/GritLM-7B.IQ3_M.gguf) | IQ3_M | 3.06GB | | [GritLM-7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/GritLM_-_GritLM-7B-gguf/blob/main/GritLM-7B.Q3_K.gguf) | Q3_K | 3.28GB | | [GritLM-7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/GritLM_-_GritLM-7B-gguf/blob/main/GritLM-7B.Q3_K_M.gguf) | Q3_K_M | 3.28GB | | [GritLM-7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/GritLM_-_GritLM-7B-gguf/blob/main/GritLM-7B.Q3_K_L.gguf) | Q3_K_L | 3.56GB | | [GritLM-7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/GritLM_-_GritLM-7B-gguf/blob/main/GritLM-7B.IQ4_XS.gguf) | IQ4_XS | 3.67GB | | [GritLM-7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/GritLM_-_GritLM-7B-gguf/blob/main/GritLM-7B.Q4_0.gguf) | Q4_0 | 3.83GB | | [GritLM-7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/GritLM_-_GritLM-7B-gguf/blob/main/GritLM-7B.IQ4_NL.gguf) | IQ4_NL | 3.87GB | | [GritLM-7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/GritLM_-_GritLM-7B-gguf/blob/main/GritLM-7B.Q4_K_S.gguf) | Q4_K_S | 3.86GB | | [GritLM-7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/GritLM_-_GritLM-7B-gguf/blob/main/GritLM-7B.Q4_K.gguf) | Q4_K | 4.07GB | | [GritLM-7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/GritLM_-_GritLM-7B-gguf/blob/main/GritLM-7B.Q4_K_M.gguf) | Q4_K_M | 4.07GB | | [GritLM-7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/GritLM_-_GritLM-7B-gguf/blob/main/GritLM-7B.Q4_1.gguf) | Q4_1 | 4.24GB | | [GritLM-7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/GritLM_-_GritLM-7B-gguf/blob/main/GritLM-7B.Q5_0.gguf) | Q5_0 | 4.65GB | | [GritLM-7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/GritLM_-_GritLM-7B-gguf/blob/main/GritLM-7B.Q5_K_S.gguf) | Q5_K_S | 4.65GB | | [GritLM-7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/GritLM_-_GritLM-7B-gguf/blob/main/GritLM-7B.Q5_K.gguf) | Q5_K | 4.78GB | | [GritLM-7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/GritLM_-_GritLM-7B-gguf/blob/main/GritLM-7B.Q5_K_M.gguf) | Q5_K_M | 4.78GB | | [GritLM-7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/GritLM_-_GritLM-7B-gguf/blob/main/GritLM-7B.Q5_1.gguf) | Q5_1 | 5.07GB | | [GritLM-7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/GritLM_-_GritLM-7B-gguf/blob/main/GritLM-7B.Q6_K.gguf) | Q6_K | 5.53GB | Original model description: --- pipeline_tag: text-generation inference: true license: apache-2.0 datasets: - GritLM/tulu2 tags: - mteb model-index: - name: GritLM-7B results: - task: type: Classification dataset: type: mteb/amazon_counterfactual name: MTEB AmazonCounterfactualClassification (en) config: 
en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 81.17910447761194 - type: ap value: 46.26260671758199 - type: f1 value: 75.44565719934167 - task: type: Classification dataset: type: mteb/amazon_polarity name: MTEB AmazonPolarityClassification config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 96.5161 - type: ap value: 94.79131981460425 - type: f1 value: 96.51506148413065 - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (en) config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 57.806000000000004 - type: f1 value: 56.78350156257903 - task: type: Retrieval dataset: type: arguana name: MTEB ArguAna config: default split: test revision: None metrics: - type: map_at_1 value: 38.478 - type: map_at_10 value: 54.955 - type: map_at_100 value: 54.955 - type: map_at_1000 value: 54.955 - type: map_at_3 value: 50.888999999999996 - type: map_at_5 value: 53.349999999999994 - type: mrr_at_1 value: 39.757999999999996 - type: mrr_at_10 value: 55.449000000000005 - type: mrr_at_100 value: 55.449000000000005 - type: mrr_at_1000 value: 55.449000000000005 - type: mrr_at_3 value: 51.37500000000001 - type: mrr_at_5 value: 53.822 - type: ndcg_at_1 value: 38.478 - type: ndcg_at_10 value: 63.239999999999995 - type: ndcg_at_100 value: 63.239999999999995 - type: ndcg_at_1000 value: 63.239999999999995 - type: ndcg_at_3 value: 54.935 - type: ndcg_at_5 value: 59.379000000000005 - type: precision_at_1 value: 38.478 - type: precision_at_10 value: 8.933 - type: precision_at_100 value: 0.893 - type: precision_at_1000 value: 0.089 - type: precision_at_3 value: 22.214 - type: precision_at_5 value: 15.491 - type: recall_at_1 value: 38.478 - type: recall_at_10 value: 89.331 - type: recall_at_100 value: 89.331 - type: recall_at_1000 value: 89.331 - type: recall_at_3 value: 66.643 - type: recall_at_5 value: 77.45400000000001 - task: type: Clustering dataset: type: mteb/arxiv-clustering-p2p name: MTEB ArxivClusteringP2P config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 51.67144081472449 - task: type: Clustering dataset: type: mteb/arxiv-clustering-s2s name: MTEB ArxivClusteringS2S config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 48.11256154264126 - task: type: Reranking dataset: type: mteb/askubuntudupquestions-reranking name: MTEB AskUbuntuDupQuestions config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 67.33801955487878 - type: mrr value: 80.71549487754474 - task: type: STS dataset: type: mteb/biosses-sts name: MTEB BIOSSES config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 88.1935203751726 - type: cos_sim_spearman value: 86.35497970498659 - type: euclidean_pearson value: 85.46910708503744 - type: euclidean_spearman value: 85.13928935405485 - type: manhattan_pearson value: 85.68373836333303 - type: manhattan_spearman value: 85.40013867117746 - task: type: Classification dataset: type: mteb/banking77 name: MTEB Banking77Classification config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 88.46753246753248 - type: f1 value: 88.43006344981134 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-p2p 
name: MTEB BiorxivClusteringP2P config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 40.86793640310432 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-s2s name: MTEB BiorxivClusteringS2S config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 39.80291334130727 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackAndroidRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 38.421 - type: map_at_10 value: 52.349000000000004 - type: map_at_100 value: 52.349000000000004 - type: map_at_1000 value: 52.349000000000004 - type: map_at_3 value: 48.17 - type: map_at_5 value: 50.432 - type: mrr_at_1 value: 47.353 - type: mrr_at_10 value: 58.387 - type: mrr_at_100 value: 58.387 - type: mrr_at_1000 value: 58.387 - type: mrr_at_3 value: 56.199 - type: mrr_at_5 value: 57.487 - type: ndcg_at_1 value: 47.353 - type: ndcg_at_10 value: 59.202 - type: ndcg_at_100 value: 58.848 - type: ndcg_at_1000 value: 58.831999999999994 - type: ndcg_at_3 value: 54.112 - type: ndcg_at_5 value: 56.312 - type: precision_at_1 value: 47.353 - type: precision_at_10 value: 11.459 - type: precision_at_100 value: 1.146 - type: precision_at_1000 value: 0.11499999999999999 - type: precision_at_3 value: 26.133 - type: precision_at_5 value: 18.627 - type: recall_at_1 value: 38.421 - type: recall_at_10 value: 71.89 - type: recall_at_100 value: 71.89 - type: recall_at_1000 value: 71.89 - type: recall_at_3 value: 56.58 - type: recall_at_5 value: 63.125 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackEnglishRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 38.025999999999996 - type: map_at_10 value: 50.590999999999994 - type: map_at_100 value: 51.99700000000001 - type: map_at_1000 value: 52.11599999999999 - type: map_at_3 value: 47.435 - type: map_at_5 value: 49.236000000000004 - type: mrr_at_1 value: 48.28 - type: mrr_at_10 value: 56.814 - type: mrr_at_100 value: 57.446 - type: mrr_at_1000 value: 57.476000000000006 - type: mrr_at_3 value: 54.958 - type: mrr_at_5 value: 56.084999999999994 - type: ndcg_at_1 value: 48.28 - type: ndcg_at_10 value: 56.442 - type: ndcg_at_100 value: 60.651999999999994 - type: ndcg_at_1000 value: 62.187000000000005 - type: ndcg_at_3 value: 52.866 - type: ndcg_at_5 value: 54.515 - type: precision_at_1 value: 48.28 - type: precision_at_10 value: 10.586 - type: precision_at_100 value: 1.6310000000000002 - type: precision_at_1000 value: 0.20600000000000002 - type: precision_at_3 value: 25.945 - type: precision_at_5 value: 18.076 - type: recall_at_1 value: 38.025999999999996 - type: recall_at_10 value: 66.11399999999999 - type: recall_at_100 value: 83.339 - type: recall_at_1000 value: 92.413 - type: recall_at_3 value: 54.493 - type: recall_at_5 value: 59.64699999999999 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackGamingRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 47.905 - type: map_at_10 value: 61.58 - type: map_at_100 value: 62.605 - type: map_at_1000 value: 62.637 - type: map_at_3 value: 58.074000000000005 - type: map_at_5 value: 60.260000000000005 - type: mrr_at_1 value: 54.42 - type: mrr_at_10 value: 64.847 - type: mrr_at_100 value: 65.403 - type: mrr_at_1000 value: 65.41900000000001 - type: mrr_at_3 value: 62.675000000000004 - type: mrr_at_5 value: 64.101 - type: ndcg_at_1 value: 54.42 - 
type: ndcg_at_10 value: 67.394 - type: ndcg_at_100 value: 70.846 - type: ndcg_at_1000 value: 71.403 - type: ndcg_at_3 value: 62.025 - type: ndcg_at_5 value: 65.032 - type: precision_at_1 value: 54.42 - type: precision_at_10 value: 10.646 - type: precision_at_100 value: 1.325 - type: precision_at_1000 value: 0.13999999999999999 - type: precision_at_3 value: 27.398 - type: precision_at_5 value: 18.796 - type: recall_at_1 value: 47.905 - type: recall_at_10 value: 80.84599999999999 - type: recall_at_100 value: 95.078 - type: recall_at_1000 value: 98.878 - type: recall_at_3 value: 67.05600000000001 - type: recall_at_5 value: 74.261 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackGisRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 30.745 - type: map_at_10 value: 41.021 - type: map_at_100 value: 41.021 - type: map_at_1000 value: 41.021 - type: map_at_3 value: 37.714999999999996 - type: map_at_5 value: 39.766 - type: mrr_at_1 value: 33.559 - type: mrr_at_10 value: 43.537 - type: mrr_at_100 value: 43.537 - type: mrr_at_1000 value: 43.537 - type: mrr_at_3 value: 40.546 - type: mrr_at_5 value: 42.439 - type: ndcg_at_1 value: 33.559 - type: ndcg_at_10 value: 46.781 - type: ndcg_at_100 value: 46.781 - type: ndcg_at_1000 value: 46.781 - type: ndcg_at_3 value: 40.516000000000005 - type: ndcg_at_5 value: 43.957 - type: precision_at_1 value: 33.559 - type: precision_at_10 value: 7.198 - type: precision_at_100 value: 0.72 - type: precision_at_1000 value: 0.07200000000000001 - type: precision_at_3 value: 17.1 - type: precision_at_5 value: 12.316 - type: recall_at_1 value: 30.745 - type: recall_at_10 value: 62.038000000000004 - type: recall_at_100 value: 62.038000000000004 - type: recall_at_1000 value: 62.038000000000004 - type: recall_at_3 value: 45.378 - type: recall_at_5 value: 53.580000000000005 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackMathematicaRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 19.637999999999998 - type: map_at_10 value: 31.05 - type: map_at_100 value: 31.05 - type: map_at_1000 value: 31.05 - type: map_at_3 value: 27.628000000000004 - type: map_at_5 value: 29.767 - type: mrr_at_1 value: 25.0 - type: mrr_at_10 value: 36.131 - type: mrr_at_100 value: 36.131 - type: mrr_at_1000 value: 36.131 - type: mrr_at_3 value: 33.333 - type: mrr_at_5 value: 35.143 - type: ndcg_at_1 value: 25.0 - type: ndcg_at_10 value: 37.478 - type: ndcg_at_100 value: 37.469 - type: ndcg_at_1000 value: 37.469 - type: ndcg_at_3 value: 31.757999999999996 - type: ndcg_at_5 value: 34.821999999999996 - type: precision_at_1 value: 25.0 - type: precision_at_10 value: 7.188999999999999 - type: precision_at_100 value: 0.719 - type: precision_at_1000 value: 0.07200000000000001 - type: precision_at_3 value: 15.837000000000002 - type: precision_at_5 value: 11.841 - type: recall_at_1 value: 19.637999999999998 - type: recall_at_10 value: 51.836000000000006 - type: recall_at_100 value: 51.836000000000006 - type: recall_at_1000 value: 51.836000000000006 - type: recall_at_3 value: 36.384 - type: recall_at_5 value: 43.964 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackPhysicsRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 34.884 - type: map_at_10 value: 47.88 - type: map_at_100 value: 47.88 - type: map_at_1000 value: 47.88 - type: map_at_3 value: 43.85 - type: map_at_5 value: 46.414 - type: mrr_at_1 value: 43.022 - type: mrr_at_10 
value: 53.569 - type: mrr_at_100 value: 53.569 - type: mrr_at_1000 value: 53.569 - type: mrr_at_3 value: 51.075 - type: mrr_at_5 value: 52.725 - type: ndcg_at_1 value: 43.022 - type: ndcg_at_10 value: 54.461000000000006 - type: ndcg_at_100 value: 54.388000000000005 - type: ndcg_at_1000 value: 54.388000000000005 - type: ndcg_at_3 value: 48.864999999999995 - type: ndcg_at_5 value: 52.032000000000004 - type: precision_at_1 value: 43.022 - type: precision_at_10 value: 9.885 - type: precision_at_100 value: 0.988 - type: precision_at_1000 value: 0.099 - type: precision_at_3 value: 23.612 - type: precision_at_5 value: 16.997 - type: recall_at_1 value: 34.884 - type: recall_at_10 value: 68.12899999999999 - type: recall_at_100 value: 68.12899999999999 - type: recall_at_1000 value: 68.12899999999999 - type: recall_at_3 value: 52.428 - type: recall_at_5 value: 60.662000000000006 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackProgrammersRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 31.588 - type: map_at_10 value: 43.85 - type: map_at_100 value: 45.317 - type: map_at_1000 value: 45.408 - type: map_at_3 value: 39.73 - type: map_at_5 value: 42.122 - type: mrr_at_1 value: 38.927 - type: mrr_at_10 value: 49.582 - type: mrr_at_100 value: 50.39 - type: mrr_at_1000 value: 50.426 - type: mrr_at_3 value: 46.518 - type: mrr_at_5 value: 48.271 - type: ndcg_at_1 value: 38.927 - type: ndcg_at_10 value: 50.605999999999995 - type: ndcg_at_100 value: 56.22200000000001 - type: ndcg_at_1000 value: 57.724 - type: ndcg_at_3 value: 44.232 - type: ndcg_at_5 value: 47.233999999999995 - type: precision_at_1 value: 38.927 - type: precision_at_10 value: 9.429 - type: precision_at_100 value: 1.435 - type: precision_at_1000 value: 0.172 - type: precision_at_3 value: 21.271 - type: precision_at_5 value: 15.434000000000001 - type: recall_at_1 value: 31.588 - type: recall_at_10 value: 64.836 - type: recall_at_100 value: 88.066 - type: recall_at_1000 value: 97.748 - type: recall_at_3 value: 47.128 - type: recall_at_5 value: 54.954 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 31.956083333333336 - type: map_at_10 value: 43.33483333333333 - type: map_at_100 value: 44.64883333333333 - type: map_at_1000 value: 44.75 - type: map_at_3 value: 39.87741666666666 - type: map_at_5 value: 41.86766666666667 - type: mrr_at_1 value: 38.06341666666667 - type: mrr_at_10 value: 47.839666666666666 - type: mrr_at_100 value: 48.644000000000005 - type: mrr_at_1000 value: 48.68566666666667 - type: mrr_at_3 value: 45.26358333333334 - type: mrr_at_5 value: 46.790000000000006 - type: ndcg_at_1 value: 38.06341666666667 - type: ndcg_at_10 value: 49.419333333333334 - type: ndcg_at_100 value: 54.50166666666667 - type: ndcg_at_1000 value: 56.161166666666674 - type: ndcg_at_3 value: 43.982416666666666 - type: ndcg_at_5 value: 46.638083333333334 - type: precision_at_1 value: 38.06341666666667 - type: precision_at_10 value: 8.70858333333333 - type: precision_at_100 value: 1.327 - type: precision_at_1000 value: 0.165 - type: precision_at_3 value: 20.37816666666667 - type: precision_at_5 value: 14.516333333333334 - type: recall_at_1 value: 31.956083333333336 - type: recall_at_10 value: 62.69458333333334 - type: recall_at_100 value: 84.46433333333334 - type: recall_at_1000 value: 95.58449999999999 - type: recall_at_3 value: 47.52016666666666 - type: recall_at_5 value: 
54.36066666666666 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackStatsRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 28.912 - type: map_at_10 value: 38.291 - type: map_at_100 value: 39.44 - type: map_at_1000 value: 39.528 - type: map_at_3 value: 35.638 - type: map_at_5 value: 37.218 - type: mrr_at_1 value: 32.822 - type: mrr_at_10 value: 41.661 - type: mrr_at_100 value: 42.546 - type: mrr_at_1000 value: 42.603 - type: mrr_at_3 value: 39.238 - type: mrr_at_5 value: 40.726 - type: ndcg_at_1 value: 32.822 - type: ndcg_at_10 value: 43.373 - type: ndcg_at_100 value: 48.638 - type: ndcg_at_1000 value: 50.654999999999994 - type: ndcg_at_3 value: 38.643 - type: ndcg_at_5 value: 41.126000000000005 - type: precision_at_1 value: 32.822 - type: precision_at_10 value: 6.8709999999999996 - type: precision_at_100 value: 1.032 - type: precision_at_1000 value: 0.128 - type: precision_at_3 value: 16.82 - type: precision_at_5 value: 11.718 - type: recall_at_1 value: 28.912 - type: recall_at_10 value: 55.376999999999995 - type: recall_at_100 value: 79.066 - type: recall_at_1000 value: 93.664 - type: recall_at_3 value: 42.569 - type: recall_at_5 value: 48.719 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackTexRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 22.181 - type: map_at_10 value: 31.462 - type: map_at_100 value: 32.73 - type: map_at_1000 value: 32.848 - type: map_at_3 value: 28.57 - type: map_at_5 value: 30.182 - type: mrr_at_1 value: 27.185 - type: mrr_at_10 value: 35.846000000000004 - type: mrr_at_100 value: 36.811 - type: mrr_at_1000 value: 36.873 - type: mrr_at_3 value: 33.437 - type: mrr_at_5 value: 34.813 - type: ndcg_at_1 value: 27.185 - type: ndcg_at_10 value: 36.858000000000004 - type: ndcg_at_100 value: 42.501 - type: ndcg_at_1000 value: 44.945 - type: ndcg_at_3 value: 32.066 - type: ndcg_at_5 value: 34.29 - type: precision_at_1 value: 27.185 - type: precision_at_10 value: 6.752 - type: precision_at_100 value: 1.111 - type: precision_at_1000 value: 0.151 - type: precision_at_3 value: 15.290000000000001 - type: precision_at_5 value: 11.004999999999999 - type: recall_at_1 value: 22.181 - type: recall_at_10 value: 48.513 - type: recall_at_100 value: 73.418 - type: recall_at_1000 value: 90.306 - type: recall_at_3 value: 35.003 - type: recall_at_5 value: 40.876000000000005 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackUnixRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 33.934999999999995 - type: map_at_10 value: 44.727 - type: map_at_100 value: 44.727 - type: map_at_1000 value: 44.727 - type: map_at_3 value: 40.918 - type: map_at_5 value: 42.961 - type: mrr_at_1 value: 39.646 - type: mrr_at_10 value: 48.898 - type: mrr_at_100 value: 48.898 - type: mrr_at_1000 value: 48.898 - type: mrr_at_3 value: 45.896 - type: mrr_at_5 value: 47.514 - type: ndcg_at_1 value: 39.646 - type: ndcg_at_10 value: 50.817 - type: ndcg_at_100 value: 50.803 - type: ndcg_at_1000 value: 50.803 - type: ndcg_at_3 value: 44.507999999999996 - type: ndcg_at_5 value: 47.259 - type: precision_at_1 value: 39.646 - type: precision_at_10 value: 8.759 - type: precision_at_100 value: 0.876 - type: precision_at_1000 value: 0.08800000000000001 - type: precision_at_3 value: 20.274 - type: precision_at_5 value: 14.366000000000001 - type: recall_at_1 value: 33.934999999999995 - type: recall_at_10 value: 65.037 - type: recall_at_100 value: 
65.037 - type: recall_at_1000 value: 65.037 - type: recall_at_3 value: 47.439 - type: recall_at_5 value: 54.567 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackWebmastersRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 32.058 - type: map_at_10 value: 43.137 - type: map_at_100 value: 43.137 - type: map_at_1000 value: 43.137 - type: map_at_3 value: 39.882 - type: map_at_5 value: 41.379 - type: mrr_at_1 value: 38.933 - type: mrr_at_10 value: 48.344 - type: mrr_at_100 value: 48.344 - type: mrr_at_1000 value: 48.344 - type: mrr_at_3 value: 45.652 - type: mrr_at_5 value: 46.877 - type: ndcg_at_1 value: 38.933 - type: ndcg_at_10 value: 49.964 - type: ndcg_at_100 value: 49.242000000000004 - type: ndcg_at_1000 value: 49.222 - type: ndcg_at_3 value: 44.605 - type: ndcg_at_5 value: 46.501999999999995 - type: precision_at_1 value: 38.933 - type: precision_at_10 value: 9.427000000000001 - type: precision_at_100 value: 0.943 - type: precision_at_1000 value: 0.094 - type: precision_at_3 value: 20.685000000000002 - type: precision_at_5 value: 14.585 - type: recall_at_1 value: 32.058 - type: recall_at_10 value: 63.074 - type: recall_at_100 value: 63.074 - type: recall_at_1000 value: 63.074 - type: recall_at_3 value: 47.509 - type: recall_at_5 value: 52.455 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackWordpressRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 26.029000000000003 - type: map_at_10 value: 34.646 - type: map_at_100 value: 34.646 - type: map_at_1000 value: 34.646 - type: map_at_3 value: 31.456 - type: map_at_5 value: 33.138 - type: mrr_at_1 value: 28.281 - type: mrr_at_10 value: 36.905 - type: mrr_at_100 value: 36.905 - type: mrr_at_1000 value: 36.905 - type: mrr_at_3 value: 34.011 - type: mrr_at_5 value: 35.638 - type: ndcg_at_1 value: 28.281 - type: ndcg_at_10 value: 40.159 - type: ndcg_at_100 value: 40.159 - type: ndcg_at_1000 value: 40.159 - type: ndcg_at_3 value: 33.995 - type: ndcg_at_5 value: 36.836999999999996 - type: precision_at_1 value: 28.281 - type: precision_at_10 value: 6.358999999999999 - type: precision_at_100 value: 0.636 - type: precision_at_1000 value: 0.064 - type: precision_at_3 value: 14.233 - type: precision_at_5 value: 10.314 - type: recall_at_1 value: 26.029000000000003 - type: recall_at_10 value: 55.08 - type: recall_at_100 value: 55.08 - type: recall_at_1000 value: 55.08 - type: recall_at_3 value: 38.487 - type: recall_at_5 value: 45.308 - task: type: Retrieval dataset: type: climate-fever name: MTEB ClimateFEVER config: default split: test revision: None metrics: - type: map_at_1 value: 12.842999999999998 - type: map_at_10 value: 22.101000000000003 - type: map_at_100 value: 24.319 - type: map_at_1000 value: 24.51 - type: map_at_3 value: 18.372 - type: map_at_5 value: 20.323 - type: mrr_at_1 value: 27.948 - type: mrr_at_10 value: 40.321 - type: mrr_at_100 value: 41.262 - type: mrr_at_1000 value: 41.297 - type: mrr_at_3 value: 36.558 - type: mrr_at_5 value: 38.824999999999996 - type: ndcg_at_1 value: 27.948 - type: ndcg_at_10 value: 30.906 - type: ndcg_at_100 value: 38.986 - type: ndcg_at_1000 value: 42.136 - type: ndcg_at_3 value: 24.911 - type: ndcg_at_5 value: 27.168999999999997 - type: precision_at_1 value: 27.948 - type: precision_at_10 value: 9.798 - type: precision_at_100 value: 1.8399999999999999 - type: precision_at_1000 value: 0.243 - type: precision_at_3 value: 18.328 - type: precision_at_5 value: 14.502 - type: recall_at_1 
value: 12.842999999999998 - type: recall_at_10 value: 37.245 - type: recall_at_100 value: 64.769 - type: recall_at_1000 value: 82.055 - type: recall_at_3 value: 23.159 - type: recall_at_5 value: 29.113 - task: type: Retrieval dataset: type: dbpedia-entity name: MTEB DBPedia config: default split: test revision: None metrics: - type: map_at_1 value: 8.934000000000001 - type: map_at_10 value: 21.915000000000003 - type: map_at_100 value: 21.915000000000003 - type: map_at_1000 value: 21.915000000000003 - type: map_at_3 value: 14.623 - type: map_at_5 value: 17.841 - type: mrr_at_1 value: 71.25 - type: mrr_at_10 value: 78.994 - type: mrr_at_100 value: 78.994 - type: mrr_at_1000 value: 78.994 - type: mrr_at_3 value: 77.208 - type: mrr_at_5 value: 78.55799999999999 - type: ndcg_at_1 value: 60.62499999999999 - type: ndcg_at_10 value: 46.604 - type: ndcg_at_100 value: 35.653 - type: ndcg_at_1000 value: 35.531 - type: ndcg_at_3 value: 50.605 - type: ndcg_at_5 value: 48.730000000000004 - type: precision_at_1 value: 71.25 - type: precision_at_10 value: 37.75 - type: precision_at_100 value: 3.775 - type: precision_at_1000 value: 0.377 - type: precision_at_3 value: 54.417 - type: precision_at_5 value: 48.15 - type: recall_at_1 value: 8.934000000000001 - type: recall_at_10 value: 28.471000000000004 - type: recall_at_100 value: 28.471000000000004 - type: recall_at_1000 value: 28.471000000000004 - type: recall_at_3 value: 16.019 - type: recall_at_5 value: 21.410999999999998 - task: type: Classification dataset: type: mteb/emotion name: MTEB EmotionClassification config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 52.81 - type: f1 value: 47.987573380720114 - task: type: Retrieval dataset: type: fever name: MTEB FEVER config: default split: test revision: None metrics: - type: map_at_1 value: 66.81899999999999 - type: map_at_10 value: 78.034 - type: map_at_100 value: 78.034 - type: map_at_1000 value: 78.034 - type: map_at_3 value: 76.43100000000001 - type: map_at_5 value: 77.515 - type: mrr_at_1 value: 71.542 - type: mrr_at_10 value: 81.638 - type: mrr_at_100 value: 81.638 - type: mrr_at_1000 value: 81.638 - type: mrr_at_3 value: 80.403 - type: mrr_at_5 value: 81.256 - type: ndcg_at_1 value: 71.542 - type: ndcg_at_10 value: 82.742 - type: ndcg_at_100 value: 82.741 - type: ndcg_at_1000 value: 82.741 - type: ndcg_at_3 value: 80.039 - type: ndcg_at_5 value: 81.695 - type: precision_at_1 value: 71.542 - type: precision_at_10 value: 10.387 - type: precision_at_100 value: 1.039 - type: precision_at_1000 value: 0.104 - type: precision_at_3 value: 31.447999999999997 - type: precision_at_5 value: 19.91 - type: recall_at_1 value: 66.81899999999999 - type: recall_at_10 value: 93.372 - type: recall_at_100 value: 93.372 - type: recall_at_1000 value: 93.372 - type: recall_at_3 value: 86.33 - type: recall_at_5 value: 90.347 - task: type: Retrieval dataset: type: fiqa name: MTEB FiQA2018 config: default split: test revision: None metrics: - type: map_at_1 value: 31.158 - type: map_at_10 value: 52.017 - type: map_at_100 value: 54.259 - type: map_at_1000 value: 54.367 - type: map_at_3 value: 45.738 - type: map_at_5 value: 49.283 - type: mrr_at_1 value: 57.87 - type: mrr_at_10 value: 66.215 - type: mrr_at_100 value: 66.735 - type: mrr_at_1000 value: 66.75 - type: mrr_at_3 value: 64.043 - type: mrr_at_5 value: 65.116 - type: ndcg_at_1 value: 57.87 - type: ndcg_at_10 value: 59.946999999999996 - type: ndcg_at_100 value: 66.31099999999999 - type: ndcg_at_1000 value: 
67.75999999999999 - type: ndcg_at_3 value: 55.483000000000004 - type: ndcg_at_5 value: 56.891000000000005 - type: precision_at_1 value: 57.87 - type: precision_at_10 value: 16.497 - type: precision_at_100 value: 2.321 - type: precision_at_1000 value: 0.258 - type: precision_at_3 value: 37.14 - type: precision_at_5 value: 27.067999999999998 - type: recall_at_1 value: 31.158 - type: recall_at_10 value: 67.381 - type: recall_at_100 value: 89.464 - type: recall_at_1000 value: 97.989 - type: recall_at_3 value: 50.553000000000004 - type: recall_at_5 value: 57.824 - task: type: Retrieval dataset: type: hotpotqa name: MTEB HotpotQA config: default split: test revision: None metrics: - type: map_at_1 value: 42.073 - type: map_at_10 value: 72.418 - type: map_at_100 value: 73.175 - type: map_at_1000 value: 73.215 - type: map_at_3 value: 68.791 - type: map_at_5 value: 71.19 - type: mrr_at_1 value: 84.146 - type: mrr_at_10 value: 88.994 - type: mrr_at_100 value: 89.116 - type: mrr_at_1000 value: 89.12 - type: mrr_at_3 value: 88.373 - type: mrr_at_5 value: 88.82 - type: ndcg_at_1 value: 84.146 - type: ndcg_at_10 value: 79.404 - type: ndcg_at_100 value: 81.83200000000001 - type: ndcg_at_1000 value: 82.524 - type: ndcg_at_3 value: 74.595 - type: ndcg_at_5 value: 77.474 - type: precision_at_1 value: 84.146 - type: precision_at_10 value: 16.753999999999998 - type: precision_at_100 value: 1.8599999999999999 - type: precision_at_1000 value: 0.19499999999999998 - type: precision_at_3 value: 48.854 - type: precision_at_5 value: 31.579 - type: recall_at_1 value: 42.073 - type: recall_at_10 value: 83.768 - type: recall_at_100 value: 93.018 - type: recall_at_1000 value: 97.481 - type: recall_at_3 value: 73.282 - type: recall_at_5 value: 78.947 - task: type: Classification dataset: type: mteb/imdb name: MTEB ImdbClassification config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 94.9968 - type: ap value: 92.93892195862824 - type: f1 value: 94.99327998213761 - task: type: Retrieval dataset: type: msmarco name: MTEB MSMARCO config: default split: dev revision: None metrics: - type: map_at_1 value: 21.698 - type: map_at_10 value: 34.585 - type: map_at_100 value: 35.782000000000004 - type: map_at_1000 value: 35.825 - type: map_at_3 value: 30.397999999999996 - type: map_at_5 value: 32.72 - type: mrr_at_1 value: 22.192 - type: mrr_at_10 value: 35.085 - type: mrr_at_100 value: 36.218 - type: mrr_at_1000 value: 36.256 - type: mrr_at_3 value: 30.986000000000004 - type: mrr_at_5 value: 33.268 - type: ndcg_at_1 value: 22.192 - type: ndcg_at_10 value: 41.957 - type: ndcg_at_100 value: 47.658 - type: ndcg_at_1000 value: 48.697 - type: ndcg_at_3 value: 33.433 - type: ndcg_at_5 value: 37.551 - type: precision_at_1 value: 22.192 - type: precision_at_10 value: 6.781 - type: precision_at_100 value: 0.963 - type: precision_at_1000 value: 0.105 - type: precision_at_3 value: 14.365 - type: precision_at_5 value: 10.713000000000001 - type: recall_at_1 value: 21.698 - type: recall_at_10 value: 64.79 - type: recall_at_100 value: 91.071 - type: recall_at_1000 value: 98.883 - type: recall_at_3 value: 41.611 - type: recall_at_5 value: 51.459999999999994 - task: type: Classification dataset: type: mteb/mtop_domain name: MTEB MTOPDomainClassification (en) config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 96.15823073415413 - type: f1 value: 96.00362034963248 - task: type: Classification dataset: type: mteb/mtop_intent name: MTEB 
MTOPIntentClassification (en) config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 87.12722298221614 - type: f1 value: 70.46888967516227 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (en) config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 80.77673167451245 - type: f1 value: 77.60202561132175 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (en) config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 82.09145931405514 - type: f1 value: 81.7701921473406 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-p2p name: MTEB MedrxivClusteringP2P config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 36.52153488185864 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-s2s name: MTEB MedrxivClusteringS2S config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 36.80090398444147 - task: type: Reranking dataset: type: mteb/mind_small name: MTEB MindSmallReranking config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 31.807141746058605 - type: mrr value: 32.85025611455029 - task: type: Retrieval dataset: type: nfcorpus name: MTEB NFCorpus config: default split: test revision: None metrics: - type: map_at_1 value: 6.920999999999999 - type: map_at_10 value: 16.049 - type: map_at_100 value: 16.049 - type: map_at_1000 value: 16.049 - type: map_at_3 value: 11.865 - type: map_at_5 value: 13.657 - type: mrr_at_1 value: 53.87 - type: mrr_at_10 value: 62.291 - type: mrr_at_100 value: 62.291 - type: mrr_at_1000 value: 62.291 - type: mrr_at_3 value: 60.681 - type: mrr_at_5 value: 61.61 - type: ndcg_at_1 value: 51.23799999999999 - type: ndcg_at_10 value: 40.892 - type: ndcg_at_100 value: 26.951999999999998 - type: ndcg_at_1000 value: 26.474999999999998 - type: ndcg_at_3 value: 46.821 - type: ndcg_at_5 value: 44.333 - type: precision_at_1 value: 53.251000000000005 - type: precision_at_10 value: 30.124000000000002 - type: precision_at_100 value: 3.012 - type: precision_at_1000 value: 0.301 - type: precision_at_3 value: 43.55 - type: precision_at_5 value: 38.266 - type: recall_at_1 value: 6.920999999999999 - type: recall_at_10 value: 20.852 - type: recall_at_100 value: 20.852 - type: recall_at_1000 value: 20.852 - type: recall_at_3 value: 13.628000000000002 - type: recall_at_5 value: 16.273 - task: type: Retrieval dataset: type: nq name: MTEB NQ config: default split: test revision: None metrics: - type: map_at_1 value: 46.827999999999996 - type: map_at_10 value: 63.434000000000005 - type: map_at_100 value: 63.434000000000005 - type: map_at_1000 value: 63.434000000000005 - type: map_at_3 value: 59.794000000000004 - type: map_at_5 value: 62.08 - type: mrr_at_1 value: 52.288999999999994 - type: mrr_at_10 value: 65.95 - type: mrr_at_100 value: 65.95 - type: mrr_at_1000 value: 65.95 - type: mrr_at_3 value: 63.413 - type: mrr_at_5 value: 65.08 - type: ndcg_at_1 value: 52.288999999999994 - type: ndcg_at_10 value: 70.301 - type: ndcg_at_100 value: 70.301 - type: ndcg_at_1000 value: 70.301 - type: ndcg_at_3 value: 63.979 - type: ndcg_at_5 value: 67.582 - type: precision_at_1 value: 52.288999999999994 - type: precision_at_10 value: 10.576 - type: 
precision_at_100 value: 1.058 - type: precision_at_1000 value: 0.106 - type: precision_at_3 value: 28.177000000000003 - type: precision_at_5 value: 19.073 - type: recall_at_1 value: 46.827999999999996 - type: recall_at_10 value: 88.236 - type: recall_at_100 value: 88.236 - type: recall_at_1000 value: 88.236 - type: recall_at_3 value: 72.371 - type: recall_at_5 value: 80.56 - task: type: Retrieval dataset: type: quora name: MTEB QuoraRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 71.652 - type: map_at_10 value: 85.953 - type: map_at_100 value: 85.953 - type: map_at_1000 value: 85.953 - type: map_at_3 value: 83.05399999999999 - type: map_at_5 value: 84.89 - type: mrr_at_1 value: 82.42 - type: mrr_at_10 value: 88.473 - type: mrr_at_100 value: 88.473 - type: mrr_at_1000 value: 88.473 - type: mrr_at_3 value: 87.592 - type: mrr_at_5 value: 88.211 - type: ndcg_at_1 value: 82.44 - type: ndcg_at_10 value: 89.467 - type: ndcg_at_100 value: 89.33 - type: ndcg_at_1000 value: 89.33 - type: ndcg_at_3 value: 86.822 - type: ndcg_at_5 value: 88.307 - type: precision_at_1 value: 82.44 - type: precision_at_10 value: 13.616 - type: precision_at_100 value: 1.362 - type: precision_at_1000 value: 0.136 - type: precision_at_3 value: 38.117000000000004 - type: precision_at_5 value: 25.05 - type: recall_at_1 value: 71.652 - type: recall_at_10 value: 96.224 - type: recall_at_100 value: 96.224 - type: recall_at_1000 value: 96.224 - type: recall_at_3 value: 88.571 - type: recall_at_5 value: 92.812 - task: type: Clustering dataset: type: mteb/reddit-clustering name: MTEB RedditClustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 61.295010338050474 - task: type: Clustering dataset: type: mteb/reddit-clustering-p2p name: MTEB RedditClusteringP2P config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 67.26380819328142 - task: type: Retrieval dataset: type: scidocs name: MTEB SCIDOCS config: default split: test revision: None metrics: - type: map_at_1 value: 5.683 - type: map_at_10 value: 14.924999999999999 - type: map_at_100 value: 17.532 - type: map_at_1000 value: 17.875 - type: map_at_3 value: 10.392 - type: map_at_5 value: 12.592 - type: mrr_at_1 value: 28.000000000000004 - type: mrr_at_10 value: 39.951 - type: mrr_at_100 value: 41.025 - type: mrr_at_1000 value: 41.056 - type: mrr_at_3 value: 36.317 - type: mrr_at_5 value: 38.412 - type: ndcg_at_1 value: 28.000000000000004 - type: ndcg_at_10 value: 24.410999999999998 - type: ndcg_at_100 value: 33.79 - type: ndcg_at_1000 value: 39.035 - type: ndcg_at_3 value: 22.845 - type: ndcg_at_5 value: 20.080000000000002 - type: precision_at_1 value: 28.000000000000004 - type: precision_at_10 value: 12.790000000000001 - type: precision_at_100 value: 2.633 - type: precision_at_1000 value: 0.388 - type: precision_at_3 value: 21.367 - type: precision_at_5 value: 17.7 - type: recall_at_1 value: 5.683 - type: recall_at_10 value: 25.91 - type: recall_at_100 value: 53.443 - type: recall_at_1000 value: 78.73 - type: recall_at_3 value: 13.003 - type: recall_at_5 value: 17.932000000000002 - task: type: STS dataset: type: mteb/sickr-sts name: MTEB SICK-R config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 84.677978681023 - type: cos_sim_spearman value: 83.13093441058189 - type: euclidean_pearson value: 83.35535759341572 - type: euclidean_spearman value: 
83.42583744219611 - type: manhattan_pearson value: 83.2243124045889 - type: manhattan_spearman value: 83.39801618652632 - task: type: STS dataset: type: mteb/sts12-sts name: MTEB STS12 config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 81.68960206569666 - type: cos_sim_spearman value: 77.3368966488535 - type: euclidean_pearson value: 77.62828980560303 - type: euclidean_spearman value: 76.77951481444651 - type: manhattan_pearson value: 77.88637240839041 - type: manhattan_spearman value: 77.22157841466188 - task: type: STS dataset: type: mteb/sts13-sts name: MTEB STS13 config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 84.18745821650724 - type: cos_sim_spearman value: 85.04423285574542 - type: euclidean_pearson value: 85.46604816931023 - type: euclidean_spearman value: 85.5230593932974 - type: manhattan_pearson value: 85.57912805986261 - type: manhattan_spearman value: 85.65955905111873 - task: type: STS dataset: type: mteb/sts14-sts name: MTEB STS14 config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 83.6715333300355 - type: cos_sim_spearman value: 82.9058522514908 - type: euclidean_pearson value: 83.9640357424214 - type: euclidean_spearman value: 83.60415457472637 - type: manhattan_pearson value: 84.05621005853469 - type: manhattan_spearman value: 83.87077724707746 - task: type: STS dataset: type: mteb/sts15-sts name: MTEB STS15 config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 87.82422928098886 - type: cos_sim_spearman value: 88.12660311894628 - type: euclidean_pearson value: 87.50974805056555 - type: euclidean_spearman value: 87.91957275596677 - type: manhattan_pearson value: 87.74119404878883 - type: manhattan_spearman value: 88.2808922165719 - task: type: STS dataset: type: mteb/sts16-sts name: MTEB STS16 config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 84.80605838552093 - type: cos_sim_spearman value: 86.24123388765678 - type: euclidean_pearson value: 85.32648347339814 - type: euclidean_spearman value: 85.60046671950158 - type: manhattan_pearson value: 85.53800168487811 - type: manhattan_spearman value: 85.89542420480763 - task: type: STS dataset: type: mteb/sts17-crosslingual-sts name: MTEB STS17 (en-en) config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 89.87540978988132 - type: cos_sim_spearman value: 90.12715295099461 - type: euclidean_pearson value: 91.61085993525275 - type: euclidean_spearman value: 91.31835942311758 - type: manhattan_pearson value: 91.57500202032934 - type: manhattan_spearman value: 91.1790925526635 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (en) config: en split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_pearson value: 69.87136205329556 - type: cos_sim_spearman value: 68.6253154635078 - type: euclidean_pearson value: 68.91536015034222 - type: euclidean_spearman value: 67.63744649352542 - type: manhattan_pearson value: 69.2000713045275 - type: manhattan_spearman value: 68.16002901587316 - task: type: STS dataset: type: mteb/stsbenchmark-sts name: MTEB STSBenchmark config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 
85.21849551039082 - type: cos_sim_spearman value: 85.6392959372461 - type: euclidean_pearson value: 85.92050852609488 - type: euclidean_spearman value: 85.97205649009734 - type: manhattan_pearson value: 86.1031154802254 - type: manhattan_spearman value: 86.26791155517466 - task: type: Reranking dataset: type: mteb/scidocs-reranking name: MTEB SciDocsRR config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 86.83953958636627 - type: mrr value: 96.71167612344082 - task: type: Retrieval dataset: type: scifact name: MTEB SciFact config: default split: test revision: None metrics: - type: map_at_1 value: 64.994 - type: map_at_10 value: 74.763 - type: map_at_100 value: 75.127 - type: map_at_1000 value: 75.143 - type: map_at_3 value: 71.824 - type: map_at_5 value: 73.71 - type: mrr_at_1 value: 68.333 - type: mrr_at_10 value: 75.749 - type: mrr_at_100 value: 75.922 - type: mrr_at_1000 value: 75.938 - type: mrr_at_3 value: 73.556 - type: mrr_at_5 value: 74.739 - type: ndcg_at_1 value: 68.333 - type: ndcg_at_10 value: 79.174 - type: ndcg_at_100 value: 80.41 - type: ndcg_at_1000 value: 80.804 - type: ndcg_at_3 value: 74.361 - type: ndcg_at_5 value: 76.861 - type: precision_at_1 value: 68.333 - type: precision_at_10 value: 10.333 - type: precision_at_100 value: 1.0999999999999999 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 28.778 - type: precision_at_5 value: 19.067 - type: recall_at_1 value: 64.994 - type: recall_at_10 value: 91.822 - type: recall_at_100 value: 97.0 - type: recall_at_1000 value: 100.0 - type: recall_at_3 value: 78.878 - type: recall_at_5 value: 85.172 - task: type: PairClassification dataset: type: mteb/sprintduplicatequestions-pairclassification name: MTEB SprintDuplicateQuestions config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.72079207920792 - type: cos_sim_ap value: 93.00265215525152 - type: cos_sim_f1 value: 85.06596306068602 - type: cos_sim_precision value: 90.05586592178771 - type: cos_sim_recall value: 80.60000000000001 - type: dot_accuracy value: 99.66039603960397 - type: dot_ap value: 91.22371407479089 - type: dot_f1 value: 82.34693877551021 - type: dot_precision value: 84.0625 - type: dot_recall value: 80.7 - type: euclidean_accuracy value: 99.71881188118812 - type: euclidean_ap value: 92.88449963304728 - type: euclidean_f1 value: 85.19480519480518 - type: euclidean_precision value: 88.64864864864866 - type: euclidean_recall value: 82.0 - type: manhattan_accuracy value: 99.73267326732673 - type: manhattan_ap value: 93.23055393056883 - type: manhattan_f1 value: 85.88957055214725 - type: manhattan_precision value: 87.86610878661088 - type: manhattan_recall value: 84.0 - type: max_accuracy value: 99.73267326732673 - type: max_ap value: 93.23055393056883 - type: max_f1 value: 85.88957055214725 - task: type: Clustering dataset: type: mteb/stackexchange-clustering name: MTEB StackExchangeClustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 77.3305735900358 - task: type: Clustering dataset: type: mteb/stackexchange-clustering-p2p name: MTEB StackExchangeClusteringP2P config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 41.32967136540674 - task: type: Reranking dataset: type: mteb/stackoverflowdupquestions-reranking name: MTEB StackOverflowDupQuestions config: default split: test revision: 
e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 55.95514866379359 - type: mrr value: 56.95423245055598 - task: type: Summarization dataset: type: mteb/summeval name: MTEB SummEval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 30.783007208997144 - type: cos_sim_spearman value: 30.373444721540533 - type: dot_pearson value: 29.210604111143905 - type: dot_spearman value: 29.98809758085659 - task: type: Retrieval dataset: type: trec-covid name: MTEB TRECCOVID config: default split: test revision: None metrics: - type: map_at_1 value: 0.234 - type: map_at_10 value: 1.894 - type: map_at_100 value: 1.894 - type: map_at_1000 value: 1.894 - type: map_at_3 value: 0.636 - type: map_at_5 value: 1.0 - type: mrr_at_1 value: 88.0 - type: mrr_at_10 value: 93.667 - type: mrr_at_100 value: 93.667 - type: mrr_at_1000 value: 93.667 - type: mrr_at_3 value: 93.667 - type: mrr_at_5 value: 93.667 - type: ndcg_at_1 value: 85.0 - type: ndcg_at_10 value: 74.798 - type: ndcg_at_100 value: 16.462 - type: ndcg_at_1000 value: 7.0889999999999995 - type: ndcg_at_3 value: 80.754 - type: ndcg_at_5 value: 77.319 - type: precision_at_1 value: 88.0 - type: precision_at_10 value: 78.0 - type: precision_at_100 value: 7.8 - type: precision_at_1000 value: 0.7799999999999999 - type: precision_at_3 value: 83.333 - type: precision_at_5 value: 80.80000000000001 - type: recall_at_1 value: 0.234 - type: recall_at_10 value: 2.093 - type: recall_at_100 value: 2.093 - type: recall_at_1000 value: 2.093 - type: recall_at_3 value: 0.662 - type: recall_at_5 value: 1.0739999999999998 - task: type: Retrieval dataset: type: webis-touche2020 name: MTEB Touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 2.703 - type: map_at_10 value: 10.866000000000001 - type: map_at_100 value: 10.866000000000001 - type: map_at_1000 value: 10.866000000000001 - type: map_at_3 value: 5.909 - type: map_at_5 value: 7.35 - type: mrr_at_1 value: 36.735 - type: mrr_at_10 value: 53.583000000000006 - type: mrr_at_100 value: 53.583000000000006 - type: mrr_at_1000 value: 53.583000000000006 - type: mrr_at_3 value: 49.32 - type: mrr_at_5 value: 51.769 - type: ndcg_at_1 value: 34.694 - type: ndcg_at_10 value: 27.926000000000002 - type: ndcg_at_100 value: 22.701 - type: ndcg_at_1000 value: 22.701 - type: ndcg_at_3 value: 32.073 - type: ndcg_at_5 value: 28.327999999999996 - type: precision_at_1 value: 36.735 - type: precision_at_10 value: 24.694 - type: precision_at_100 value: 2.469 - type: precision_at_1000 value: 0.247 - type: precision_at_3 value: 31.973000000000003 - type: precision_at_5 value: 26.939 - type: recall_at_1 value: 2.703 - type: recall_at_10 value: 17.702 - type: recall_at_100 value: 17.702 - type: recall_at_1000 value: 17.702 - type: recall_at_3 value: 7.208 - type: recall_at_5 value: 9.748999999999999 - task: type: Classification dataset: type: mteb/toxic_conversations_50k name: MTEB ToxicConversationsClassification config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 70.79960000000001 - type: ap value: 15.467565415565815 - type: f1 value: 55.28639823443618 - task: type: Classification dataset: type: mteb/tweet_sentiment_extraction name: MTEB TweetSentimentExtractionClassification config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 64.7792869269949 - type: f1 value: 65.08597154774318 - task: type: Clustering dataset: 
type: mteb/twentynewsgroups-clustering name: MTEB TwentyNewsgroupsClustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 55.70352297774293 - task: type: PairClassification dataset: type: mteb/twittersemeval2015-pairclassification name: MTEB TwitterSemEval2015 config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 88.27561542588067 - type: cos_sim_ap value: 81.08262141256193 - type: cos_sim_f1 value: 73.82341501361338 - type: cos_sim_precision value: 72.5720112159062 - type: cos_sim_recall value: 75.11873350923483 - type: dot_accuracy value: 86.66030875603504 - type: dot_ap value: 76.6052349228621 - type: dot_f1 value: 70.13897280966768 - type: dot_precision value: 64.70457079152732 - type: dot_recall value: 76.56992084432717 - type: euclidean_accuracy value: 88.37098408535495 - type: euclidean_ap value: 81.12515230092113 - type: euclidean_f1 value: 74.10338225909379 - type: euclidean_precision value: 71.76761433868974 - type: euclidean_recall value: 76.59630606860158 - type: manhattan_accuracy value: 88.34118137926924 - type: manhattan_ap value: 80.95751834536561 - type: manhattan_f1 value: 73.9119496855346 - type: manhattan_precision value: 70.625 - type: manhattan_recall value: 77.5197889182058 - type: max_accuracy value: 88.37098408535495 - type: max_ap value: 81.12515230092113 - type: max_f1 value: 74.10338225909379 - task: type: PairClassification dataset: type: mteb/twitterurlcorpus-pairclassification name: MTEB TwitterURLCorpus config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 89.79896767182831 - type: cos_sim_ap value: 87.40071784061065 - type: cos_sim_f1 value: 79.87753144712087 - type: cos_sim_precision value: 76.67304015296367 - type: cos_sim_recall value: 83.3615645210964 - type: dot_accuracy value: 88.95486474948578 - type: dot_ap value: 86.00227979119943 - type: dot_f1 value: 78.54601474525914 - type: dot_precision value: 75.00525394045535 - type: dot_recall value: 82.43763473975977 - type: euclidean_accuracy value: 89.7892653393876 - type: euclidean_ap value: 87.42174706480819 - type: euclidean_f1 value: 80.07283321194465 - type: euclidean_precision value: 75.96738529574351 - type: euclidean_recall value: 84.6473668001232 - type: manhattan_accuracy value: 89.8474793340319 - type: manhattan_ap value: 87.47814292587448 - type: manhattan_f1 value: 80.15461150280949 - type: manhattan_precision value: 74.88798234468 - type: manhattan_recall value: 86.21804742839544 - type: max_accuracy value: 89.8474793340319 - type: max_ap value: 87.47814292587448 - type: max_f1 value: 80.15461150280949 --- # Model Summary > GritLM is a generative representational instruction tuned language model. It unifies text representation (embedding) and text generation into a single model achieving state-of-the-art performance on both types of tasks. 
- **Repository:** [ContextualAI/gritlm](https://github.com/ContextualAI/gritlm) - **Paper:** https://arxiv.org/abs/2402.09906 - **Logs:** https://wandb.ai/muennighoff/gritlm/runs/0uui712t/overview - **Script:** https://github.com/ContextualAI/gritlm/blob/main/scripts/training/train_gritlm_7b.sh | Model | Description | |-------|-------------| | [GritLM 7B](https://hf.co/GritLM/GritLM-7B) | Mistral 7B finetuned using GRIT | | [GritLM 8x7B](https://hf.co/GritLM/GritLM-8x7B) | Mixtral 8x7B finetuned using GRIT | # Use Model usage is documented [here](https://github.com/ContextualAI/gritlm?tab=readme-ov-file#inference); a minimal sketch follows the citation below. # Citation ```bibtex @misc{muennighoff2024generative, title={Generative Representational Instruction Tuning}, author={Niklas Muennighoff and Hongjin Su and Liang Wang and Nan Yang and Furu Wei and Tao Yu and Amanpreet Singh and Douwe Kiela}, year={2024}, eprint={2402.09906}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
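The snippet below is a minimal sketch of the model's two modes (embedding and generation), adapted from the README linked above; the `gritlm` package's `GritLM` wrapper, its `encode`/`generate` methods, and the `<|user|>`/`<|embed|>` prompt format are taken from that documentation rather than verified here.

```python
# Minimal sketch of GritLM's dual embedding + generation usage,
# following the repository README (pip install gritlm).
from gritlm import GritLM

# Loads both capabilities; per the README, pass mode="embedding" to skip the LM head.
model = GritLM("GritLM/GritLM-7B", torch_dtype="auto")

# --- Embedding: queries carry an instruction, retrieval documents do not ---
def gritlm_instruction(instruction: str) -> str:
    return "<|user|>\n" + instruction + "\n<|embed|>\n" if instruction else "<|embed|>\n"

queries = ["Which planet is known as the Red Planet?"]          # illustrative text
documents = ["Mars appears red because of iron oxide on its surface."]
q_rep = model.encode(queries, instruction=gritlm_instruction("Retrieve the answer passage"))
d_rep = model.encode(documents, instruction=gritlm_instruction(""))

# --- Generation: standard chat-template decoding ---
messages = [{"role": "user", "content": "Summarize GRIT in one sentence."}]
encoded = model.tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(encoded, max_new_tokens=64, do_sample=False)
print(model.tokenizer.batch_decode(out)[0])
```

Note that this repository hosts GGUF conversions, so the sketch applies to the original safetensors checkpoint; the GGUF files themselves would instead be loaded with a llama.cpp-compatible runtime.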
{}
RichardErkhov/GritLM_-_GritLM-7B-gguf
null
[ "gguf", "region:us" ]
null
2024-05-03T17:18:16+00:00
multiple-choice
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # fine_tuned_copa_bert This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.0295 - Accuracy: 0.54 - F1: 0.5407 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 400 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.7066 | 1.0 | 50 | 0.6907 | 0.54 | 0.5411 | | 0.6897 | 2.0 | 100 | 0.6880 | 0.57 | 0.5709 | | 0.6001 | 3.0 | 150 | 0.7025 | 0.55 | 0.5511 | | 0.4629 | 4.0 | 200 | 0.7810 | 0.53 | 0.5310 | | 0.3402 | 5.0 | 250 | 1.0003 | 0.55 | 0.5511 | | 0.2299 | 6.0 | 300 | 1.0220 | 0.55 | 0.5511 | | 0.1874 | 7.0 | 350 | 0.9956 | 0.56 | 0.5611 | | 0.1133 | 8.0 | 400 | 1.0295 | 0.54 | 0.5407 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.3.0 - Datasets 2.19.0 - Tokenizers 0.19.1
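Since the card's usage section is empty, here is a hypothetical inference sketch using the standard 🤗 Transformers multiple-choice API; the COPA-style premise and candidate answers are made up for illustration.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

model_id = "lenatr99/fine_tuned_copa_bert"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMultipleChoice.from_pretrained(model_id)

# COPA pairs one premise with two candidate causes/effects.
premise = "The man broke his toe. What was the cause?"
choices = ["He got a hole in his sock.", "He dropped a hammer on his foot."]

# Encode each (premise, choice) pair, then batch to shape (1, num_choices, seq_len).
enc = tokenizer([premise] * len(choices), choices, return_tensors="pt", padding=True)
enc = {k: v.unsqueeze(0) for k, v in enc.items()}

with torch.no_grad():
    logits = model(**enc).logits  # shape: (1, num_choices)
print("Predicted:", choices[logits.argmax(dim=-1).item()])
```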
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "base_model": "google-bert/bert-base-uncased", "model-index": [{"name": "fine_tuned_copa_bert", "results": []}]}
lenatr99/fine_tuned_copa_bert
null
[ "transformers", "safetensors", "bert", "multiple-choice", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-03T17:18:56+00:00
null
null
{}
ImSakushi/nas2023
null
[ "region:us" ]
null
2024-05-03T17:20:25+00:00
null
null
{}
nabeeltirmazi/neelamx_LoRA
null
[ "region:us" ]
null
2024-05-03T17:20:44+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
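The get-started section above is empty; based only on the repository tags (`llama`, `text-generation`, `conversational`), a generic loading sketch might look like the following. The chat template and dtype choices are assumptions, not documented behavior of this checkpoint.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "golf2248/r2igr19"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Assumes the tokenizer ships a chat template, as the "conversational" tag suggests.
messages = [{"role": "user", "content": "Hello, who are you?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(inputs, max_new_tokens=64)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```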
{"library_name": "transformers", "tags": []}
golf2248/r2igr19
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-03T17:20:47+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # cosmosDPO_CodeTest This model is a fine-tuned version of [ytu-ce-cosmos/turkish-gpt2-large-750m-instruct-v0.1](https://huggingface.co/ytu-ce-cosmos/turkish-gpt2-large-750m-instruct-v0.1) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5271 - Rewards/chosen: -2.6242 - Rewards/rejected: -6.3552 - Rewards/accuracies: 0.2667 - Rewards/margins: 3.7309 - Logps/rejected: -749.125 - Logps/chosen: -350.9360 - Logits/rejected: -5.2606 - Logits/chosen: -4.5085 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.5159 | 1.3072 | 100 | 0.5241 | -0.9182 | -3.2061 | 0.2676 | 2.2879 | -434.2115 | -180.3287 | -4.0572 | -3.5729 | | 0.5227 | 2.6144 | 200 | 0.5217 | -2.1076 | -5.3791 | 0.2695 | 3.2715 | -651.5153 | -299.2687 | -4.8098 | -4.1931 | | 0.4937 | 3.9216 | 300 | 0.5271 | -2.6242 | -6.3552 | 0.2667 | 3.7309 | -749.125 | -350.9360 | -5.2606 | -4.5085 | ### Framework versions - PEFT 0.10.0 - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
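Because this repository is a PEFT (LoRA-style) adapter on top of the listed base model rather than a full checkpoint, inference requires loading the base weights first and then attaching the adapter. A minimal sketch follows; the Turkish prompt is chosen purely for illustration, since the base model is a Turkish instruct GPT-2.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

base_id = "ytu-ce-cosmos/turkish-gpt2-large-750m-instruct-v0.1"
adapter_id = "meguzn/cosmosDPO_v0.1"  # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the DPO-trained adapter
model.eval()

prompt = "Türkiye'nin başkenti neresidir?"  # "What is the capital of Turkey?"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```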
{"license": "mit", "library_name": "peft", "tags": ["trl", "dpo", "generated_from_trainer"], "base_model": "ytu-ce-cosmos/turkish-gpt2-large-750m-instruct-v0.1", "model-index": [{"name": "cosmosDPO_v0.1", "results": []}]}
meguzn/cosmosDPO_v0.1
null
[ "peft", "tensorboard", "safetensors", "trl", "dpo", "generated_from_trainer", "base_model:ytu-ce-cosmos/turkish-gpt2-large-750m-instruct-v0.1", "license:mit", "region:us" ]
null
2024-05-03T17:21:46+00:00