modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---|
dogssss/Qwen-Qwen1.5-1.8B-1727227989 | dogssss | "2024-09-25T01:33:13Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen1.5-1.8B",
"base_model:adapter:Qwen/Qwen1.5-1.8B",
"region:us"
] | null | "2024-09-25T01:33:09Z" | ---
base_model: Qwen/Qwen1.5-1.8B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
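The card leaves this section empty; as a placeholder, here is a minimal sketch of attaching the adapter listed in the metadata with PEFT. The helper name `load_adapter` is our assumption, not part of the card.

```python
def load_adapter(base_id: str = "Qwen/Qwen1.5-1.8B",
                 adapter_id: str = "dogssss/Qwen-Qwen1.5-1.8B-1727227989"):
    """Load the base model and attach this repo's LoRA adapter (downloads weights)."""
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel  # deferred imports keep the module importable offline
    tokenizer = AutoTokenizer.from_pretrained(base_id)
    base = AutoModelForCausalLM.from_pretrained(base_id)
    model = PeftModel.from_pretrained(base, adapter_id)
    return tokenizer, model
```

After loading, generation works as with any causal LM: `tokenizer, model = load_adapter()`, then `model.generate(**tokenizer("Hello", return_tensors="pt"))`.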
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
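The calculator linked above multiplies hardware power draw, runtime, and the grid's carbon intensity. A back-of-the-envelope version of the same arithmetic, with every number below a hypothetical placeholder rather than a measurement for this model:

```python
def estimate_co2_grams(power_watts: float, hours: float,
                       grid_gco2_per_kwh: float, pue: float = 1.0) -> float:
    """CO2eq (grams) = energy drawn (kWh, scaled by data-center PUE) x grid intensity."""
    energy_kwh = (power_watts / 1000.0) * hours * pue
    return energy_kwh * grid_gco2_per_kwh

# Hypothetical: one 300 W GPU for 10 hours on a 400 gCO2eq/kWh grid.
print(estimate_co2_grams(300, 10, 400))  # 1200.0 g CO2eq
```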
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.12.0 |
Dhurkesh1/tomatoDiseaseClassifier | Dhurkesh1 | "2024-09-25T01:33:24Z" | 0 | 0 | null | [
"tensorboard",
"safetensors",
"vit",
"image-classification",
"pytorch",
"huggingpics",
"model-index",
"region:us"
] | image-classification | "2024-09-25T01:33:11Z" | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: tomatoDiseaseClassifier
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9908371567726135
---
# tomatoDiseaseClassifier
Autogenerated by HuggingPics🤗🖼️
This model classifies tomato leaf disease images. It was fine-tuned using PyTorch Lightning and Hugging Face Transformers.
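A minimal inference sketch, assuming the standard `transformers` image-classification pipeline (the helper names `top_label` and `classify` are ours, not from the card):

```python
def top_label(scores):
    """Pick the highest-scoring label from a pipeline's list of {label, score} dicts."""
    return max(scores, key=lambda s: s["score"])["label"]

def classify(image_path: str) -> str:
    """Run the hosted checkpoint on one image (downloads weights on first use)."""
    from transformers import pipeline  # deferred so top_label stays import-free
    clf = pipeline("image-classification", model="Dhurkesh1/tomatoDiseaseClassifier")
    return top_label(clf(image_path))
```

`classify("leaf.jpg")` would return one of the ten class names shown below.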
## Example Images
#### Tomato_Bacterial_spot
![Tomato_Bacterial_spot](images/Tomato_Bacterial_spot.jpg)
#### Tomato_Early_blight
![Tomato_Early_blight](images/Tomato_Early_blight.jpg)
#### Tomato_Late_blight
![Tomato_Late_blight](images/Tomato_Late_blight.jpg)
#### Tomato_Leaf_Mold
![Tomato_Leaf_Mold](images/Tomato_Leaf_Mold.jpg)
#### Tomato_Septoria_leaf_spot
![Tomato_Septoria_leaf_spot](images/Tomato_Septoria_leaf_spot.jpg)
#### Tomato_Spider_mites_Two_spotted_spider_mite
![Tomato_Spider_mites_Two_spotted_spider_mite](images/Tomato_Spider_mites_Two_spotted_spider_mite.jpg)
#### Tomato__Target_Spot
![Tomato__Target_Spot](images/Tomato__Target_Spot.jpg)
#### Tomato__Tomato_YellowLeaf__Curl_Virus
![Tomato__Tomato_YellowLeaf__Curl_Virus](images/Tomato__Tomato_YellowLeaf__Curl_Virus.jpg)
#### Tomato__Tomato_mosaic_virus
![Tomato__Tomato_mosaic_virus](images/Tomato__Tomato_mosaic_virus.jpg)
#### Tomato_healthy
![Tomato_healthy](images/Tomato_healthy.jpg) |
agamgoy/lora-task-1-group-Entertainment | agamgoy | "2024-09-25T01:33:30Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B-bnb-4bit",
"base_model:finetune:unsloth/Meta-Llama-3.1-8B-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-09-25T01:33:13Z" | ---
base_model: unsloth/Meta-Llama-3.1-8B-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** agamgoy
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Meta-Llama-3.1-8B-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
SALUTEASD/Qwen-Qwen1.5-0.5B-1727227998 | SALUTEASD | "2024-09-25T01:33:33Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen1.5-0.5B",
"base_model:adapter:Qwen/Qwen1.5-0.5B",
"region:us"
] | null | "2024-09-25T01:33:19Z" | ---
base_model: Qwen/Qwen1.5-0.5B
library_name: peft
---
# Model Card for Model ID
### Framework versions
- PEFT 0.12.0 |
xueyj/Qwen-Qwen1.5-0.5B-1727228010 | xueyj | "2024-09-25T01:33:47Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen1.5-0.5B",
"base_model:adapter:Qwen/Qwen1.5-0.5B",
"region:us"
] | null | "2024-09-25T01:33:30Z" | ---
base_model: Qwen/Qwen1.5-0.5B
library_name: peft
---
# Model Card for Model ID
### Framework versions
- PEFT 0.12.0 |
Kudod/roberta-large-ner-ghtk-cs-add-6label-10-new-data-3090-25Sep-1 | Kudod | "2024-09-25T01:33:38Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-09-25T01:33:33Z" | ---
tags:
- generated_from_trainer
model-index:
- name: roberta-large-ner-ghtk-cs-add-6label-10-new-data-3090-25Sep-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-ner-ghtk-cs-add-6label-10-new-data-3090-25Sep-1
This model was trained from scratch on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3247
- Tk: {'precision': 0.6533333333333333, 'recall': 0.4224137931034483, 'f1': 0.513089005235602, 'number': 116}
- A: {'precision': 0.9223946784922394, 'recall': 0.9651972157772621, 'f1': 0.9433106575963719, 'number': 431}
- Gày: {'precision': 0.7272727272727273, 'recall': 0.9411764705882353, 'f1': 0.8205128205128205, 'number': 34}
- Gày trừu tượng: {'precision': 0.9056603773584906, 'recall': 0.8852459016393442, 'f1': 0.8953367875647669, 'number': 488}
- Gân hàng: {'precision': 0.8717948717948718, 'recall': 0.918918918918919, 'f1': 0.8947368421052632, 'number': 37}
- Iền: {'precision': 0.7115384615384616, 'recall': 0.9487179487179487, 'f1': 0.8131868131868132, 'number': 39}
- Iờ: {'precision': 0.6458333333333334, 'recall': 0.8157894736842105, 'f1': 0.7209302325581395, 'number': 38}
- Ã đơn: {'precision': 0.855, 'recall': 0.8423645320197044, 'f1': 0.8486352357320099, 'number': 203}
- Đt: {'precision': 0.9220917822838848, 'recall': 0.9840546697038725, 'f1': 0.9520661157024795, 'number': 878}
- Đt trừu tượng: {'precision': 0.7786259541984732, 'recall': 0.8755364806866953, 'f1': 0.8242424242424242, 'number': 233}
- Ịa chỉ cụ thể: {'precision': 0.4375, 'recall': 0.4883720930232558, 'f1': 0.4615384615384615, 'number': 43}
- Ịa chỉ trừu tượng: {'precision': 0.6933333333333334, 'recall': 0.6842105263157895, 'f1': 0.6887417218543047, 'number': 76}
- Overall Precision: 0.8652
- Overall Recall: 0.8956
- Overall F1: 0.8802
- Overall Accuracy: 0.9466
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
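For reproduction, the list above maps directly onto `transformers.TrainingArguments` keywords; a sketch of the equivalent configuration, with argument names following the Trainer API and values taken verbatim from the list:

```python
# Hyperparameters from the card, keyed by TrainingArguments argument names.
training_kwargs = dict(
    learning_rate=2.5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```

These would be passed as `TrainingArguments(output_dir=..., **training_kwargs)` when rebuilding the run.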
### Training results
| Training Loss | Epoch | Step | Validation Loss | Tk | A | Gày | Gày trừu tượng | Gân hàng | Iền | Iờ | Ã đơn | Đt | Đt trừu tượng | Ịa chỉ cụ thể | Ịa chỉ trừu tượng | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| No log | 1.0 | 367 | 0.1996 | {'precision': 0.4666666666666667, 'recall': 0.2413793103448276, 'f1': 0.3181818181818182, 'number': 116} | {'precision': 0.9269911504424779, 'recall': 0.9721577726218097, 'f1': 0.9490373725934316, 'number': 431} | {'precision': 0.7435897435897436, 'recall': 0.8529411764705882, 'f1': 0.7945205479452054, 'number': 34} | {'precision': 0.890495867768595, 'recall': 0.8831967213114754, 'f1': 0.8868312757201646, 'number': 488} | {'precision': 0.8787878787878788, 'recall': 0.7837837837837838, 'f1': 0.8285714285714285, 'number': 37} | {'precision': 0.7560975609756098, 'recall': 0.7948717948717948, 'f1': 0.7749999999999999, 'number': 39} | {'precision': 0.5961538461538461, 'recall': 0.8157894736842105, 'f1': 0.6888888888888889, 'number': 38} | {'precision': 0.6548042704626335, 'recall': 0.9064039408866995, 'f1': 0.7603305785123967, 'number': 203} | {'precision': 0.9236479321314952, 'recall': 0.9920273348519362, 'f1': 0.956617243272927, 'number': 878} | {'precision': 0.753731343283582, 'recall': 0.8669527896995708, 'f1': 0.8063872255489022, 'number': 233} | {'precision': 0.2727272727272727, 'recall': 0.3488372093023256, 'f1': 0.30612244897959184, 'number': 43} | {'precision': 0.7678571428571429, 'recall': 0.5657894736842105, 'f1': 0.6515151515151516, 'number': 76} | 0.8368 | 0.8842 | 0.8599 | 0.9315 |
| 0.2043 | 2.0 | 734 | 0.2190 | {'precision': 0.5594405594405595, 'recall': 0.6896551724137931, 'f1': 0.6177606177606177, 'number': 116} | {'precision': 0.9027484143763214, 'recall': 0.9907192575406032, 'f1': 0.9446902654867256, 'number': 431} | {'precision': 0.7631578947368421, 'recall': 0.8529411764705882, 'f1': 0.8055555555555555, 'number': 34} | {'precision': 0.8893360160965795, 'recall': 0.9057377049180327, 'f1': 0.8974619289340101, 'number': 488} | {'precision': 0.8571428571428571, 'recall': 0.8108108108108109, 'f1': 0.8333333333333334, 'number': 37} | {'precision': 0.7083333333333334, 'recall': 0.8717948717948718, 'f1': 0.7816091954022988, 'number': 39} | {'precision': 0.5714285714285714, 'recall': 0.9473684210526315, 'f1': 0.7128712871287128, 'number': 38} | {'precision': 0.7355371900826446, 'recall': 0.8768472906403941, 'f1': 0.7999999999999999, 'number': 203} | {'precision': 0.9700115340253749, 'recall': 0.9578587699316629, 'f1': 0.9638968481375358, 'number': 878} | {'precision': 0.8292682926829268, 'recall': 0.8755364806866953, 'f1': 0.8517745302713987, 'number': 233} | {'precision': 0.2714285714285714, 'recall': 0.4418604651162791, 'f1': 0.33628318584070793, 'number': 43} | {'precision': 0.7936507936507936, 'recall': 0.6578947368421053, 'f1': 0.7194244604316548, 'number': 76} | 0.8510 | 0.9060 | 0.8776 | 0.9404 |
| 0.1101 | 3.0 | 1101 | 0.2292 | {'precision': 0.608, 'recall': 0.6551724137931034, 'f1': 0.6307053941908715, 'number': 116} | {'precision': 0.9038461538461539, 'recall': 0.9814385150812065, 'f1': 0.9410456062291435, 'number': 431} | {'precision': 0.7209302325581395, 'recall': 0.9117647058823529, 'f1': 0.8051948051948051, 'number': 34} | {'precision': 0.9119496855345912, 'recall': 0.8913934426229508, 'f1': 0.9015544041450777, 'number': 488} | {'precision': 0.825, 'recall': 0.8918918918918919, 'f1': 0.8571428571428571, 'number': 37} | {'precision': 0.75, 'recall': 0.9230769230769231, 'f1': 0.8275862068965517, 'number': 39} | {'precision': 0.5147058823529411, 'recall': 0.9210526315789473, 'f1': 0.660377358490566, 'number': 38} | {'precision': 0.7295081967213115, 'recall': 0.8768472906403941, 'f1': 0.796420581655481, 'number': 203} | {'precision': 0.8975409836065574, 'recall': 0.9977220956719818, 'f1': 0.9449838187702265, 'number': 878} | {'precision': 0.8064516129032258, 'recall': 0.8583690987124464, 'f1': 0.8316008316008315, 'number': 233} | {'precision': 0.3088235294117647, 'recall': 0.4883720930232558, 'f1': 0.3783783783783784, 'number': 43} | {'precision': 0.6865671641791045, 'recall': 0.6052631578947368, 'f1': 0.6433566433566432, 'number': 76} | 0.8322 | 0.9136 | 0.8710 | 0.9398 |
| 0.1101 | 4.0 | 1468 | 0.2232 | {'precision': 0.5283018867924528, 'recall': 0.4827586206896552, 'f1': 0.5045045045045045, 'number': 116} | {'precision': 0.9440559440559441, 'recall': 0.9396751740139211, 'f1': 0.9418604651162792, 'number': 431} | {'precision': 0.717391304347826, 'recall': 0.9705882352941176, 'f1': 0.825, 'number': 34} | {'precision': 0.9216101694915254, 'recall': 0.8913934426229508, 'f1': 0.90625, 'number': 488} | {'precision': 0.8048780487804879, 'recall': 0.8918918918918919, 'f1': 0.8461538461538461, 'number': 37} | {'precision': 0.6551724137931034, 'recall': 0.9743589743589743, 'f1': 0.7835051546391754, 'number': 39} | {'precision': 0.696969696969697, 'recall': 0.6052631578947368, 'f1': 0.6478873239436619, 'number': 38} | {'precision': 0.8, 'recall': 0.8472906403940886, 'f1': 0.8229665071770335, 'number': 203} | {'precision': 0.9160545645330536, 'recall': 0.9943052391799544, 'f1': 0.953577280174768, 'number': 878} | {'precision': 0.7924528301886793, 'recall': 0.9012875536480687, 'f1': 0.8433734939759037, 'number': 233} | {'precision': 0.46, 'recall': 0.5348837209302325, 'f1': 0.4946236559139785, 'number': 43} | {'precision': 0.6071428571428571, 'recall': 0.6710526315789473, 'f1': 0.6374999999999998, 'number': 76} | 0.8547 | 0.8991 | 0.8763 | 0.9413 |
| 0.0712 | 5.0 | 1835 | 0.2456 | {'precision': 0.5662650602409639, 'recall': 0.4051724137931034, 'f1': 0.4723618090452261, 'number': 116} | {'precision': 0.9170305676855895, 'recall': 0.974477958236659, 'f1': 0.9448818897637796, 'number': 431} | {'precision': 0.8048780487804879, 'recall': 0.9705882352941176, 'f1': 0.8800000000000001, 'number': 34} | {'precision': 0.8846153846153846, 'recall': 0.8954918032786885, 'f1': 0.890020366598778, 'number': 488} | {'precision': 0.8888888888888888, 'recall': 0.8648648648648649, 'f1': 0.8767123287671232, 'number': 37} | {'precision': 0.72, 'recall': 0.9230769230769231, 'f1': 0.8089887640449438, 'number': 39} | {'precision': 0.8260869565217391, 'recall': 0.5, 'f1': 0.6229508196721311, 'number': 38} | {'precision': 0.8586387434554974, 'recall': 0.8078817733990148, 'f1': 0.83248730964467, 'number': 203} | {'precision': 0.9177350427350427, 'recall': 0.9783599088838268, 'f1': 0.9470782800441013, 'number': 878} | {'precision': 0.8362068965517241, 'recall': 0.8326180257510729, 'f1': 0.8344086021505375, 'number': 233} | {'precision': 0.475, 'recall': 0.4418604651162791, 'f1': 0.4578313253012048, 'number': 43} | {'precision': 0.6842105263157895, 'recall': 0.6842105263157895, 'f1': 0.6842105263157895, 'number': 76} | 0.8692 | 0.8838 | 0.8764 | 0.9412 |
| 0.0416 | 6.0 | 2202 | 0.2494 | {'precision': 0.7066666666666667, 'recall': 0.45689655172413796, 'f1': 0.5549738219895288, 'number': 116} | {'precision': 0.9122055674518201, 'recall': 0.988399071925754, 'f1': 0.9487750556792873, 'number': 431} | {'precision': 0.6875, 'recall': 0.9705882352941176, 'f1': 0.8048780487804877, 'number': 34} | {'precision': 0.8888888888888888, 'recall': 0.9016393442622951, 'f1': 0.8952187182095626, 'number': 488} | {'precision': 0.8648648648648649, 'recall': 0.8648648648648649, 'f1': 0.8648648648648649, 'number': 37} | {'precision': 0.7450980392156863, 'recall': 0.9743589743589743, 'f1': 0.8444444444444443, 'number': 39} | {'precision': 0.6206896551724138, 'recall': 0.9473684210526315, 'f1': 0.75, 'number': 38} | {'precision': 0.8254716981132075, 'recall': 0.8620689655172413, 'f1': 0.8433734939759036, 'number': 203} | {'precision': 0.9202586206896551, 'recall': 0.9726651480637813, 'f1': 0.945736434108527, 'number': 878} | {'precision': 0.7918367346938775, 'recall': 0.8326180257510729, 'f1': 0.811715481171548, 'number': 233} | {'precision': 0.4117647058823529, 'recall': 0.4883720930232558, 'f1': 0.44680851063829785, 'number': 43} | {'precision': 0.6125, 'recall': 0.6447368421052632, 'f1': 0.6282051282051283, 'number': 76} | 0.8558 | 0.8987 | 0.8767 | 0.9453 |
| 0.0285 | 7.0 | 2569 | 0.2842 | {'precision': 0.6486486486486487, 'recall': 0.41379310344827586, 'f1': 0.5052631578947369, 'number': 116} | {'precision': 0.9227373068432672, 'recall': 0.9698375870069605, 'f1': 0.9457013574660633, 'number': 431} | {'precision': 0.7272727272727273, 'recall': 0.9411764705882353, 'f1': 0.8205128205128205, 'number': 34} | {'precision': 0.882, 'recall': 0.9036885245901639, 'f1': 0.8927125506072875, 'number': 488} | {'precision': 0.8717948717948718, 'recall': 0.918918918918919, 'f1': 0.8947368421052632, 'number': 37} | {'precision': 0.7115384615384616, 'recall': 0.9487179487179487, 'f1': 0.8131868131868132, 'number': 39} | {'precision': 0.6818181818181818, 'recall': 0.7894736842105263, 'f1': 0.7317073170731707, 'number': 38} | {'precision': 0.8624338624338624, 'recall': 0.8029556650246306, 'f1': 0.8316326530612244, 'number': 203} | {'precision': 0.9334787350054525, 'recall': 0.9749430523917996, 'f1': 0.9537604456824513, 'number': 878} | {'precision': 0.7675276752767528, 'recall': 0.8927038626609443, 'f1': 0.8253968253968255, 'number': 233} | {'precision': 0.45098039215686275, 'recall': 0.5348837209302325, 'f1': 0.48936170212765956, 'number': 43} | {'precision': 0.6582278481012658, 'recall': 0.6842105263157895, 'f1': 0.6709677419354839, 'number': 76} | 0.8633 | 0.8953 | 0.8790 | 0.9458 |
| 0.0285 | 8.0 | 2936 | 0.3072 | {'precision': 0.6172839506172839, 'recall': 0.43103448275862066, 'f1': 0.5076142131979695, 'number': 116} | {'precision': 0.9185022026431718, 'recall': 0.9675174013921114, 'f1': 0.9423728813559322, 'number': 431} | {'precision': 0.7272727272727273, 'recall': 0.9411764705882353, 'f1': 0.8205128205128205, 'number': 34} | {'precision': 0.8904665314401623, 'recall': 0.8995901639344263, 'f1': 0.8950050968399593, 'number': 488} | {'precision': 0.8717948717948718, 'recall': 0.918918918918919, 'f1': 0.8947368421052632, 'number': 37} | {'precision': 0.6491228070175439, 'recall': 0.9487179487179487, 'f1': 0.7708333333333334, 'number': 39} | {'precision': 0.6956521739130435, 'recall': 0.8421052631578947, 'f1': 0.761904761904762, 'number': 38} | {'precision': 0.8465346534653465, 'recall': 0.8423645320197044, 'f1': 0.8444444444444443, 'number': 203} | {'precision': 0.9174603174603174, 'recall': 0.9874715261958997, 'f1': 0.9511793746571585, 'number': 878} | {'precision': 0.7928286852589641, 'recall': 0.8540772532188842, 'f1': 0.8223140495867769, 'number': 233} | {'precision': 0.39215686274509803, 'recall': 0.46511627906976744, 'f1': 0.425531914893617, 'number': 43} | {'precision': 0.6091954022988506, 'recall': 0.6973684210526315, 'f1': 0.6503067484662576, 'number': 76} | 0.8549 | 0.8987 | 0.8763 | 0.9453 |
| 0.0168 | 9.0 | 3303 | 0.3166 | {'precision': 0.6447368421052632, 'recall': 0.4224137931034483, 'f1': 0.5104166666666667, 'number': 116} | {'precision': 0.9222222222222223, 'recall': 0.962877030162413, 'f1': 0.94211123723042, 'number': 431} | {'precision': 0.7111111111111111, 'recall': 0.9411764705882353, 'f1': 0.8101265822784811, 'number': 34} | {'precision': 0.896049896049896, 'recall': 0.8831967213114754, 'f1': 0.8895768833849329, 'number': 488} | {'precision': 0.8717948717948718, 'recall': 0.918918918918919, 'f1': 0.8947368421052632, 'number': 37} | {'precision': 0.6851851851851852, 'recall': 0.9487179487179487, 'f1': 0.7956989247311828, 'number': 39} | {'precision': 0.62, 'recall': 0.8157894736842105, 'f1': 0.7045454545454546, 'number': 38} | {'precision': 0.8373205741626795, 'recall': 0.8620689655172413, 'f1': 0.8495145631067961, 'number': 203} | {'precision': 0.9204665959703076, 'recall': 0.9886104783599089, 'f1': 0.9533223503569467, 'number': 878} | {'precision': 0.7876447876447876, 'recall': 0.8755364806866953, 'f1': 0.8292682926829268, 'number': 233} | {'precision': 0.4583333333333333, 'recall': 0.5116279069767442, 'f1': 0.4835164835164835, 'number': 43} | {'precision': 0.6891891891891891, 'recall': 0.6710526315789473, 'f1': 0.68, 'number': 76} | 0.8611 | 0.8979 | 0.8791 | 0.9454 |
| 0.0095 | 10.0 | 3670 | 0.3247 | {'precision': 0.6533333333333333, 'recall': 0.4224137931034483, 'f1': 0.513089005235602, 'number': 116} | {'precision': 0.9223946784922394, 'recall': 0.9651972157772621, 'f1': 0.9433106575963719, 'number': 431} | {'precision': 0.7272727272727273, 'recall': 0.9411764705882353, 'f1': 0.8205128205128205, 'number': 34} | {'precision': 0.9056603773584906, 'recall': 0.8852459016393442, 'f1': 0.8953367875647669, 'number': 488} | {'precision': 0.8717948717948718, 'recall': 0.918918918918919, 'f1': 0.8947368421052632, 'number': 37} | {'precision': 0.7115384615384616, 'recall': 0.9487179487179487, 'f1': 0.8131868131868132, 'number': 39} | {'precision': 0.6458333333333334, 'recall': 0.8157894736842105, 'f1': 0.7209302325581395, 'number': 38} | {'precision': 0.855, 'recall': 0.8423645320197044, 'f1': 0.8486352357320099, 'number': 203} | {'precision': 0.9220917822838848, 'recall': 0.9840546697038725, 'f1': 0.9520661157024795, 'number': 878} | {'precision': 0.7786259541984732, 'recall': 0.8755364806866953, 'f1': 0.8242424242424242, 'number': 233} | {'precision': 0.4375, 'recall': 0.4883720930232558, 'f1': 0.4615384615384615, 'number': 43} | {'precision': 0.6933333333333334, 'recall': 0.6842105263157895, 'f1': 0.6887417218543047, 'number': 76} | 0.8652 | 0.8956 | 0.8802 | 0.9466 |
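The overall Precision/Recall/F1 columns in the table above are micro-averages of the per-label dicts, weighted by each label's support (`number`). Since each dict reports precision, recall, and support, the true-positive and predicted counts can be recovered, making the overall scores reproducible. The sketch below illustrates this aggregation; the helper name `micro_average` is not part of the training code, just an illustration:

```python
def micro_average(label_metrics):
    """Aggregate per-label seqeval dicts into micro-averaged scores.

    Each dict is expected to carry 'precision', 'recall', and 'number'
    (the gold-entity count for that label), as in the table above.
    """
    tp = pred = support = 0.0
    for m in label_metrics:
        n = m["number"]             # gold entities of this label
        label_tp = m["recall"] * n  # recall = TP / gold, so TP = recall * gold
        tp += label_tp
        support += n
        if m["precision"] > 0:
            # precision = TP / predicted, so predicted = TP / precision
            pred += label_tp / m["precision"]
    precision = tp / pred
    recall = tp / support
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```

Applying this to the epoch-10 row reproduces its overall columns (0.8652 / 0.8956 / 0.8802).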
### Framework versions
- Transformers 4.44.0
- Pytorch 2.3.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1