|
---
license: apache-2.0
base_model: HuggingFaceM4/idefics2-8b
tags:
- generated_from_trainer
model-index:
- name: gm-lora-bfloat16-idefics2-8b-xrayvqa-finetuned-medir2
  results: []
---
|
|
|
|
|
|
# gm-lora-bfloat16-idefics2-8b-xrayvqa-finetuned-medir2 |
|
|
|
This model is a fine-tuned version of [HuggingFaceM4/idefics2-8b](https://huggingface.co/HuggingFaceM4/idefics2-8b) on an unknown dataset. |
|
It achieves the following results on the evaluation set: |
|
- Loss: 1.6349 |
|
|
|
## Model description |
|
|
|
Details have not been provided. Judging from the model name, this repository appears to contain a LoRA adapter for Idefics2-8B, trained in bfloat16 for X-ray visual question answering (VQA).
|
|
|
## Intended uses & limitations |
|
|
|
Not documented. Based on the model name, the intended use appears to be visual question answering over X-ray images; the limitations of the base Idefics2-8B model and of the undocumented fine-tuning data carry over to this adapter.
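
No usage example is included in the card. Assuming the repository holds a PEFT/LoRA adapter on top of the base checkpoint (as the name suggests), inference would look roughly like the sketch below; the adapter repository path, image path, and question are placeholders.

```python
import torch
from PIL import Image
from peft import PeftModel
from transformers import AutoProcessor, Idefics2ForConditionalGeneration

base_id = "HuggingFaceM4/idefics2-8b"
adapter_id = "gm-lora-bfloat16-idefics2-8b-xrayvqa-finetuned-medir2"  # placeholder repo path

processor = AutoProcessor.from_pretrained(base_id)
model = Idefics2ForConditionalGeneration.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA adapter

image = Image.open("example_xray.png")  # placeholder image
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "What abnormality is visible in this chest X-ray?"},
        ],
    }
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt").to(model.device)

generated_ids = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```

If the adapter weights have instead been merged into the base model, loading this repository directly with `Idefics2ForConditionalGeneration.from_pretrained` (and skipping the `PeftModel` step) should work.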
|
|
|
## Training and evaluation data |
|
|
|
More information needed |
|
|
|
## Training procedure |
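
The training script is not part of this repository. A minimal, hypothetical sketch of how a LoRA adapter is typically attached to Idefics2-8B in bfloat16 (the setup implied by the model name) is shown below; the rank, alpha, dropout, and target modules are assumptions, not documented values.

```python
import torch
from peft import LoraConfig, get_peft_model
from transformers import Idefics2ForConditionalGeneration

model = Idefics2ForConditionalGeneration.from_pretrained(
    "HuggingFaceM4/idefics2-8b",
    torch_dtype=torch.bfloat16,
)

# Hypothetical LoRA configuration; none of these values are recorded in this card.
lora_config = LoraConfig(
    r=8,
    lora_alpha=8,
    lora_dropout=0.1,
    # Regex commonly used to target the attention and MLP projections of Idefics2.
    target_modules=r".*(text_model|modality_projection|perceiver_resampler).*(down_proj|gate_proj|up_proj|k_proj|q_proj|v_proj|o_proj).*$",
    init_lora_weights="gaussian",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```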
|
|
|
### Training hyperparameters |
|
|
|
The following hyperparameters were used during training (a hypothetical `TrainingArguments` reconstruction is sketched after the list):
|
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 10
- total_train_batch_size: 80
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3 |
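
As a rough guide, the settings above would correspond to `TrainingArguments` along the following lines. This is a hypothetical reconstruction: `output_dir`, `bf16`, and the 50-step evaluation/logging cadence are inferred from the model name and the results table, not from a published script.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="gm-lora-bfloat16-idefics2-8b-xrayvqa-finetuned-medir2",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=10,  # effective train batch size: 8 * 10 = 80
    num_train_epochs=3,
    lr_scheduler_type="linear",
    seed=42,
    bf16=True,                       # assumed from "bfloat16" in the model name
    evaluation_strategy="steps",     # assumed; the results table reports eval every 50 steps
    eval_steps=50,
    logging_steps=50,
)
```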
|
|
|
### Training results |
|
|
|
| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.1259        | 0.0764 | 50   | 1.4468          |
| 1.2502        | 0.1529 | 100  | 1.4544          |
| 1.2599        | 0.2293 | 150  | 1.4605          |
| 1.1477        | 0.3058 | 200  | 1.4844          |
| 1.1041        | 0.3822 | 250  | 1.4835          |
| 1.0958        | 0.4586 | 300  | 1.4724          |
| 1.0975        | 0.5351 | 350  | 1.4800          |
| 1.133         | 0.6115 | 400  | 1.4656          |
| 1.1785        | 0.6880 | 450  | 1.4458          |
| 1.3751        | 0.7644 | 500  | 1.4227          |
| 1.3751        | 0.8409 | 550  | 1.4187          |
| 1.3983        | 0.9173 | 600  | 1.4158          |
| 1.4147        | 0.9937 | 650  | 1.4073          |
| 0.9615        | 1.0702 | 700  | 1.4901          |
| 0.9026        | 1.1466 | 750  | 1.5204          |
| 0.8919        | 1.2231 | 800  | 1.4997          |
| 0.917         | 1.2995 | 850  | 1.4994          |
| 0.9149        | 1.3759 | 900  | 1.4998          |
| 0.9342        | 1.4524 | 950  | 1.4971          |
| 0.9363        | 1.5288 | 1000 | 1.5039          |
| 0.9087        | 1.6053 | 1050 | 1.4907          |
| 0.9272        | 1.6817 | 1100 | 1.4920          |
| 0.9195        | 1.7581 | 1150 | 1.4955          |
| 0.9488        | 1.8346 | 1200 | 1.4900          |
| 0.9209        | 1.9110 | 1250 | 1.4887          |
| 0.9463        | 1.9875 | 1300 | 1.4891          |
| 0.7123        | 2.0639 | 1350 | 1.6077          |
| 0.646         | 2.1403 | 1400 | 1.6182          |
| 0.6405        | 2.2168 | 1450 | 1.6390          |
| 0.6481        | 2.2932 | 1500 | 1.6198          |
| 0.6372        | 2.3697 | 1550 | 1.6340          |
| 0.6618        | 2.4461 | 1600 | 1.6311          |
| 0.6499        | 2.5226 | 1650 | 1.6277          |
| 0.6471        | 2.5990 | 1700 | 1.6344          |
| 0.6554        | 2.6754 | 1750 | 1.6303          |
| 0.6475        | 2.7519 | 1800 | 1.6333          |
| 0.641         | 2.8283 | 1850 | 1.6315          |
| 0.6274        | 2.9048 | 1900 | 1.6343          |
| 0.6309        | 2.9812 | 1950 | 1.6349          |
|
|
|
|
|
### Framework versions |
|
|
|
- Transformers 4.41.0.dev0
- PyTorch 2.0.1
- Datasets 2.19.1
- Tokenizers 0.19.1
|
|