---
library_name: transformers
license: apache-2.0
base_model: c14kevincardenas/beit-large-patch16-384-limb
tags:
- image-regression
- human-movement
- vision
- generated_from_trainer
model-index:
- name: limbxy_pose_2heads_1layers_8embeddim
  results: []
---
# limbxy_pose_2heads_1layers_8embeddim

This model is a fine-tuned version of [c14kevincardenas/beit-large-patch16-384-limb](https://huggingface.co/c14kevincardenas/beit-large-patch16-384-limb) on the [c14kevincardenas/beta_caller_284_limbxy_pose](https://huggingface.co/datasets/c14kevincardenas/beta_caller_284_limbxy_pose) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0077
- RMSE: 0.0876
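
The reported validation loss is consistent with mean-squared error: √0.0077 ≈ 0.0877, which matches the RMSE above. Below is a minimal sketch of a Trainer-style metric function that would produce this value; the function name and array shapes are assumptions, not taken from the actual training code.

```python
import numpy as np

def compute_metrics(eval_pred):
    # RMSE over predicted (x, y) coordinates, assuming an MSE training loss.
    predictions, labels = eval_pred
    mse = np.mean((np.asarray(predictions) - np.asarray(labels)) ** 2)
    return {"rmse": float(np.sqrt(mse))}
```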
## Model description

A BEiT-large vision transformer ([c14kevincardenas/beit-large-patch16-384-limb](https://huggingface.co/c14kevincardenas/beit-large-patch16-384-limb), patch size 16, 384×384 input) fine-tuned for image regression on human-movement data. Judging by the model name, the regression head that maps image features to limb (x, y) coordinates uses 2 attention heads, 1 layer, and an embedding dimension of 8.
## Intended uses & limitations

The model is intended for estimating limb (x, y) coordinates from single images for human-movement analysis. It has only been evaluated on the dataset below, so accuracy on other subjects, camera setups, or image domains is untested.
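
As a rough guide, the sketch below shows how inference might look. It assumes the checkpoint loads through the standard `transformers` Auto classes under the repo id `c14kevincardenas/limbxy_pose_2heads_1layers_8embeddim` (inferred from the model name); the custom regression head may require the repository's own modeling code via `trust_remote_code`, and the image path is a placeholder.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModel

repo_id = "c14kevincardenas/limbxy_pose_2heads_1layers_8embeddim"  # assumed repo id
processor = AutoImageProcessor.from_pretrained(repo_id)
model = AutoModel.from_pretrained(repo_id, trust_remote_code=True)

image = Image.open("movement_frame.jpg").convert("RGB")  # hypothetical input image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)  # expected to contain predicted limb (x, y) coordinates
```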
## Training and evaluation data

The model was trained and evaluated on the [c14kevincardenas/beta_caller_284_limbxy_pose](https://huggingface.co/datasets/c14kevincardenas/beta_caller_284_limbxy_pose) dataset. At 89 optimizer steps per epoch with a train batch size of 64, the training split holds roughly 5,700 examples.
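
The dataset should be loadable directly from the Hub with `datasets`; the `"train"` split name below is an assumption.

```python
from datasets import load_dataset

dataset = load_dataset("c14kevincardenas/beta_caller_284_limbxy_pose")
print(dataset)               # inspect the available splits and columns
print(dataset["train"][0])   # assumed split name; one image/coordinate example
```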
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 2014
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 20.0
- mixed_precision_training: Native AMP
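
For reference, here is a `TrainingArguments` sketch that mirrors the hyperparameters above; argument names follow transformers 4.45, `output_dir` is a placeholder, and the listed Adam betas and epsilon are already the optimizer defaults.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="limbxy_pose_2heads_1layers_8embeddim",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=2014,
    lr_scheduler_type="linear",
    warmup_steps=250,
    num_train_epochs=20.0,
    fp16=True,                # Native AMP mixed precision
    eval_strategy="epoch",    # matches the per-epoch validation results below
)
```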
### Training results

| Training Loss | Epoch | Step | Validation Loss | RMSE   |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.174         | 1.0   | 89   | 0.1471          | 0.3835 |
| 0.1559        | 2.0   | 178  | 0.1466          | 0.3829 |
| 0.1485        | 3.0   | 267  | 0.1661          | 0.4075 |
| 0.1624        | 4.0   | 356  | 0.1418          | 0.3765 |
| 0.1457        | 5.0   | 445  | 0.1437          | 0.3790 |
| 0.1635        | 6.0   | 534  | 0.1424          | 0.3773 |
| 0.1428        | 7.0   | 623  | 0.1584          | 0.3980 |
| 0.1481        | 8.0   | 712  | 0.1408          | 0.3753 |
| 0.1494        | 9.0   | 801  | 0.1478          | 0.3845 |
| 0.1417        | 10.0  | 890  | 0.1545          | 0.3930 |
| 0.1421        | 11.0  | 979  | 0.1432          | 0.3785 |
| 0.145         | 12.0  | 1068 | 0.1403          | 0.3745 |
| 0.1466        | 13.0  | 1157 | 0.1443          | 0.3799 |
| 0.0601        | 14.0  | 1246 | 0.0208          | 0.1443 |
| 0.0154        | 15.0  | 1335 | 0.0124          | 0.1115 |
| 0.0102        | 16.0  | 1424 | 0.0128          | 0.1133 |
| 0.0071        | 17.0  | 1513 | 0.0129          | 0.1137 |
| 0.0076        | 18.0  | 1602 | 0.0085          | 0.0920 |
| 0.0057        | 19.0  | 1691 | 0.0079          | 0.0888 |
| 0.0046        | 20.0  | 1780 | 0.0077          | 0.0876 |
### Framework versions

- Transformers 4.45.2
- PyTorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|