# falcon-clf
This model is a fine-tuned version of [Rocketknight1/falcon-rw-1b](https://huggingface.co/Rocketknight1/falcon-rw-1b) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4504
## Model description
More information needed
## Intended uses & limitations
More information needed
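
Since usage is not documented, here is a minimal, hedged loading sketch. The choice of `AutoPeftModelForSequenceClassification` (suggested only by the "clf" suffix in the model name), the `torch.float16` dtype, and the example input are assumptions, not details from the original card; swap in `AutoPeftModelForCausalLM` if the adapter was in fact trained for generation.

```python
# Hedged sketch: load the PEFT adapter on top of the falcon-rw-1b base model.
# The sequence-classification head is an ASSUMPTION inferred from the "clf"
# suffix; the task type is not documented in this card.
import torch
from peft import AutoPeftModelForSequenceClassification
from transformers import AutoTokenizer

model = AutoPeftModelForSequenceClassification.from_pretrained(
    "suneeln-duke/falcon-clf",
    torch_dtype=torch.float16,
)
# The adapter repo may not ship a tokenizer; fall back to the base model's.
tokenizer = AutoTokenizer.from_pretrained("Rocketknight1/falcon-rw-1b")

inputs = tokenizer("Example text to classify", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1))
```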
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 20
- mixed_precision_training: Native AMP
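
As a rough guide, the hyperparameters above map onto `transformers.TrainingArguments` as sketched below. The `output_dir` and any surrounding `Trainer`/dataset wiring are illustrative assumptions, not taken from the original training script; the Adam betas and epsilon listed above are the Transformers defaults, so they need no explicit arguments.

```python
# Hedged sketch: mapping the reported hyperparameters onto TrainingArguments.
# output_dir is an assumption for illustration only.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="falcon-clf",          # assumed
    learning_rate=2e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=8,    # effective train batch size: 1 * 8 = 8
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    num_train_epochs=20,
    fp16=True,                        # "Native AMP" mixed precision
)
```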
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.7984        | 0.99  | 77   | 0.7167          |
| 0.7175        | 1.99  | 155  | 0.6955          |
| 0.6092        | 2.99  | 233  | 0.6690          |
| 0.5426        | 3.99  | 311  | 0.6549          |
| 0.7676        | 5.0   | 389  | 0.6416          |
| 0.6552        | 6.0   | 467  | 0.6216          |
| 0.5989        | 7.0   | 545  | 0.6039          |
| 0.4944        | 8.0   | 623  | 0.5810          |
| 0.4591        | 8.99  | 700  | 0.5615          |
| 0.5415        | 9.99  | 778  | 0.5429          |
| 0.4794        | 10.99 | 856  | 0.5187          |
| 0.4347        | 11.99 | 934  | 0.4982          |
| 0.3487        | 13.0  | 1012 | 0.4845          |
| 0.3229        | 14.0  | 1090 | 0.4724          |
| 0.3946        | 15.0  | 1168 | 0.4624          |
| 0.3689        | 16.0  | 1246 | 0.4574          |
| 0.3191        | 16.99 | 1323 | 0.4529          |
| 0.2793        | 17.99 | 1401 | 0.4509          |
| 0.3675        | 18.99 | 1479 | 0.4504          |
| 0.3215        | 19.78 | 1540 | 0.4504          |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.1
- Pytorch 2.0.0+cu117
- Datasets 2.18.0
- Tokenizers 0.15.2