---
license: mit
base_model: BramVanroy/fietje-2b-instruct
tags:
- trl
- fietje
- alignment-handbook
- dpo
datasets:
- BramVanroy/ultra_feedback_dutch_cleaned
- BramVanroy/orca_dpo_pairs_dutch_cleaned
model-index:
- name: fietje-2b-chat
results: []
pipeline_tag: text-generation
inference: false
language:
- nl
---
<p align="center" style="margin:0;padding:0">
<img src="https://huggingface.co/BramVanroy/fietje-2b/resolve/main/img/fietje-2b-banner.png" alt="Fietje banner" width="800" style="display: block; margin-left: auto; margin-right: auto"/>
</p>
<div style="margin:auto; text-align:center">
<h1 style="margin-bottom: 0">Fietje 2B Chat</h1>
<em>An open and efficient LLM for Dutch</em>
</div>
<blockquote class="tip">
<p align="center">
<a rel="nofollow" href="https://huggingface.co/BramVanroy/fietje-2b">👱‍♀️ Base version</a> -
<a rel="nofollow" href="https://huggingface.co/BramVanroy/fietje-2b-instruct">🤖 Instruct version</a> -
<a rel="nofollow" href="https://huggingface.co/BramVanroy/fietje-2b-chat">💬 Chat version</a> (this one) -
<a rel="nofollow" href="https://huggingface.co/BramVanroy/fietje-2b-chat-GGUF">🚀 GGUF of chat model</a>
</p>
</blockquote>
This model is a DPO-aligned version of [BramVanroy/fietje-2b-sft](https://huggingface.co/BramVanroy/fietje-2b-sft), preference-tuned on the [BramVanroy/ultra_feedback_dutch_cleaned](https://huggingface.co/datasets/BramVanroy/ultra_feedback_dutch_cleaned) and [BramVanroy/orca_dpo_pairs_dutch_cleaned](https://huggingface.co/datasets/BramVanroy/orca_dpo_pairs_dutch_cleaned) datasets.
It achieves the following results on the evaluation set:
- Loss: 0.2842
- Rewards/chosen: -1.1549
- Rewards/rejected: -3.6363
- Rewards/accuracies: 0.8867
- Rewards/margins: 2.4815
- Logps/rejected: -657.6813
- Logps/chosen: -451.3364
- Logits/rejected: -1.2868
- Logits/chosen: -1.3528
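For context, these reward metrics come from the standard DPO objective (the general formulation, not anything specific to this run): the "reward" of a completion is the β-scaled log-probability ratio between the tuned policy and the reference model, and the loss pushes for a positive margin between chosen and rejected rewards:

$$
\mathcal{L}_{\mathrm{DPO}} = -\mathbb{E}_{(x, y_w, y_l)}\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}\right)\right]
$$

`Rewards/margins` is the mean difference between the chosen and rejected rewards, and `Rewards/accuracies` is the fraction of evaluation pairs for which the chosen reward exceeds the rejected one.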
## Model description
Fietje 2B Chat is the chat-oriented variant of the Fietje 2B family, an open and efficient language model for Dutch. It was created by preference-tuning the SFT model with DPO on Dutch feedback data, making it better suited for conversational use than the base or instruct versions.
## Intended uses & limitations
The model is intended for Dutch conversational text generation (chat). As with other small language models, it can produce incorrect or incomplete answers, so its output should be verified before being relied upon.
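Below is a minimal usage sketch. The generation settings (dtype, device placement, sampling parameters) are illustrative assumptions and are not specified in this card:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "BramVanroy/fietje-2b-chat"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.bfloat16, device_map="auto"  # assumption: GPU with bf16
)

# Build a single-turn Dutch conversation ("What is the capital of the Netherlands?")
# and apply the model's chat template.
messages = [{"role": "user", "content": "Wat is de hoofdstad van Nederland?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate a reply; sampling settings here are illustrative, not from the card.
output = model.generate(input_ids, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```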
## Training and evaluation data
The model was preference-tuned on two Dutch DPO datasets, [BramVanroy/ultra_feedback_dutch_cleaned](https://huggingface.co/datasets/BramVanroy/ultra_feedback_dutch_cleaned) and [BramVanroy/orca_dpo_pairs_dutch_cleaned](https://huggingface.co/datasets/BramVanroy/orca_dpo_pairs_dutch_cleaned), which pair prompts with a preferred ("chosen") and a dispreferred ("rejected") response.
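For illustration, a single preference pair conceptually looks like the record below. The field names follow the common trl DPO convention and the contents are made up; the actual column names and data in the datasets may differ:

```python
# Hypothetical example record, not taken from the datasets.
example = {
    "prompt": "Wat is de hoofdstad van Nederland?",        # user prompt
    "chosen": "De hoofdstad van Nederland is Amsterdam.",   # preferred response
    "rejected": "De hoofdstad van Nederland is Rotterdam.", # dispreferred response
}
```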
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-07
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1.0
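As a hedged reconstruction, the hyperparameters above map onto a trl `DPOConfig` roughly as follows. The actual training script is not included in this card, and options such as `output_dir`, `bf16`, and the DPO `beta` are assumptions:

```python
from trl import DPOConfig

# Sketch of the hyperparameters above expressed as a trl DPOConfig
# (which extends transformers.TrainingArguments).
config = DPOConfig(
    output_dir="fietje-2b-chat",        # assumption
    learning_rate=2e-6,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=2,      # 8 x 2 accumulation = total batch size 16
    num_train_epochs=1.0,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    adam_beta1=0.9,
    adam_beta2=0.98,
    adam_epsilon=1e-7,
    seed=42,
    bf16=True,                          # assumption: mixed precision, not stated above
)
```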
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.2515 | 1.0 | 1166 | 0.2842 | -1.1549 | -3.6363 | 0.8867 | 2.4815 | -657.6813 | -451.3364 | -1.2868 | -1.3528 |
### Framework versions
- Transformers 4.39.1
- Pytorch 2.1.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 |