This model was trained with Iterative DPO in [OpenRLHF](https://github.com/OpenRLHF/OpenRLHF).
## Datasets and Hyperparameters
```
Reward Model: https://huggingface.co/OpenLLMAI/Llama-3-8b-rm-700k
SFT Model: https://huggingface.co/OpenLLMAI/Llama-3-8b-sft-mixture
Prompt Dataset: https://huggingface.co/datasets/OpenLLMAI/prompt-collection-v0.1
Max Prompt Length: 2048
Max Response Length: 2048
best_of_n: 2 (2 samples for each prompt)
Learning Rate: 5e-7
Beta: 0.1
Scheduler: Cosine with Warmup (0.03) and MinLR (0.1)
Rollout Batch Size: 20000
Training Batch Size: 256
Number of Iterations: 9
```
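
For intuition, here is a minimal sketch of one Iterative DPO round under these settings. This is not OpenRLHF's actual API: `sample_fn`, `reward_fn`, and the log-probability inputs are hypothetical placeholders. With `best_of_n: 2`, each prompt yields two samples, and the reward model decides which becomes the chosen response and which the rejected one.

```python
import torch.nn.functional as F

def build_preference_pairs(prompts, sample_fn, reward_fn, n=2):
    """Best-of-n pair construction: sample n responses per prompt with the
    current policy, score them with the reward model, and treat the
    highest-scored response as chosen and the lowest as rejected.

    sample_fn(prompt, n) -> list[str]; reward_fn(prompt, response) -> float.
    Both are hypothetical placeholders, not OpenRLHF functions.
    """
    pairs = []
    for prompt in prompts:
        responses = sample_fn(prompt, n)
        ranked = sorted(responses, key=lambda r: reward_fn(prompt, r), reverse=True)
        pairs.append((prompt, ranked[0], ranked[-1]))  # (prompt, chosen, rejected)
    return pairs

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Standard DPO objective: -log sigmoid(beta * margin of log-ratios).
    beta=0.1 matches the setting above; inputs are per-sequence sums of
    token log-probabilities under the policy and the frozen reference model.
    """
    chosen_logratio = policy_chosen_logps - ref_chosen_logps
    rejected_logratio = policy_rejected_logps - ref_rejected_logps
    return -F.logsigmoid(beta * (chosen_logratio - rejected_logratio)).mean()
```

In the iterative scheme, this pair construction and DPO update would repeat for the 9 iterations listed above, each round drawing a fresh rollout batch of 20,000 prompts and sampling from the latest policy, so the preference data tracks the improving model.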