---
license: apache-2.0
base_model: tsavage68/Hyponatremia_M2_150steps_1e6rate_SFT
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: Hyponatremia_M2_400steps_1e8rate_03beta_DPO
  results: []
---

# Hyponatremia_M2_400steps_1e8rate_03beta_DPO

This model is a fine-tuned version of [tsavage68/Hyponatremia_M2_150steps_1e6rate_SFT](https://huggingface.co/tsavage68/Hyponatremia_M2_150steps_1e6rate_SFT) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6276
- Rewards/chosen: 0.0345
- Rewards/rejected: -0.1027
- Rewards/accuracies: 0.9600
- Rewards/margins: 0.1372
- Logps/rejected: -71.2238
- Logps/chosen: -36.6121
- Logits/rejected: -2.2455
- Logits/chosen: -2.2404

## Model description

As the model name and the `trl`/`dpo` tags indicate, this is a Direct Preference Optimization (DPO) fine-tune of the Hyponatremia_M2 SFT checkpoint, trained for 400 steps at a 1e-08 learning rate with beta = 0.3. Further details have not been documented.

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training; an illustrative sketch of how they map onto TRL's `DPOConfig` appears after the framework versions below:
- learning_rate: 1e-08
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 400

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.691         | 0.2667 | 50   | 0.6959          | 0.0081         | 0.0125           | 0.4400             | -0.0044         | -70.8398       | -36.7002     | -2.2478         | -2.2427       |
| 0.684         | 0.5333 | 100  | 0.6850          | 0.0066         | -0.0109          | 0.6200             | 0.0175          | -70.9177       | -36.7050     | -2.2462         | -2.2410       |
| 0.6496        | 0.8    | 150  | 0.6433          | 0.0267         | -0.0772          | 0.9000             | 0.1039          | -71.1387       | -36.6380     | -2.2462         | -2.2412       |
| 0.6234        | 1.0667 | 200  | 0.6319          | 0.0320         | -0.0962          | 0.9700             | 0.1281          | -71.2020       | -36.6205     | -2.2465         | -2.2415       |
| 0.6262        | 1.3333 | 250  | 0.6336          | 0.0321         | -0.0927          | 0.9500             | 0.1248          | -71.1904       | -36.6199     | -2.2462         | -2.2411       |
| 0.6215        | 1.6    | 300  | 0.6288          | 0.0386         | -0.0965          | 0.9600             | 0.1352          | -71.2033       | -36.5983     | -2.2455         | -2.2404       |
| 0.6129        | 1.8667 | 350  | 0.6276          | 0.0345         | -0.1027          | 0.9600             | 0.1372          | -71.2238       | -36.6121     | -2.2455         | -2.2404       |
| 0.6301        | 2.1333 | 400  | 0.6276          | 0.0345         | -0.1027          | 0.9600             | 0.1372          | -71.2238       | -36.6121     | -2.2455         | -2.2404       |

### Framework versions

- Transformers 4.42.3
- Pytorch 2.0.0+cu117
- Datasets 2.20.0
- Tokenizers 0.19.1
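
The `trl` and `dpo` tags suggest the model was trained with TRL's `DPOTrainer`. Below is a minimal sketch of how the hyperparameters above would map onto TRL's `DPOConfig`; the dataset path, variable names, and exact TRL version are assumptions, not the author's actual script.

```python
# Hypothetical reconstruction of the training setup from the hyperparameters
# listed above, assuming TRL's DPOTrainer. The preference dataset is not
# documented, so a placeholder path is used.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "tsavage68/Hyponatremia_M2_150steps_1e6rate_SFT"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

config = DPOConfig(
    output_dir="Hyponatremia_M2_400steps_1e8rate_03beta_DPO",
    beta=0.3,                       # the "03beta" in the model name
    learning_rate=1e-8,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=2,  # 2 x 2 = total train batch size of 4
    lr_scheduler_type="cosine",
    warmup_steps=100,
    max_steps=400,
    seed=42,
)

# Placeholder: the actual prompt/chosen/rejected dataset is not documented.
train_dataset = load_dataset("path/to/preference_dataset", split="train")

trainer = DPOTrainer(
    model=model,          # ref_model defaults to a frozen copy of the model
    args=config,
    train_dataset=train_dataset,
    tokenizer=tokenizer,  # renamed to processing_class in newer TRL releases
)
trainer.train()
```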
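
For completeness, a minimal inference sketch using the standard `transformers` API. The repository id (under the same namespace as the base model) and the prompt format are assumptions, since the expected input template is not documented here.

```python
# Minimal inference sketch; the repo id and prompt format are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "tsavage68/Hyponatremia_M2_400steps_1e8rate_03beta_DPO"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.float16, device_map="auto"
)

prompt = "A patient presents with a serum sodium of 125 mEq/L. What initial workup is indicated?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
# Strip the prompt tokens and print only the generated continuation.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```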