RedaAlami/zephyr-7b-gemma-dpo
Libraries: PEFT · TensorBoard · Safetensors
Dataset: RedaAlami/PKU-SafeRLHF-Processed
Tags: gemma · alignment-handbook · trl · dpo · Generated from Trainer · 4-bit precision · bitsandbytes
License: other
Commit History
End of training · ea18965 · verified · RedaAlami committed on Jul 31
Model save · 7ae3c7a · verified · RedaAlami committed on Jul 31
initial commit · 3da9ff5 · verified · RedaAlami committed on Jul 31