geonmin-kim/zephyr-7b-dpo-qlora
Tags: PEFT, TensorBoard, Safetensors, trl, dpo, Generated from Trainer
License: apache-2.0
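The PEFT and trl/dpo tags indicate this repo holds a QLoRA adapter trained with DPO rather than full model weights. Below is a minimal sketch of how such an adapter is typically loaded, assuming adapter_config.json points at a resolvable base model; the 4-bit quantization settings and prompt are illustrative assumptions, not values taken from this repo.

import torch
from transformers import AutoTokenizer, BitsAndBytesConfig
from peft import AutoPeftModelForCausalLM

repo_id = "geonmin-kim/zephyr-7b-dpo-qlora"

# Typical QLoRA-style 4-bit loading settings (assumption, not from the repo).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# AutoPeftModelForCausalLM reads adapter_config.json to locate the base model
# and attaches the LoRA adapter weights on top of it.
model = AutoPeftModelForCausalLM.from_pretrained(
    repo_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# If the adapter repo does not ship tokenizer files, load the tokenizer from
# the base model named in adapter_config.json instead.
tokenizer = AutoTokenizer.from_pretrained(repo_id)

prompt = "Explain what DPO fine-tuning does in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))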
zephyr-7b-dpo-qlora / adapter_config.json (revision a51ba51)
Commit History
Training in progress, step 200 (a51ba51, verified)
geonmin-kim committed on Apr 8
Training in progress, step 100 (8d8fc45, verified)
geonmin-kim committed on Apr 8