shenxq/zephyr-7b-dpo-qlora
Tags: PEFT, TensorBoard, Safetensors, snorkelai/Snorkel-Mistral-PairRM-DPO-Dataset, mistral, alignment-handbook, Generated from Trainer, trl, dpo, 4-bit precision, bitsandbytes
License: apache-2.0
Commit History
Training in progress, step 100 · 6bf8544 · verified · shenxq committed on Mar 16
Training in progress, step 100 · a532658 · verified · shenxq committed on Mar 15
initial commit · 0f9f3ed · verified · shenxq committed on Mar 15