bartowski/Senzu-7B-v0.1-DPO-exl2

Text Generation
Datasets: practical-dreamer/RPGPT_PublicDomain-alpaca, shuyuej/metamath_gsm8k, NeuralNovel/Neural-DPO
Generated from Trainer
License: apache-2.0
Commit History: Senzu-7B-v0.1-DPO-exl2 / measurement.json

22522ff (verified) · bartowski committed on Feb 27
initial commit · 4f5aeb3 (verified) · bartowski committed on Feb 27