# Mistral-Nemo-Gutenberg-Doppel-12B
A finetune of mistralai/Mistral-Nemo-Instruct-2407 on the jondurbin/gutenberg-dpo-v0.1 and nbeerbower/gutenberg2-dpo datasets.
## Method
ORPO-tuned for 3 epochs on a single RTX 3090.
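ORPO (odds-ratio preference optimization) augments the standard language-modeling loss with an odds-ratio term that pushes the model to prefer the chosen completion over the rejected one, without requiring a separate reference model. A minimal sketch of that extra term, using illustrative sequence probabilities rather than values from this training run:

```python
import math

def odds(p):
    # Odds of a completion with (geometric-mean) probability p.
    return p / (1 - p)

def orpo_or_loss(p_chosen, p_rejected):
    # The odds-ratio penalty: -log sigmoid(log(odds(chosen) / odds(rejected))).
    # Small when the chosen completion is already more likely than the rejected one.
    log_or = math.log(odds(p_chosen) / odds(p_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-log_or)))

# Chosen completion more likely than rejected -> small penalty.
print(orpo_or_loss(0.6, 0.3))
# Rejected completion more likely -> large penalty.
print(orpo_or_loss(0.3, 0.6))
```

In the full ORPO objective this term is added, scaled by a weight λ, to the usual negative log-likelihood on the chosen completion.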