Available GGUF quantizations:
- q4_k_m
- 16-bit
This model is based on Meta Llama 3.1 8B and is governed by the Llama 3.1 Community License.
Fine-tuned using ORPO (Odds Ratio Preference Optimization).
Training Details
Training Data
- dataset: reciperesearch/dolphin-sft-v0.1-preference
Training Procedure
Fine-tuned with ORPO, a single-stage technique that combines the supervised fine-tuning loss with an odds-ratio preference term, so no separate reward model or reference model is needed.
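ORPO's preference term rewards the policy for assigning higher odds to the chosen response than to the rejected one. A minimal sketch of the pairwise objective is below; the sequence-level probabilities and the weight `lam` are illustrative values, not taken from this training run.

```python
import math

def odds(p):
    # Odds of a probability p in (0, 1): p / (1 - p).
    return p / (1.0 - p)

def orpo_loss(p_chosen, p_rejected, lam=0.1, nll_chosen=0.0):
    """Sketch of the ORPO objective for one preference pair.

    p_chosen / p_rejected: sequence-level probabilities the policy assigns
    to the preferred and dispreferred responses (illustrative scalars).
    lam: weight on the odds-ratio term (hypothetical value).
    nll_chosen: standard SFT negative log-likelihood of the chosen response.
    """
    log_odds_ratio = math.log(odds(p_chosen)) - math.log(odds(p_rejected))
    # -log sigmoid(log odds ratio): shrinks as the chosen response
    # becomes much more likely than the rejected one.
    l_or = -math.log(1.0 / (1.0 + math.exp(-log_odds_ratio)))
    return nll_chosen + lam * l_or
```

The loss falls as the gap between the chosen and rejected odds widens, which is what lets ORPO align preferences during ordinary supervised fine-tuning.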
Training Hyperparameters
- Training regime: [More Information Needed]
Final training output (30 steps, 0.015 epochs):
- train_loss: 4.25380277633667
- train_runtime: 679.3467 s
- train_samples_per_second: 0.353
- train_steps_per_second: 0.044
- total_flos: 0.0
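The reported throughput numbers are internally consistent, as a quick arithmetic check shows; note that the per-step batch size derived below is inferred from the metrics, not stated in the card.

```python
# Numbers copied from the training output above.
global_step = 30
train_runtime_s = 679.3467
samples_per_second = 0.353

steps_per_second = global_step / train_runtime_s       # matches the logged 0.044
total_samples = samples_per_second * train_runtime_s   # ~240 preference pairs seen
effective_batch = total_samples / global_step          # inferred samples per optimizer step

print(round(steps_per_second, 3), round(total_samples), round(effective_batch))
```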
Model tree for EpistemeAI/Dolphin-Llama-3.1-8B-orpo-v0.1-4bit-gguf
- Base model: meta-llama/Llama-3.1-8B
- Quantized: unsloth/Meta-Llama-3.1-8B-bnb-4bit