
Finetuning Overview:

Model Used: meta-llama/Meta-Llama-3.1-8B-Instruct
Dataset: Intel/orca_dpo_pairs

Dataset Insights:

Intel/orca_dpo_pairs is a preference-pair subset of the OpenOrca dataset, which contains ~1M GPT-4 completions and ~3.2M GPT-3.5 completions aligned with the distributions described in the Orca paper. Each record pairs a prompt with a preferred ("chosen") and a dispreferred ("rejected") response, which makes it directly usable for preference-optimization methods such as DPO and ORPO.
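
As a quick illustration, the pairs can be inspected with the datasets library. This is only a sketch; the column names (system, question, chosen, rejected) are taken from the dataset card and should be verified before use.

```python
from datasets import load_dataset

# Load the preference pairs used for this finetune.
ds = load_dataset("Intel/orca_dpo_pairs", split="train")

# Each record holds a prompt plus a preferred and a rejected answer
# (assumed columns: system, question, chosen, rejected).
example = ds[0]
print(example["question"][:200])   # user prompt
print(example["chosen"][:200])     # preferred response
print(example["rejected"][:200])   # dispreferred response
```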

Finetuning Details:

This finetuning run was performed with MonsterAPI's LLM finetuner using ORPO (Odds Ratio Preference Optimization) for preference alignment.

  • Duration: 1 hour and 39 minutes for 1 epoch.
  • Cost: $2.69 for the entire run.

Hyperparameters & Additional Details:

  • Epochs: 1
  • Cost Per Epoch: $2.69
  • Total Finetuning Cost: $2.69
  • Model Path: meta-llama/Meta-Llama-3.1-8B-Instruct
  • Learning Rate: 0.001
  • Data Split: 90% train / 10% validation
  • Gradient Accumulation Steps: 16
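
The run itself was executed on MonsterAPI's hosted finetuner, so the exact training code is not public. The snippet below is only a rough equivalent sketch using Hugging Face TRL's ORPOTrainer with the hyperparameters listed above; the batch size, prompt handling, and the absence of a LoRA/PEFT config are assumptions, not details of this run.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

model_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# ORPOTrainer expects "prompt", "chosen", and "rejected" columns;
# the system prompts are dropped here for simplicity.
ds = load_dataset("Intel/orca_dpo_pairs", split="train")
ds = ds.rename_column("question", "prompt").remove_columns(["system"])
ds = ds.train_test_split(test_size=0.1, seed=42)  # 90% train / 10% validation, as listed above

config = ORPOConfig(
    output_dir="llama31-8b-orca-orpo",
    num_train_epochs=1,               # from the run above
    learning_rate=1e-3,               # from the run above
    gradient_accumulation_steps=16,   # from the run above
    per_device_train_batch_size=1,    # assumption: not reported on this card
    logging_steps=10,
)

trainer = ORPOTrainer(
    model=model,
    args=config,
    train_dataset=ds["train"],
    eval_dataset=ds["test"],
    processing_class=tokenizer,       # older TRL releases take tokenizer= instead
)
trainer.train()
```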
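Loading the Finetuned Model:

A minimal loading sketch, assuming the published repository is a PEFT (LoRA) adapter on top of meta-llama/Meta-Llama-3.1-8B-Instruct; if the weights are fully merged instead, load them with AutoModelForCausalLM.

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "monsterapi/Llama-3_1-8B-Instruct-orca-ORPO"

# Loads the base model referenced in the adapter config and applies the adapter weights.
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3.1-8B-Instruct")

messages = [{"role": "user", "content": "Explain preference optimization in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```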