Model Details
A series of reasoning Llama models fine-tuned on microsoft/orca-math-word-problems-200k with GRPO (Group Relative Policy Optimization), a reinforcement-learning technique.
Base model: meta-llama/Llama-3.1-8B-Instruct
Parameters
- learning_rate = 5e-6,
- adam_beta1 = 0.9,
- adam_beta2 = 0.99,
- weight_decay = 0.1,
- warmup_ratio = 0.1,
- lr_scheduler_type = "cosine",
- optim = "paged_adamw_8bit",
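The hyperparameters above can be wired into a training configuration. As a minimal sketch, assuming the trl library's GRPOConfig (these argument names mirror Hugging Face TrainingArguments; the exact trainer setup, dataset loading, and reward functions are not shown on this card and are assumptions):

```python
# Sketch only: assumes trl's GRPOConfig / GRPOTrainer API is available.
from trl import GRPOConfig

config = GRPOConfig(
    learning_rate=5e-6,
    adam_beta1=0.9,
    adam_beta2=0.99,
    weight_decay=0.1,
    warmup_ratio=0.1,
    lr_scheduler_type="cosine",
    optim="paged_adamw_8bit",  # 8-bit paged AdamW; requires bitsandbytes
)
```

The config would then be passed to a GRPOTrainer together with the base model, the orca-math dataset, and reward functions that score the `<reasoning>`/`<answer>` format and answer correctness.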
Suggested system prompt for reasoning
Respond in the following format:
<reasoning>
...
</reasoning>
<answer>
...
</answer>
Do not forget the <reasoning></reasoning> and <answer></answer> tags.
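At inference time, the system prompt above makes replies easy to parse. A small helper (illustrative; the function name and example reply are hypothetical, not from this card) that extracts the two tagged sections:

```python
import re

SYSTEM_PROMPT = """Respond in the following format:
<reasoning>
...
</reasoning>
<answer>
...
</answer>"""

def parse_response(text):
    """Extract the <reasoning> and <answer> sections from a model reply.

    Returns (reasoning, answer); either is None if its tags are missing.
    """
    reasoning = re.search(r"<reasoning>(.*?)</reasoning>", text, re.DOTALL)
    answer = re.search(r"<answer>(.*?)</answer>", text, re.DOTALL)
    return (
        reasoning.group(1).strip() if reasoning else None,
        answer.group(1).strip() if answer else None,
    )

reply = "<reasoning>2 apples + 3 apples = 5 apples</reasoning>\n<answer>5</answer>"
print(parse_response(reply))  # → ('2 apples + 3 apples = 5 apples', '5')
```

SYSTEM_PROMPT would be sent as the system message in the chat template, with the math word problem as the user message.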