Model Card for Qwen2.5-1.5B-Open-R1-Distill

This model is a fine-tuned version of Qwen/Qwen2.5-1.5B-Instruct. It has been trained using TRL.

Quick start

from transformers import pipeline

# Load the fine-tuned model as a chat-style text-generation pipeline on the GPU.
generator = pipeline("text-generation", model="Mingsmilet/Qwen2.5-1.5B-Open-R1-Distill", device="cuda")

# Example AMC-style math question, passed as a single-turn chat message.
question = "The fraction\n\\[\\frac{\\left(3^{2008}\\right)^2-\\left(3^{2006}\\right)^2}{\\left(3^{2007}\\right)^2-\\left(3^{2005}\\right)^2}\\]\nsimplifies to which of the following?\n$\\mathrm{(A)}\\ 1\\qquad\\mathrm{(B)}\\ \\frac{9}{4}\\qquad\\mathrm{(C)}\\ 3\\qquad\\mathrm{(D)}\\ \\frac{9}{2}\\qquad\\mathrm{(E)}\\ 9$"

# Allow a long completion so the model can produce its full reasoning trace.
output = generator([{"role": "user", "content": question}], max_new_tokens=5000, return_full_text=False)[0]
print(output["generated_text"])
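If you prefer to call the model directly rather than through the pipeline helper, the sketch below loads it with AutoModelForCausalLM and applies the tokenizer's chat template. The dtype and generation settings are illustrative assumptions, not values prescribed by this card.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Mingsmilet/Qwen2.5-1.5B-Open-R1-Distill"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Load in bfloat16 and place on GPU automatically if one is available.
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Reuse the `question` string from the snippet above as a single-turn chat message.
messages = [{"role": "user", "content": question}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=5000)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))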

Training procedure

Training runs for this model can be visualized in Weights & Biases.

This model was trained with supervised fine-tuning (SFT).
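For reference, a minimal sketch of an SFT run with TRL's SFTTrainer is shown below. The dataset identifier and hyperparameters are placeholder assumptions, not the exact settings used to produce this checkpoint.

from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder dataset id: substitute the reasoning-distillation dataset actually used.
dataset = load_dataset("your-org/your-distill-dataset", split="train")

# Illustrative hyperparameters only.
training_args = SFTConfig(
    output_dir="Qwen2.5-1.5B-Open-R1-Distill",
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,
    learning_rate=2.0e-5,
    num_train_epochs=1,
    bf16=True,
    logging_steps=10,
    report_to="wandb",  # matches the Weights & Biases link above
)

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-1.5B-Instruct",  # base checkpoint this model was fine-tuned from
    args=training_args,
    train_dataset=dataset,
)
trainer.train()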

Framework versions

  • TRL: 0.16.0.dev0
  • Transformers: 4.49.0.dev0
  • PyTorch: 2.5.1
  • Datasets: 3.2.0
  • Tokenizers: 0.21.0

Citations

Cite TRL as:

@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}