Model Card for SmolLM2-135M-Instruct-thinking-function_calling-V0

This model is a fine-tuned version of HuggingFaceTB/SmolLM2-135M-Instruct. It has been trained using TRL.

Quick start

from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
# Load the model as a chat-style text-generation pipeline (this snippet assumes a CUDA device).
generator = pipeline("text-generation", model="emredeveloper/SmolLM2-135M-Instruct-thinking-function_calling-V0", device="cuda")
# Pass the prompt as a chat message and return only the newly generated text.
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
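
The pipeline call above hardcodes a CUDA device. As a minimal sketch using only standard transformers APIs, the same generation can be done at a lower level with AutoModelForCausalLM and the tokenizer's chat template, falling back to CPU when no GPU is available (the prompt and sampling settings here are illustrative, not recommended values):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "emredeveloper/SmolLM2-135M-Instruct-thinking-function_calling-V0"
device = "cuda" if torch.cuda.is_available() else "cpu"  # fall back to CPU

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id).to(device)

# Format the conversation with the model's chat template.
messages = [{"role": "user", "content": "Which era would you visit with a one-way time machine?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(device)

# Generate, then decode only the newly produced tokens.
output_ids = model.generate(input_ids, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))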

Training procedure

This model was trained with supervised fine-tuning (SFT).
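
For reference, a minimal sketch of what an SFT run with TRL's SFTTrainer looks like; the dataset and hyperparameters below are illustrative assumptions, not the actual configuration used to train this model:

from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Hypothetical dataset: substitute the conversational data actually used.
dataset = load_dataset("trl-lib/Capybara", split="train")

training_args = SFTConfig(
    output_dir="SmolLM2-135M-Instruct-thinking-function_calling-V0",
    per_device_train_batch_size=8,  # illustrative value
    num_train_epochs=1,             # illustrative value
)

trainer = SFTTrainer(
    model="HuggingFaceTB/SmolLM2-135M-Instruct",  # base model being fine-tuned
    args=training_args,
    train_dataset=dataset,
)
trainer.train()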

Framework versions

  • TRL: 0.15.1
  • Transformers: 4.48.3
  • PyTorch: 2.5.1+cu124
  • Datasets: 3.3.1
  • Tokenizers: 0.21.0

Citations

Cite TRL as:

@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thite and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}