# Model Card for SmolLM2-FT-MyDataset

This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-135M](https://huggingface.co/HuggingFaceTB/SmolLM2-135M). It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="riswanahamed/SmolLM2-FT-MyDataset", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure

### Training Methods

#### What I Did
I fine-tuned a pre-trained language model using the Hugging Face `transformers` library. The base model was adapted to perform better on a specific task by training it on a domain-specific dataset.
#### How I Did It

**Fine-Tuning Setup:**
- Configured the model training parameters, including the learning rate, batch size, and number of steps.
- Used `SFTTrainer` from Hugging Face's TRL library for seamless training with built-in evaluation capabilities (a minimal configuration sketch follows this list).
- Trained the model for 1 epoch to prevent overfitting, as the dataset was relatively small and hardware resources were limited.
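A minimal sketch of this setup is shown below. The dataset name, hyperparameter values, and output directory are illustrative assumptions, not the exact configuration used to produce this checkpoint.

```python
# Minimal fine-tuning sketch with TRL's SFTTrainer. The dataset and all
# hyperparameter values are assumptions for illustration only.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Hypothetical chat-style dataset; replace with your own domain-specific data.
dataset = load_dataset("HuggingFaceTB/smoltalk", "everyday-conversations")

training_args = SFTConfig(
    output_dir="./SmolLM2-FT-MyDataset",
    num_train_epochs=1,                # a single epoch to limit overfitting
    per_device_train_batch_size=4,     # small batch size for limited hardware
    learning_rate=5e-5,
    logging_steps=10,                  # log training loss at regular intervals
    eval_strategy="steps",             # evaluate on the validation split
    eval_steps=50,
)

trainer = SFTTrainer(
    model="HuggingFaceTB/SmolLM2-135M",  # base model loaded by name
    args=training_args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
)
trainer.train()
```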
**Training Environment:**
- The training was performed in Google Colab using a CPU/GPU environment.
- Adjusted batch sizes and learning rates to balance performance with the available resources.
**Evaluation:**
- Monitored training loss and validation loss at regular intervals to ensure the model was learning effectively (see the snippet after this list).
- Evaluated the model using metrics like [accuracy, F1 score, or other task-specific metrics].
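Continuing the `trainer` object from the sketch above, loss monitoring might look like the snippet below; the metric keys are the Trainer defaults, and any task-specific metrics would require a custom `compute_metrics` function (not shown).

```python
# Sketch of monitoring losses, continuing the `trainer` from the sketch above.
# `eval_loss` is the default key reported by the Trainer.
metrics = trainer.evaluate()
print(f"validation loss: {metrics['eval_loss']:.4f}")

# Training losses logged every `logging_steps` are kept in the trainer state.
for record in trainer.state.log_history:
    if "loss" in record:
        print(record["step"], record["loss"])
```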
**Saving the Model:**
- The fine-tuned model was saved to a specified output directory for reuse (a saving sketch follows below).
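A sketch of the saving step, again continuing the `trainer` above; the output path is a placeholder and publishing to the Hub is optional.

```python
# Sketch of saving the fine-tuned model for reuse; the directory name is a
# placeholder. save_model writes the weights, config, and tokenizer files.
trainer.save_model("./SmolLM2-FT-MyDataset")

# Optional: publish to the Hugging Face Hub (requires a prior `huggingface-cli login`).
trainer.push_to_hub()
```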
#### What the User Should Do

**Use the Model:**
- Load the model using the Hugging Face `transformers` library (a loading sketch follows this list).
- Tokenize your inputs and pass them to the model for inference.
- If your task or domain differs, fine-tune the model further on your dataset.
- Follow the same process: prepare the dataset, set training configurations, and monitor evaluation metrics.
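For reference, a direct-loading sketch without the `pipeline` helper is shown below. The prompt is illustrative, and it assumes the tokenizer ships a chat template, as the quick start example implies.

```python
# Sketch of loading the model and running inference directly; the prompt is
# illustrative, and a chat template is assumed to be present in the tokenizer.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "riswanahamed/SmolLM2-FT-MyDataset"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [{"role": "user", "content": "Give me three tips for learning a new language."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")

with torch.no_grad():
    output_ids = model.generate(input_ids, max_new_tokens=128)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```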
**Experiment with Parameters:**
- If you have access to better hardware, experiment with larger batch sizes or additional epochs to improve results.
- Use hyperparameter tuning to find the best configuration for your use case (an example variation is sketched below).
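As a rough illustration for stronger hardware, the `SFTConfig` from the earlier sketch could be varied along these lines; the values are assumed starting points for experimentation, not recommendations.

```python
# Illustrative hyperparameter variations for stronger hardware; all values are
# assumptions to experiment with, not settings used for this checkpoint.
from trl import SFTConfig

training_args = SFTConfig(
    output_dir="./SmolLM2-FT-MyDataset",
    num_train_epochs=3,                 # more passes over the data
    per_device_train_batch_size=16,     # larger batches on a bigger GPU
    gradient_accumulation_steps=2,      # effective batch size of 32
    learning_rate=2e-5,
    warmup_ratio=0.1,
)
```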
### Framework versions
- TRL: 0.13.0
- Transformers: 4.47.1
- Pytorch: 2.5.1+cu121
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```