Qwen2.5-1.5B-Instruct Fine-Tuned on GSM8K with DeepSeek Augmentation
Model Overview
This model is a fine-tuned version of Qwen2.5-1.5B-Instruct, specialized for mathematical problem-solving and structured reasoning. It was trained on a GSM8K dataset enhanced with Chain-of-Thought (CoT) reasoning traces generated by DeepSeek-V3.
Key Features
- Base Model: Qwen2.5-1.5B-Instruct
- Fine-Tuned On: GSM8K enhanced with DeepSeek-V3
- Optimized for: Logical problem-solving and math reasoning
- Fine-tuning method: LoRA (Low-Rank Adaptation)
- Inference-ready: Available on Hugging Face and compatible with llama.cpp
- Supports GGUF: Optimized versions for Q4_K_M, Q8_0, Q5_K_M, and FP16
Model Details
- Developed by: [Your Name or Organization]
- Model Type: Causal Language Model (Text Generation)
- Languages: English (en)
- License: MIT License
- Fine-tuned from: Qwen/Qwen2.5-1.5B-Instruct
- Training Libraries: transformers + unsloth + trl
- Quantization: GGUF (Q4_K_M, Q8_0, Q5_K_M, f16)
🔗 Hugging Face Repository:
👉 [Fine-tuned Qwen2.5-1.5B-Instruct](https://huggingface.co/eagle0504/qwen-2_5-1_5b-instruct-using-openai-gsm8k-data-enhanced-with-deepseek-v1)
How to Use the Model
Using transformers in Python
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Load model and tokenizer
model_name = "eagle0504/qwen-2_5-1_5b-instruct-using-openai-gsm8k-data-enhanced-with-deepseek-v1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Move model to GPU if available
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

# Example inference
question = "Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. How many clips did Natalia sell altogether in April and May?"
inputs = tokenizer(question, return_tensors="pt").to(device)
output = model.generate(**inputs, max_new_tokens=200)

# Decode response
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
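Because this is an instruct-tuned model, prompting through the tokenizer's chat template may yield cleaner answers than feeding in the raw question. A minimal sketch, assuming the fine-tune keeps the base model's ChatML-style chat template and reusing the objects loaded above:

```python
# Wrap the question in a chat turn and apply the model's chat template
messages = [{"role": "user", "content": question}]
chat_inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,  # append the assistant-turn marker
    return_tensors="pt",
).to(device)

# Generate, then decode only the newly produced tokens
output = model.generate(chat_inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][chat_inputs.shape[-1]:], skip_special_tokens=True))
```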
Running the Model with llama.cpp
Step 1: Install llama.cpp
```bash
brew install llama.cpp
```
Step 2: Download the Model
```bash
mkdir -p ~/llama_models && cd ~/llama_models
wget https://huggingface.co/eagle0504/qwen-2_5-1_5b-instruct-using-openai-gsm8k-data-enhanced-with-deepseek-v1/resolve/main/q8_0.gguf
```
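If you prefer Python over wget, the same file can be fetched with huggingface_hub (a sketch; the filename q8_0.gguf comes from the URL above):

```python
from huggingface_hub import hf_hub_download

# Fetch the Q8_0 GGUF into the local Hugging Face cache
path = hf_hub_download(
    repo_id="eagle0504/qwen-2_5-1_5b-instruct-using-openai-gsm8k-data-enhanced-with-deepseek-v1",
    filename="q8_0.gguf",
)
print(path)  # pass this path to llama-cli via -m
```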
Step 3: Run the Model
```bash
llama-cli -m ~/llama_models/q8_0.gguf --interactive
```
Alternatively, let llama.cpp pull a GGUF directly from Hugging Face:
```bash
llama-cli -hf eagle0504/qwen-2-5-3b-instruct-using-openai-gsm8k-gguf-data-enhanced-with-deepseek-v3-small:Q8_0
```
Step 4: Test with a Prompt
```bash
llama-cli -m ~/llama_models/q8_0.gguf -p "Explain quantum computing in simple terms."
```
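The downloaded GGUF also works from Python through the llama-cpp-python bindings. A minimal sketch, assuming `pip install llama-cpp-python`:

```python
import os
from llama_cpp import Llama

# Load the quantized model; n_ctx sets the context window
llm = Llama(
    model_path=os.path.expanduser("~/llama_models/q8_0.gguf"),
    n_ctx=1024,
)

# Plain-text completion; the return value is an OpenAI-style dict
result = llm("Explain quantum computing in simple terms.", max_tokens=200)
print(result["choices"][0]["text"])
```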
Training Details
Dataset Used
The model was fine-tuned on:
🔹 eagle0504/openai-gsm8k-enhanced-using-together-ai-deepseek-train8k-test1k-v1
This dataset contains:
- 8K training samples
- 1K testing samples
- Features: question, answer, cot (Chain-of-Thought)
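A quick way to inspect the dataset with the datasets library (a sketch; the train/test split names are assumed from the 8K/1K description above):

```python
from datasets import load_dataset

# Load the DeepSeek-augmented GSM8K dataset from the Hub
ds = load_dataset(
    "eagle0504/openai-gsm8k-enhanced-using-together-ai-deepseek-train8k-test1k-v1"
)

print(ds)                    # split names and row counts
row = ds["train"][0]
print(row["question"])       # GSM8K word problem
print(row["cot"])            # DeepSeek-generated chain-of-thought
print(row["answer"])         # final answer
```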
Training Configuration
- Framework: transformers + unsloth + trl
- Optimization: LoRA applied to QKV projections
- Learning Rate: 1e-6
- Optimizer: AdamW (8-bit)
- Mixed Precision: bf16 or fp16
- Batch Size: 8
- Max Sequence Length: 1024
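The original training script is not reproduced here, but a rough peft + trl sketch matching the hyperparameters above might look like the following. The LoRA rank/alpha, the exact projection module names, and how question/cot/answer were concatenated into training text are all assumptions not stated in this card:

```python
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

ds = load_dataset(
    "eagle0504/openai-gsm8k-enhanced-using-together-ai-deepseek-train8k-test1k-v1"
)

def formatting_func(example):
    # Assumed layout: question, then the DeepSeek CoT, then the final answer.
    return (
        f"Question: {example['question']}\n"
        f"{example['cot']}\n"
        f"Answer: {example['answer']}"
    )

# LoRA on the attention QKV projections, as stated above;
# r and lora_alpha are illustrative values.
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj"],
    task_type="CAUSAL_LM",
)

# Hyperparameters from the list above: lr 1e-6, 8-bit AdamW,
# bf16 mixed precision, batch size 8, max sequence length 1024.
args = SFTConfig(
    output_dir="qwen2.5-1.5b-gsm8k-deepseek",
    learning_rate=1e-6,
    optim="adamw_bnb_8bit",
    bf16=True,
    per_device_train_batch_size=8,
    max_seq_length=1024,
)

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-1.5B-Instruct",
    args=args,
    train_dataset=ds["train"],
    formatting_func=formatting_func,
    peft_config=peft_config,
)
trainer.train()
```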
Model Performance
Training Loss
| Step | Training Loss |
|------|---------------|
| 10   | 1.1335 |
| 100  | 0.9770 |
| 3100 | 0.1722 |
| 9340 | 0.1553 |
Bias, Risks, and Limitations
Potential Risks
- May hallucinate incorrect reasoning steps if prompts are unclear.
- Could struggle with complex mathematical problems outside its training data.
- Limited generalization to non-math reasoning tasks.
Recommendations
- If using this model for critical applications, verify outputs with human review.
- For better performance, fine-tune on larger datasets with real-world numerical reasoning.
Environmental Impact
Estimated Carbon Emissions:
- Hardware Used: NVIDIA A100 GPU
- Training Time: ~5 hours
- Estimated CO2 Emitted: ~8.2 kg CO2eq (via ML Impact Calculator)
Citation
If you use this model in your research, please cite it as:
```bibtex
@misc{coming,
  title={Fine-Tuned Qwen2.5-1.5B-Instruct on GSM8K with DeepSeek Augmentation},
  author={Your Name},
  year={2024},
  url={https://huggingface.co/eagle0504/qwen-2_5-1_5b-instruct-using-openai-gsm8k-data-enhanced-with-deepseek-v1}
}
```
Contact
For questions, suggestions, or issues, reach out via Hugging Face Discussions.
🎉 Thank you for using this model! If you find it useful, please ⭐ it on Hugging Face! 🚀🔥