Introducing SmallThinker-3B-alpha: A Small Model Fine-tuned on QwQ Synthetic Data

We introduce SmallThinker-3B-alpha, a new model fine-tuned from Qwen2.5-3B-Instruct on synthetic data generated by QwQ-32B-Preview.
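Below is a minimal usage sketch with Hugging Face transformers. The repo id is a placeholder (this card does not state the exact Hub id), and the prompt and generation settings are illustrative; since the model is fine-tuned from Qwen2.5-3B-Instruct, it is assumed to use that model's chat template.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SmallThinker-3B-alpha"  # placeholder: substitute the actual Hub repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Build a chat-formatted prompt (inherited from Qwen2.5-3B-Instruct).
messages = [{"role": "user", "content": "What is 7 * 13 + 4?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```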

Benchmark Performance

| Model | AIME24 | AMC23 | GAOKAO2024_I | GAOKAO2024_II | MMLU_STEM | AMPS_Hard | math_comp |
|---|---|---|---|---|---|---|---|
| Qwen2.5-3B-Instruct | 6.67 | 45 | 50 | 35.8 | 59.8 | - | - |
| SmallThinker | 16.667 | 57.5 | 64.2 | 57.1 | 68.2 | 70 | 46.8 |
| GPT-4o | 9.3 | - | - | - | 64.2 | 57 | 50 |

Intended Use Cases

SmallThinker is designed for the following use cases:

  1. Edge Deployment: Its small size makes it ideal for deployment on resource-constrained devices.
  2. Draft Model for QwQ-32B-Preview: SmallThinker can serve as a fast and efficient draft model for the larger QwQ-32B-Preview model (see the sketch after this list).
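A minimal sketch of the draft-model use case via transformers assisted generation: passing the small model as `assistant_model` lets it propose tokens that the large model verifies in parallel, while preserving the target model's output distribution. This assumes both models share a tokenizer (plausible here, as both are Qwen2.5-based); the SmallThinker repo id is again a placeholder.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

target_id = "Qwen/QwQ-32B-Preview"
draft_id = "SmallThinker-3B-alpha"  # placeholder: substitute the actual Hub repo id

tokenizer = AutoTokenizer.from_pretrained(target_id)
target = AutoModelForCausalLM.from_pretrained(
    target_id, torch_dtype=torch.bfloat16, device_map="auto"
)
draft = AutoModelForCausalLM.from_pretrained(
    draft_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Prove that sqrt(2) is irrational."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(target.device)

# The draft model drafts candidate tokens; the target model accepts or
# rejects them, so the output matches what the target alone would produce.
outputs = target.generate(inputs, assistant_model=draft, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```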

Limitations & Disclaimer

Please be aware of the following limitations:

  • Language Limitation: The model has been trained only on English-language datasets, so its capabilities in other languages remain limited.
  • Unpredictable Outputs: Due to its small size and the probabilistic nature of generation, the model may produce unexpected outputs. Users should exercise caution and validate its responses.
  • Repetition Issue: The model tends to repeat itself when answering high-difficulty questions. Increasing the repetition_penalty mitigates this (see the sketch after this list).
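A brief sketch of raising repetition_penalty in transformers generate(), reusing model and inputs from the first sketch above. The value 1.1 is an illustrative starting point, not a tuned recommendation from this card; values above 1.0 penalize tokens that have already appeared.

```python
outputs = model.generate(
    inputs,
    max_new_tokens=2048,
    repetition_penalty=1.1,  # >1.0 discourages repeating earlier tokens
    do_sample=True,
    temperature=0.7,  # illustrative sampling settings, tune per task
)
```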