base_model: HuggingFaceTB/SmolLM2-1.7B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
- code
- superthoughts
- cot
- reasoning
license: apache-2.0
language:
- en
pipeline_tag: text-generation
new_version: Pinkstack/Superthoughts-lite-v1
# Information
Advanced, high-quality and lightweight reasoning at a tiny size that you can run locally in Q8 on your phone! 😲
⚠️ This is an experimental version: it may not always answer your question properly or correctly. Reasoning may currently fail on long conversations, as we trained it on single-turn conversations only. We fine-tuned SmolLM2-1.7B-Instruct on an advanced reasoning-pattern dataset (half synthetic, half written manually by us) to create this model. It is supposed to output like this:
<|im_start|>user
What are you<|im_end|>
<|im_start|>assistant
<think>
Alright, the user just asked 'What are you', meaning they want to know who I am. I think my name is Superthoughts (lite version), created by Pinkstack in January 2025. I'm ready to answer their question.
</think>
Welcome! I'm Superthoughts (lite) created by Pinkstack in January 2025. Ready to help you with whatever you need!<|im_end|>
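For reference, here is a minimal sketch of reproducing this prompt format with the Transformers chat template. The repository id below is a placeholder assumption, not necessarily the exact repo name; substitute the actual model id.

```python
# Minimal sketch of the prompt format via the Transformers chat template.
# NOTE: "Pinkstack/Superthoughts-lite" is a placeholder repo id (assumption).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Pinkstack/Superthoughts-lite")  # assumption
messages = [{"role": "user", "content": "What are you"}]

# Produces the <|im_start|>user ... <|im_start|>assistant format shown above;
# the model is then expected to open its reply with a <think> block.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```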
Examples:
All responses below were generated with no system prompt, a maximum of 400 tokens, and a temperature of 0.7 (not recommended; 0.3-0.5 is better). They were generated inside the Android application PocketPal via GGUF Q8, using the model's prompt format. A generation sketch with the recommended settings follows the examples.
Example screenshots 1-4 (not reproduced here).
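Below is a hedged sketch of single-turn generation with the recommended sampling settings (temperature around 0.3-0.5, up to 400 new tokens). The repository id is again a placeholder assumption.

```python
# Sketch: single-turn generation with the recommended settings.
# NOTE: "Pinkstack/Superthoughts-lite" is a placeholder repo id (assumption).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Pinkstack/Superthoughts-lite"  # assumption: replace with the real repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

messages = [{"role": "user", "content": "Why is the sky blue?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    inputs,
    max_new_tokens=400,  # matches the 400-token limit used for the examples above
    do_sample=True,
    temperature=0.4,     # within the recommended 0.3-0.5 range
)
# Decode only the newly generated tokens (the reply, including its <think> block).
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```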
# Uploaded model
- Developed by: Pinkstack
- License: apache-2.0
- Finetuned from model: HuggingFaceTB/SmolLM2-1.7B-Instruct
This SmolLM2 model was trained with Unsloth and Hugging Face's TRL library.
# Open LLM Leaderboard Evaluation Results
Detailed results can be found here! Summarized results can be found here!