---
base_model: HuggingFaceTB/SmolLM2-1.7B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
- code
- superthoughts
- cot
- reasoning
license: apache-2.0
language:
- en
pipeline_tag: text-generation
new_version: Pinkstack/Superthoughts-lite-v1
---
![superthoughtslight.png](https://cdn-uploads.huggingface.co/production/uploads/6710ba6af1279fe0dfe33afe/2LuPB_ZPCGni3-PyCkL0-.png)
# Information
Advanced, high-quality, lightweight reasoning in a tiny model that you can run locally in Q8 on your phone! 😲

⚠️ This is an experimental version: it may not always answer your question properly or correctly. Reasoning may not currently work well in long conversations, as we trained it on single-turn conversations only.
We fine-tuned SmolLM2-1.7B-Instruct on an advanced reasoning-pattern dataset (half synthetic, half written manually by us) to create this model. It is supposed to output like this:
```
<|im_start|>user
What are you<|im_end|>
<|im_start|>assistant
<think>
Alright, the user just asked 'What are you', meaning they want to know who I am. I think my name is Superthoughts (lite version), created by Pinkstack on January 2025. I'm ready to answer their question.
</think>
Welcome! I'm Superthoughts (lite) created by Pinkstack in January 2025. Ready to help you with whatever you need!<|im_end|>
```
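If you want to try it with the `transformers` library instead of a GGUF runtime, a minimal sketch along these lines should work. Note the assumptions: the repository id `Pinkstack/Superthoughts-lite-1.8B-experimental-o1` is inferred from the leaderboard links at the bottom of this card, and the tokenizer's chat template is assumed to emit the `<|im_start|>` format shown above.
```python
# Minimal usage sketch (not an official snippet from the authors).
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repository id, inferred from the Open LLM Leaderboard links below.
model_id = "Pinkstack/Superthoughts-lite-1.8B-experimental-o1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Single-turn conversation, matching how the model was trained.
messages = [{"role": "user", "content": "What are you"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# A temperature of 0.3 - 0.5 is recommended on this card for better results.
outputs = model.generate(inputs, max_new_tokens=400, do_sample=True, temperature=0.4)
# Decode only the newly generated tokens, keeping the <think> ... </think> block visible.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=False))
```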

# Examples:
All responses below were generated with no system prompt, a maximum of 400 tokens, and a temperature of 0.7 (not recommended; 0.3 - 0.5 is better).
They were generated inside the Android application Pocketpal via GGUF Q8, using the model's prompt format.
1)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6710ba6af1279fe0dfe33afe/wh33o-vjxIePfPqoN3q1z.png)
2)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6710ba6af1279fe0dfe33afe/Y8optw73kTgqMnZKj3wKj.png)
3)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6710ba6af1279fe0dfe33afe/6lywy3IYEIgzPnUIJ5RvF.png)
4)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6710ba6af1279fe0dfe33afe/0K2rR9osmT20JrDvZuptV.png)

# Uploaded model

- **Developed by:** Pinkstack
- **License:** apache-2.0
- **Finetuned from model:** HuggingFaceTB/SmolLM2-1.7B-Instruct

This SmolLM2 model was trained with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/Pinkstack__Superthoughts-lite-1.8B-experimental-o1-details)!
Summarized results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/contents/viewer/default/train?q=Pinkstack%2FSuperthoughts-lite-1.8B-experimental-o1&sort[column]=Average%20%E2%AC%86%EF%B8%8F&sort[direction]=desc)!