Update README.md
README.md
CHANGED
@@ -19,21 +19,23 @@ pipeline_tag: text-generation
 # Information
 Advanced, high-quality and lightweight reasoning in a tiny size that you can run locally in Q8 on your phone! 😲

-⚠️ This is an experimental version: it may enter reasoning loops, and it may not always answer your question properly or correctly. An updated version will be released later on. Currently, reasoning may
+⚠️ This is an experimental version: it may enter reasoning loops, and it may not always answer your question properly or correctly. An updated version will be released later on. Currently, reasoning may not always work in long conversations, as we've trained it on single-turn conversations only.
 [image]
 We've continued pre-training SmolLM2-1.7B-Instruct on advanced reasoning patterns to create this model.

 # Examples:
 All responses below were generated with no system prompt, a 400-token maximum and a temperature of 0.7 (not recommended; 0.3 - 0.5 is better):
 Generated inside the Android application Pocketpal, via GGUF Q8, using the model's prompt format.
-
+1)
 [image]
-
+2)
 [image]
-
+3)
 [image]
-
+4)
 [image]
+5)
+[image]

 # Uploaded model
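For reference, a minimal sketch of single-turn generation with the settings above (no system prompt, up to 400 new tokens, temperature in the 0.3 - 0.5 range). The repo id and the prompt are placeholders: the id points at the SmolLM2-1.7B-Instruct base named in the README, so substitute this model's actual Hugging Face id, or load its GGUF Q8 quant in your runner of choice instead.

```python
# Minimal sketch (assumed setup): single-turn generation with the README's
# recommended settings -- no system prompt, <=400 new tokens, temperature 0.3-0.5.
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder: the base model id; replace with this fine-tuned model's repo id.
model_id = "HuggingFaceTB/SmolLM2-1.7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Single-turn chat only, matching the training setup described above.
messages = [{"role": "user", "content": "What is 17 * 24? Think step by step."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

output_ids = model.generate(
    input_ids,
    max_new_tokens=400,  # the 400-token cap used for the examples
    do_sample=True,
    temperature=0.4,     # inside the recommended 0.3 - 0.5 range
)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```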