Use this frankenbase for training.

If you're here from Twitter and impatient, get the trained checkpoint file:

```bash
wget https://huggingface.co/nisten/Biggie-SmoLlm-0.15B-Base/resolve/main/biggie_groked_int8_q8_0.gguf
```
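If the download gets interrupted, llama.cpp tends to fail later with a cryptic load error. A quick sanity check (not from the original README, just a sketch based on the GGUF format's 4-byte ASCII magic) is:

```python
def looks_like_gguf(header: bytes) -> bool:
    """GGUF files begin with the 4-byte ASCII magic 'GGUF'."""
    return header[:4] == b"GGUF"

# A real check would read the first bytes of the downloaded file:
#   with open("biggie_groked_int8_q8_0.gguf", "rb") as f:
#       assert looks_like_gguf(f.read(4))
assert looks_like_gguf(b"GGUF\x03\x00\x00\x00")
assert not looks_like_gguf(b"\x7fELF")
```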

The settings are very finicky, so be careful with your experimentation:

```bash
./llama-cli -fa -b 512 -ctv q8_0 -ctk q8_0 --min-p 0.3 --top-p 0.85 --keep -1 -p "You are a NASA JPL scientist. Human: I want to bring my cat to mars." -m biggie_groked_int8_q8_0.gguf -co -cnv --in-prefix "<|im_start|>Human:" --reverse-prompt "Human:" -c 1024 -n 700 --temp 1.5 -ngl 0 -t 1
```
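Why so finicky? `--temp`, `--min-p`, and `--top-p` each reshape the token distribution before the next filter runs, so small changes compound. A toy sketch of one plausible ordering (illustrative only, not llama.cpp's exact sampler pipeline; `sample_token` is a hypothetical name):

```python
import math
import random

def sample_token(logits, temp=1.5, min_p=0.3, top_p=0.85, rng=None):
    """Toy sampler chaining temperature, min-p, and top-p filtering."""
    rng = rng or random.Random(0)
    # Temperature: temp -> 0 collapses to greedy argmax.
    if temp <= 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    # Softmax over temperature-scaled logits (max-subtracted for stability).
    scaled = [l / temp for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # min-p: drop tokens whose probability is below min_p * p(best token).
    cutoff = min_p * max(probs)
    cand = sorted(((p, i) for i, p in enumerate(probs) if p >= cutoff),
                  reverse=True)
    # top-p: keep the smallest prefix of candidates whose mass reaches top_p.
    kept, mass = [], 0.0
    for p, i in cand:
        kept.append((p, i))
        mass += p
        if mass >= top_p:
            break
    # Renormalize the survivors and draw one.
    z = sum(p for p, _ in kept)
    r = rng.random() * z
    for p, i in kept:
        r -= p
        if r <= 0:
            return i
    return kept[-1][1]
```

Note how `min_p=0.3` already discards everything below 30% of the best token's probability, so a high `--temp 2` mostly flattens the handful of survivors rather than the whole vocabulary.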

Done via semi-automated continuous merging to figure out the recipe. The model is more coherent.
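The README doesn't publish the merge recipe itself. As an illustration of the simplest building block such merging is based on, per-tensor linear interpolation of two checkpoints looks like this (function name and `alpha` are hypothetical; plain lists stand in for real tensors):

```python
def merge_state_dicts(a, b, alpha=0.5):
    """Per-tensor linear interpolation: merged = (1 - alpha) * a + alpha * b.
    Both checkpoints must share tensor names and shapes."""
    assert a.keys() == b.keys(), "checkpoints must have identical tensor names"
    return {
        name: [(1 - alpha) * x + alpha * y for x, y in zip(a[name], b[name])]
        for name in a
    }

# Toy 1-D "tensors" standing in for real weight matrices:
ckpt_a = {"model.layers.0.weight": [0.0, 2.0]}
ckpt_b = {"model.layers.0.weight": [4.0, 2.0]}
merged = merge_state_dicts(ckpt_a, ckpt_b, alpha=0.25)
# merged["model.layers.0.weight"] == [1.0, 2.0]
```

A "continuous" search would sweep `alpha` (possibly per layer), evaluate each candidate, and keep the most coherent one.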

The temperature and min-p settings still need to be adjusted, but even at the default temp 0 the model was coherent for the first 100 tokens. An amazing option for further training, and this is a merge of the base, not the instruct!

## 🧠 What's Really Going Down Here?

We're talking about a convergence of a whole bunch of stuff; more papers will be written about this: