---
base_model: HuggingFaceTB/SmolLM-130m
---

### EVEN SMALLER: a Frankenstein of SmolLM-0.13B upped to 0.15B

Use this frankenbase for training. Built via semi-automated continuous merging to figure out the recipe. The model is more coherent.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6379683a81c1783a4a2ddba8/H6rv3ULQip4sYPpGGiZZe.png)

```bash
wget https://huggingface.co/nisten/Biggie-SmoLlm-0.15B-Base/resolve/main/Biggie_SmolLM_0.15B_Base_bf16.gguf
```

```bash
llama-cli -ngl 99 -co --temp 0 -p "How to build a city on Mars via calculating Aldrin-Cycler orbits?" -m Biggie_SmolLM_0.15B_Base_bf16.gguf
```

The temperature, min-p, and other sampling settings still need tuning, but even at the default temp of 0 it stayed coherent for the first 100 tokens. An amazing option for further training. And this is a merge of the base, not the instruct!

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6379683a81c1783a4a2ddba8/UK0_mQxy6GOHKxGKBbdhx.png)

I don't understand how the f a 150mb file can talk, but it can.
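Since the sampling settings still need tuning, a starting point might look like the following. The flag values here are guesses for a small base model, not a tested recipe; `llama-cli` from llama.cpp accepts these sampling options:

```bash
# Hypothetical sampling settings -- tune for your own use case.
# Lower temp + min-p keeps a tiny model from derailing as fast as top-p alone.
llama-cli -ngl 99 -co \
  --temp 0.7 --min-p 0.05 --top-k 40 --repeat-penalty 1.1 \
  -p "How to build a city on Mars via calculating Aldrin-Cycler orbits?" \
  -m Biggie_SmolLM_0.15B_Base_bf16.gguf
```

With a 0.15B base (not instruct) model, expect completion-style continuations rather than chat answers, so prompts phrased as text to continue tend to work better than questions.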