
Quantization made by Richard Erkhov.

Github

Discord

Request more models

Tinyllama-1.5B-Cinder-Test-2 - GGUF

Original model description:

license: mit

This is a depth-upscaled model built from the 616M Cinder model and Cinder v2. It still needs further training; I'm putting it up for testing, with more information to come (maybe).

A brief description of the project: I'm mixing several techniques I found interesting and have been testing. HF Cosmo is not great but decent, and it was fully trained in four days using a mix of fine-tuned, directed datasets and some synthetic textbook-style datasets. So I applied pruning and a similar dataset mix as Cosmo to TinyLlama (which was trained on a ton of data for an extended time relative to its size) to keep the TinyLlama model coherent during pruning. Now I am depth upscaling it: taking the pruned model and an original copy, then combining a majority of the layers from each to create a larger model. That model then needs more training, then fine-tuning. In theory the result is a well-performing 1.5B model that didn't need full-scale training.

Test 2: some training, re-depth-upscaled with Cinder Reason 1.3B and merged back with the 1.5B, followed by light training. Training will continue from this model for the next iteration.
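For readers unfamiliar with depth upscaling, here is a minimal sketch of the general idea: stack overlapping layer ranges from two copies of a model to make a deeper, larger one. The layer ranges and checkpoint name below are illustrative assumptions, not the exact recipe used for Cinder.

```python
# Minimal depth-upscaling sketch (SOLAR-style layer stacking).
# The checkpoint and layer split are illustrative assumptions,
# not the exact recipe used for Tinyllama-1.5B-Cinder.
import torch.nn as nn
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
donor = AutoModelForCausalLM.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")

# Keep the first 16 layers of one copy and the last 16 of the other,
# so the middle layers are duplicated and the model gets deeper.
layers = nn.ModuleList(list(base.model.layers[:16]) + list(donor.model.layers[6:]))
base.model.layers = layers
base.config.num_hidden_layers = len(layers)

# The upscaled model is larger but incoherent until it gets continued
# pretraining and fine-tuning, which is what this card describes.
base.save_pretrained("tinyllama-depth-upscaled")
```

In practice, tools such as mergekit's passthrough merge method are commonly used for this kind of layer slicing and for merging the result back with another model.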

Format: GGUF
Model size: 1.5B params
Architecture: llama

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
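To try one of the quantized files, any GGUF-compatible runtime works. Below is a minimal sketch with llama-cpp-python; the filename is a placeholder, so substitute whichever quantization level you downloaded from this repo.

```python
# Minimal sketch of loading a GGUF quant with llama-cpp-python.
# The filename is a placeholder; use the actual file you downloaded
# (e.g. a 4-bit or 8-bit variant from this repo).
from llama_cpp import Llama

llm = Llama(
    model_path="Tinyllama-1.5B-Cinder-Test-2.Q4_K_M.gguf",  # placeholder filename
    n_ctx=2048,
)

out = llm("Explain what depth upscaling is in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```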
