
Quantization made by Richard Erkhov.

Github | Discord | Request more models

Tinypus-1.5B - AWQ

Original model description:

```yaml
license: mit
datasets:
- garage-bAInd/Open-Platypus
pipeline_tag: text-generation
```

*drumroll please*

Introducing Tinypus!


I passthrough-merged the base TinyLlama Chat model with itself, then fine-tuned the result on roughly one third of the Platypus dataset.
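The card doesn't say how the one-third subset was selected; a hypothetical sketch, assuming simple random sampling and an approximate row count for Open-Platypus (~25k):

```python
# Hypothetical sketch of carving out ~1/3 of Open-Platypus for fine-tuning.
# The row count and the use of random sampling are assumptions, not details
# from the card.
import random

rows = list(range(24_926))            # approximate size of Open-Platypus
random.seed(0)                        # for reproducibility
subset = random.sample(rows, k=len(rows) // 3)

print(len(subset))                    # 8308 rows, about a third
```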

Observations:

  • It's smarter (I think?)

  • It sometimes emits a stray "### Instruction:" line. This could be due to the Platypus dataset, or to the fact that I know jack shit about programming. You can add it to "custom stopping strings" in oobabooga.

  • It may be possible to train very specialized mini experts and merge them???
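A minimal post-processing workaround for the stray marker, mirroring what oobabooga's "custom stopping strings" option does (the helper name here is made up):

```python
# Cut generated text at the first stray "### Instruction:" marker.
def trim_at_stop(text: str, stop: str = "### Instruction:") -> str:
    idx = text.find(stop)
    return text if idx == -1 else text[:idx].rstrip()

print(trim_at_stop("Platypuses are monotremes.\n### Instruction: ignore"))
# -> Platypuses are monotremes.
```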

Template

Same as TinyLlama/TinyLlama-1.1B-Chat-v1.0
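For reference, TinyLlama-1.1B-Chat-v1.0 uses a Zephyr-style chat template; the sketch below is reproduced from memory, so verify the exact string against the model's tokenizer config before relying on it:

```python
# Zephyr-style prompt format assumed for TinyLlama-1.1B-Chat-v1.0
# (verify against the model's chat_template).
def format_prompt(system: str, user: str) -> str:
    return f"<|system|>\n{system}</s>\n<|user|>\n{user}</s>\n<|assistant|>\n"

print(format_prompt("You are a helpful assistant.", "What is a platypus?"))
```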

Merge details

```yaml
slices:
  - sources:
      - model: E://text-generation-webui//models//TinyLlama
        layer_range: [0, 12]
  - sources:
      - model: E://text-generation-webui//models//TinyLlama
        layer_range: [4, 22]
merge_method: passthrough
dtype: bfloat16
```
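For context on where the extra depth comes from: a passthrough merge simply stacks the listed layer ranges. Assuming mergekit's half-open `layer_range` semantics, the two overlapping slices give 30 transformer layers versus TinyLlama's original 22, which is roughly how a 1.1B model grows to ~1.5B parameters:

```python
# Count the layers produced by stacking the two slices from the config,
# assuming half-open [start, end) layer_range semantics.
ranges = [(0, 12), (4, 22)]
layers = [i for start, end in ranges for i in range(start, end)]

print(len(layers))   # 30 layers in the merged stack (12 + 18)
```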

QLoRA Details

  • Chunk length: 1152

  • R/A (LoRA rank/alpha): 64/128

  • Epochs: 1

  • Target modules: q-k-v-o
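A toy illustration of what the "R/A: 64/128" figures mean in the LoRA update: the frozen weight W receives a low-rank delta scaled by alpha / r. The matrices below are illustrative only (not real q/k/v/o projection sizes), and the update is shown with rank 1 for brevity:

```python
# LoRA update: W_eff = W + (alpha / r) * B @ A, in pure Python.
# r=64 and alpha=128 come from the card; the 2x2 sizes are illustrative.
r, alpha = 64, 128
scaling = alpha / r                    # = 2.0

W = [[1.0, 0.0], [0.0, 1.0]]           # frozen base weight
B = [[0.5], [0.5]]                     # (out x rank) trainable factor
A = [[0.1, 0.2]]                       # (rank x in) trainable factor

def matmul(X, Y):
    return [[sum(x * Y[k][j] for k, x in enumerate(row))
             for j in range(len(Y[0]))] for row in X]

delta = matmul(B, A)                   # low-rank update B @ A
W_eff = [[W[i][j] + scaling * delta[i][j] for j in range(2)]
         for i in range(2)]
```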
