KobbleTinyV2-1.1B

This is the GGUF quantization of https://huggingface.co/concedo/KobbleTiny

You can use KoboldCpp to run this model. At only 1.1B parameters, it is well suited to running on mobile or low-end devices.
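A local launch with KoboldCpp might look like the following; the quantization filename, context size, and port are illustrative choices, not taken from this card:

```shell
# Hypothetical filename: use whichever GGUF quantization you downloaded.
python koboldcpp.py --model KobbleTinyV2-1.1B.Q4_K_M.gguf --contextsize 2048 --port 5001
```

Once the server is up, the Kobold Lite UI is served from the chosen port.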

Update: KobbleTiny has been upgraded to V2! The old V1 GGUF is still available at this link.

Try it live now: https://concedo-koboldcpp-kobbletiny.hf.space/

Dataset and Objectives

The Kobble Dataset is a semi-private aggregated dataset made from multiple online sources and web scrapes. It contains content chosen and formatted specifically to work with KoboldAI software and Kobold Lite.

Dataset Categories:

  • Instruct: Single-turn instruct examples presented in the Alpaca format, with an emphasis on uncensored and unrestricted responses.
  • Chat: Two-participant roleplay conversation logs in the multi-turn raw chat format that KoboldAI uses.
  • Story: Unstructured fiction excerpts, including literature containing various erotic and provocative content.

Prompt template: Alpaca

### Instruction:
{prompt}

### Response:
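The template above can be filled in programmatically. A minimal Python sketch (the function name is illustrative, not part of any library):

```python
def make_alpaca_prompt(instruction: str) -> str:
    """Format a single-turn instruction in the Alpaca template shown above."""
    return f"### Instruction:\n{instruction}\n\n### Response:\n"

# Example: build a prompt ready to send to the model.
print(make_alpaca_prompt("Name three planets."))
```

The model's completion is expected to follow directly after the `### Response:` line.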

Note: No assurances will be provided about the origins, safety, or copyright status of this model, or of any content within the Kobble dataset.
If you belong to a country or organization that has strict AI laws or restrictions against unlabelled or unrestricted content, you are advised not to use this model.

GGUF
Model size: 1.1B params
Architecture: llama
Available quantizations: 4-bit, 6-bit, 8-bit, 16-bit