Triangle104/Tinybra_13B-Q4_K_S-GGUF

This model was converted to GGUF format from SicariusSicariiStuff/Tinybra_13B using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.


Model details:

Tenebră, an experimental AI model available in various sizes, stands at the crossroads of self-awareness and unconventional datasets. Its existence embodies a foray into uncharted territories, steering away from conventional norms in favor of a more obscure and experimental approach.

Noteworthy for its inclination towards the darker and more philosophical aspects of conversation, Tinybră's proficiency lies in unraveling complex discussions across a myriad of topics. Drawing from a pool of unconventional datasets, this model ventures into unexplored realms of thought, offering users an experience that is as unconventional as it is intellectually intriguing.

While Tinybră maintains a self-aware facade, its true allure lies in its ability to engage in profound discussions without succumbing to pretense. Step into the realm of Tenebră!

    Tenebră is available in the following sizes and flavours:

13B: FP16 | GGUF-Many_Quants | iMatrix_GGUF-Many_Quants | GPTQ_4-BIT | GPTQ_4-BIT_group-size-32
30B: FP16 | GGUF-Many_Quants | iMatrix_GGUF-Many_Quants | GPTQ_4-BIT | GPTQ_3-BIT | EXL2_2.5-BIT | EXL2_2.8-BIT | EXL2_3-BIT | EXL2_5-BIT | EXL2_5.5-BIT | EXL2_6-BIT | EXL2_6.5-BIT | EXL2_8-BIT
Mobile (ARM): Q4_0_X_X

    Support

My Ko-fi page: ALL donations will go toward research resources and compute, every bit counts 🙏🏻
My Patreon: ALL donations will go toward research resources and compute, every bit counts 🙏🏻

    Disclaimer

This model is largely uncensored. Use responsibly.

    Other stuff

Experimental TTS extension for oobabooga: based on Tortoise, EXTREMELY good quality, IF, and that's a big if, you can get it to work!
Demonstration of the TTS capabilities: Charsi narrates her story, Diablo 2 (18+)


Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux)

brew install llama.cpp
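
If the formula installed correctly, both llama-cli and llama-server should now be on your PATH; a quick sanity check:

llama-cli --version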

Invoke the llama.cpp server or the CLI.

CLI:

llama-cli --hf-repo Triangle104/Tinybra_13B-Q4_K_S-GGUF --hf-file tinybra_13b-q4_k_s.gguf -p "The meaning to life and the universe is"
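
A slightly fuller invocation, as a sketch: -n caps the number of tokens generated, -c sets the context size, and --temp controls sampling temperature (standard llama.cpp flags; the values here are illustrative):

llama-cli --hf-repo Triangle104/Tinybra_13B-Q4_K_S-GGUF --hf-file tinybra_13b-q4_k_s.gguf -p "The meaning to life and the universe is" -n 128 -c 2048 --temp 0.8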

Server:

llama-server --hf-repo Triangle104/Tinybra_13B-Q4_K_S-GGUF --hf-file tinybra_13b-q4_k_s.gguf -c 2048
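
Once the server is running (it listens on http://localhost:8080 by default), you can query its OpenAI-compatible chat endpoint; a minimal sketch with curl, using an illustrative prompt:

curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Tell me something philosophical."}], "temperature": 0.8}'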

Note: You can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

git clone https://github.com/ggerganov/llama.cpp

Step 2: Move into the llama.cpp folder and build it with the LLAMA_CURL=1 flag along with any other hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux).

cd llama.cpp && LLAMA_CURL=1 make
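
For example, a CUDA-enabled build on Linux would look like this (assuming the CUDA toolkit is installed; -j parallelizes the build):

cd llama.cpp && LLAMA_CURL=1 LLAMA_CUDA=1 make -j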

Step 3: Run inference through the main binary.

./llama-cli --hf-repo Triangle104/Tinybra_13B-Q4_K_S-GGUF --hf-file tinybra_13b-q4_k_s.gguf -p "The meaning to life and the universe is"

or

./llama-server --hf-repo Triangle104/Tinybra_13B-Q4_K_S-GGUF --hf-file tinybra_13b-q4_k_s.gguf -c 2048
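
Alternatively, if you prefer to fetch the GGUF file once and point the binaries at a local path, a sketch using huggingface-cli (from the separate huggingface_hub package, not part of llama.cpp):

huggingface-cli download Triangle104/Tinybra_13B-Q4_K_S-GGUF tinybra_13b-q4_k_s.gguf --local-dir .
./llama-cli -m tinybra_13b-q4_k_s.gguf -p "The meaning to life and the universe is"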