
shafire/talktoai-F16-GGUF

This LoRA adapter was converted to GGUF format from shafire/talktoai using ggml.ai's GGUF-my-lora space. Refer to the original adapter repository for more details.

Use with llama.cpp

# with cli
llama-cli -m base_model.gguf --lora talktoai-f16.gguf (...other args)

# with server
llama-server -m base_model.gguf --lora talktoai-f16.gguf (...other args)
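
The adapter can also be applied at reduced strength. A minimal sketch, assuming the --lora-scaled flag available in recent llama.cpp builds (0.0 disables the adapter, 1.0 applies it at full strength):

# apply the adapter at half strength
llama-cli -m base_model.gguf --lora-scaled talktoai-f16.gguf 0.5 (...other args)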

To learn more about using LoRA adapters with the llama.cpp server, refer to the llama.cpp server documentation.
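
Once llama-server is running (it listens on http://localhost:8080 by default), the loaded adapter is applied to every request. A minimal sketch of querying the server's native /completion endpoint with curl; the prompt text is a placeholder:

# send a completion request to the running server
curl -s http://localhost:8080/completion \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Hello, how are you?", "n_predict": 64}'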

GGUF
Model size: 41.9M params
Architecture: llama
Precision: 16-bit


Model tree for shafire/talktoai-F16-GGUF
Finetuned: shafire/talktoai
Quantized: this model