Built from unsloth/Meta-Llama-3.1-8B and fine-tuned with Unsloth, as an experiment in doing my own training and producing a GGUF.
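For context, here is a minimal sketch of the kind of Unsloth workflow implied above: load the base model, fine-tune, then export a GGUF. It assumes the standard Unsloth API; the hyperparameters and quantization method are illustrative assumptions, not values taken from this card.

```python
# Minimal sketch, assuming the standard Unsloth API; values are illustrative.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Meta-Llama-3.1-8B",
    max_seq_length=2048,   # assumed sequence length
    load_in_4bit=True,     # typical Unsloth setting, assumed here
)

# Attach LoRA adapters for fine-tuning (rank and target modules are assumptions).
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# ... SFT training with the Alpaca-style prompt shown below ...

# Export the result as a GGUF file (quantization method is an assumption).
model.save_pretrained_gguf("model_gguf", tokenizer, quantization_method="q4_k_m")
```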

The Alpaca-style prompt used for training:

```python
alpaca_prompt = """Below is an instruction that describes a task, it will specify the profession you are using, paired with an input that provides further context. Write a response that appropriately completes the request, afterwards you can give a suggestion for how to improve what is asked.

### Instruction:
{}

### Input:
{}

### Response:
{}"""
```

Model details: GGUF format, 8.03B params, llama architecture.