---
base_model: BEE-spoke-data/smol_llama-220M-open_instruct
datasets:
- VMware/open-instruct
inference: false
license: apache-2.0
model_creator: BEE-spoke-data
model_name: smol_llama-220M-open_instruct
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- gguf
- ggml
- quantized
- q2_k
- q3_k_m
- q4_k_m
- q5_k_m
- q6_k
- q8_0
widget:
- example_title: burritos
  text: >
    Below is an instruction that describes a task, paired with an input that
    provides further context. Write a response that appropriately completes
    the request.

    ### Instruction:

    Write an ode to Chipotle burritos.

    ### Response:
---
# BEE-spoke-data/smol_llama-220M-open_instruct-GGUF

Quantized GGUF model files for [smol_llama-220M-open_instruct](https://huggingface.co/BEE-spoke-data/smol_llama-220M-open_instruct) from [BEE-spoke-data](https://huggingface.co/BEE-spoke-data).
| Name | Quant method | Size |
| ---- | ------------ | ---- |
| smol_llama-220m-open_instruct.fp16.gguf | fp16 | 436.50 MB |
| smol_llama-220m-open_instruct.q2_k.gguf | q2_k | 94.43 MB |
| smol_llama-220m-open_instruct.q3_k_m.gguf | q3_k_m | 114.65 MB |
| smol_llama-220m-open_instruct.q4_k_m.gguf | q4_k_m | 137.58 MB |
| smol_llama-220m-open_instruct.q5_k_m.gguf | q5_k_m | 157.91 MB |
| smol_llama-220m-open_instruct.q6_k.gguf | q6_k | 179.52 MB |
| smol_llama-220m-open_instruct.q8_0.gguf | q8_0 | 232.28 MB |
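
As a quick-start, here is a minimal sketch of downloading one of these files and running it with `llama-cpp-python`. The `repo_id` below is an assumption (this card is quantized by afrideva), so point it at wherever the `.gguf` files are actually hosted:

```python
# Minimal sketch: fetch one quant and run it locally.
# Assumes `pip install huggingface-hub llama-cpp-python`.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="afrideva/smol_llama-220M-open_instruct-GGUF",  # assumed repo id
    filename="smol_llama-220m-open_instruct.q4_k_m.gguf",
)

llm = Llama(model_path=model_path, n_ctx=2048)

# Alpaca-style prompt, matching the widget example on this card.
prompt = (
    "Below is an instruction that describes a task, paired with an input that "
    "provides further context. Write a response that appropriately completes "
    "the request.\n\n"
    "### Instruction:\n\nWrite an ode to Chipotle burritos.\n\n### Response:\n\n"
)
out = llm(prompt, max_tokens=96, temperature=0.25, top_p=0.95, top_k=50,
          repeat_penalty=1.04)
print(out["choices"][0]["text"])
```

Of the K-quants listed, q4_k_m is usually a reasonable size/quality starting point.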
## Original Model Card:

# BEE-spoke-data/smol_llama-220M-open_instruct
Please note that this is an experiment, and the model has limitations because it is smol.

The prompt format is Alpaca:
```
Below is an instruction that describes a task, paired with an input that
provides further context. Write a response that appropriately completes
the request.

### Instruction:

How can I increase my meme production/output? Currently, I only create them in ancient babylonian which is time consuming.

### Response:
```
This was not trained using a separate 'inputs' field (as VMware/open-instruct
doesn't use one).
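
Since there is no separate input field, the full prompt is just the preamble, the instruction, and the response header. A minimal sketch of assembling it (the `build_prompt` helper is hypothetical, not part of the model or dataset):

```python
# Hypothetical helper: build the Alpaca-style prompt used by this model.
# No "### Input:" section, since VMware/open-instruct does not use one.
ALPACA_PREAMBLE = (
    "Below is an instruction that describes a task, paired with an input that "
    "provides further context. Write a response that appropriately completes "
    "the request."
)

def build_prompt(instruction: str) -> str:
    return f"{ALPACA_PREAMBLE}\n\n### Instruction:\n\n{instruction}\n\n### Response:\n\n"

print(build_prompt("Write an ode to Chipotle burritos."))
```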
## Example

Example output for the prompt above. The hosted inference API is set to sample with a low temperature, so you should see (at least slightly) different generations each time.

Note that the inference API parameters used here are an initial educated guess, and may be updated over time:
```yaml
inference:
  parameters:
    do_sample: true
    renormalize_logits: true
    temperature: 0.25
    top_p: 0.95
    top_k: 50
    min_new_tokens: 2
    max_new_tokens: 96
    repetition_penalty: 1.04
    no_repeat_ngram_size: 6
    epsilon_cutoff: 0.0006
```
Feel free to experiment with the parameters using the model in Python and let us know if you have improved results with other params!
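
As a starting point for such experiments, here is a minimal sketch of loading the base model with the `transformers` library and passing the parameters above straight to `generate`; this is an illustration, not the card author's exact setup:

```python
# Minimal sketch: run the base (unquantized) model with the inference-API
# parameters listed on this card. Assumes `pip install transformers torch`.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "BEE-spoke-data/smol_llama-220M-open_instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Alpaca-style prompt, as described above (no separate input field).
prompt = (
    "Below is an instruction that describes a task, paired with an input that "
    "provides further context. Write a response that appropriately completes "
    "the request.\n\n"
    "### Instruction:\n\nWrite an ode to Chipotle burritos.\n\n### Response:\n\n"
)
inputs = tokenizer(prompt, return_tensors="pt")

outputs = model.generate(
    **inputs,
    do_sample=True,
    renormalize_logits=True,
    temperature=0.25,
    top_p=0.95,
    top_k=50,
    min_new_tokens=2,
    max_new_tokens=96,
    repetition_penalty=1.04,
    no_repeat_ngram_size=6,
    epsilon_cutoff=0.0006,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```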
## Data

This was trained on [VMware/open-instruct](https://huggingface.co/datasets/VMware/open-instruct), so do whatever you want with it, provided it complies with the base apache-2.0 license :)