---
license: llama2
datasets:
- teknium/GPT4-LLM-Cleaned
---

# Model Card for traclm-v2-7b-instruct-GPTQ

This repo contains an AWQ quantization of [TRAC-MTRY/traclm-v2-7b-instruct](https://huggingface.co/TRAC-MTRY/traclm-v2-7b-instruct) so that the model can be run on low-resource hardware.

Read more about AWQ quantization [here](https://arxiv.org/abs/2306.00978).

Read more about the unquantized model [here](https://huggingface.co/TRAC-MTRY/traclm-v2-7b-instruct).
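
As a minimal loading sketch (assuming `transformers`, `accelerate`, and `autoawq` are installed; the repo id below is an assumption and should be replaced with the actual path of this repo):

```python
# Hedged loading sketch -- the repo id is an assumption, not a confirmed path.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "TRAC-MTRY/traclm-v2-7b-instruct-AWQ"  # hypothetical; substitute this repo's id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,  # AWQ checkpoints are commonly served in fp16
    device_map="auto",          # let accelerate place layers on the available devices
)
```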

## Prompt Format

This model was fine-tuned with the Alpaca prompt format. It is *highly* recommended that you use the same format for any interactions with the model; failure to do so will significantly degrade performance.

Standard Alpaca Format:

```
### System:\nBelow is an instruction that describes a task. Write a response that appropriately completes the request.\n\n\n\n### Instruction:\n{prompt}\n\n### Response:\n
```
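
To make the escaped template above concrete, here is an illustrative helper (a sketch, not code shipped with this repo) that expands it into the literal prompt string:

```python
# Illustrative prompt builder for the standard Alpaca format shown above.
SYSTEM = (
    "### System:\nBelow is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n\n\n"
)

def build_prompt(instruction: str) -> str:
    """Return the standard Alpaca-style prompt (no input field)."""
    return f"{SYSTEM}### Instruction:\n{instruction}\n\n### Response:\n"

print(build_prompt("Name three considerations when deploying a 7B model."))
```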

Input Field Variant:

```
### System:\nBelow is an instruction that describes a task. Write a response that appropriately completes the request.\n\n\n\n### Instruction:\n{prompt}\n\n### Input:\n{input}\n\n### Response:\n
```
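
Continuing the sketch above, the input-field variant can be built the same way and passed to the model loaded earlier (the helper and generation settings are illustrative assumptions, not a prescribed configuration):

```python
# Illustrative builder for the input-field variant, plus a sample generation call.
def build_prompt_with_input(instruction: str, context: str) -> str:
    return (
        f"{SYSTEM}### Instruction:\n{instruction}\n\n"
        f"### Input:\n{context}\n\n### Response:\n"
    )

prompt = build_prompt_with_input(
    "Summarize the following text in one sentence.",
    "Quantization stores model weights in fewer bits, trading a small amount "
    "of precision for a large reduction in memory use.",
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256, do_sample=False)
# Decode only the newly generated tokens, not the echoed prompt.
response = tokenizer.decode(
    output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(response)
```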