
4-bit GPTQ quantization of VicUnlocked-alpaca-65b
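Below is a minimal loading sketch, assuming the AutoGPTQ library is installed; the repo id and the safetensors flag are assumptions, so adjust them to match the checkpoint you actually downloaded.

```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_id = "Aeala/VicUnlocked-alpaca-65b-4bit"  # assumed repo id; point this at your copy

tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False)
model = AutoGPTQForCausalLM.from_quantized(
    model_id,
    device="cuda:0",       # a 65B model at 4 bits still needs tens of GB of VRAM
    use_safetensors=True,  # set to False if the checkpoint ships .pt/.bin weights instead
)
```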

Important note: although this model was trained on a cleaned ShareGPT dataset, like the one Vicuna used, it was trained with the Alpaca prompt format, so prompts should look like this:

### Instruction:

<prompt> (without the <>)

### Response:
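
The sketch below shows one way to assemble that Alpaca-style prompt and run generation; the generation settings are illustrative, and `model` and `tokenizer` are the objects from the loading sketch above.

```python
def build_prompt(instruction: str) -> str:
    # Alpaca format: instruction header, the user text, then a response
    # header that the model completes.
    return f"### Instruction:\n{instruction}\n\n### Response:\n"

prompt = build_prompt("Explain what GPTQ quantization does in one paragraph.")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)

# Strip the prompt tokens so only the model's response is printed.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```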