Upload README.md
README.md CHANGED
@@ -5,6 +5,18 @@ license: apache-2.0
 model_creator: Ziqing Yang
 model_name: Chinese Alpaca 2 7B
 model_type: llama
+prompt_template: 'Below is an instruction that describes a task. Write a response
+  that appropriately completes the request.
+
+
+  ### Instruction:
+
+  {prompt}
+
+
+  ### Response:
+
+  '
 quantized_by: TheBloke
 ---
 
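The `prompt_template` field added above declares the Alpaca-style instruction format this model expects. As a minimal illustration (not part of the README itself; `build_prompt` is a hypothetical helper name, not a library API), the `{prompt}` placeholder can be filled in Python like this:

```python
# Minimal sketch of applying the prompt_template from the YAML metadata.
# The template text is copied from the front matter above; build_prompt is a
# hypothetical helper, not an API from transformers or any other library.
PROMPT_TEMPLATE = (
    "Below is an instruction that describes a task. Write a response "
    "that appropriately completes the request.\n\n\n"
    "### Instruction:\n\n"
    "{prompt}\n\n\n"
    "### Response:\n\n"
)

def build_prompt(instruction: str) -> str:
    """Substitute the user's instruction into the {prompt} placeholder."""
    return PROMPT_TEMPLATE.format(prompt=instruction)

if __name__ == "__main__":
    print(build_prompt("用中文写一首关于秋天的短诗。"))
```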
@@ -40,6 +52,7 @@ Multiple GPTQ parameter permutations are provided; see Provided Files below for
 <!-- repositories-available start -->
 ## Repositories available
 
+* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Chinese-Alpaca-2-7B-AWQ)
 * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Chinese-Alpaca-2-7B-GPTQ)
 * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Chinese-Alpaca-2-7B-GGUF)
 * [Ziqing Yang's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/ziqingyang/chinese-alpaca-2-7b)
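The new AWQ link is the substantive change in this hunk. As a hedged sketch (these are not the repo's own documented instructions), a sufficiently recent `transformers` release with AWQ support, plus the `autoawq` package installed, can usually load such a checkpoint directly:

```python
# Illustrative sketch only: load the linked AWQ repo via transformers.
# Assumes a recent transformers release with AWQ support and the autoawq
# package installed; the AWQ repo's own README is the authoritative reference.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/Chinese-Alpaca-2-7B-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Alpaca-style prompt matching the prompt_template added in this commit.
prompt = (
    "Below is an instruction that describes a task. Write a response "
    "that appropriately completes the request.\n\n\n"
    "### Instruction:\n\n用中文介绍一下大熊猫。\n\n\n"
    "### Response:\n\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

The GPTQ and GGUF repositories listed alongside it target different loaders (e.g. AutoGPTQ/ExLlama for GPTQ, llama.cpp for GGUF), so the snippet above applies only to the AWQ files.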