minlik committed 250f0cb (1 parent: ff3945d): Create README.md

Files changed (1): README.md (+63 lines)

README.md (added):
---
title: chinese-alpaca-plus-33b-merged
emoji: 📚
colorFrom: gray
colorTo: red
sdk: gradio
sdk_version: 3.23.0
app_file: app.py
pinned: false
---

This is the Chinese Alpaca-Plus-33B model, obtained by extending the vocabulary with Chinese tokens, continuing pre-training of the Chinese embeddings, and then fine-tuning on instruction datasets.

The base and LoRA models used for the conversion are:
- base model: elinas/llama-30b-hf-transformers-4.29
- LoRA models: ziqingyang/chinese-llama-plus-lora-33b, ziqingyang/chinese-alpaca-plus-lora-33b

For details, see https://github.com/ymcui/Chinese-LLaMA-Alpaca/releases/tag/v5.0
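
For reference, below is a minimal sketch of how such a two-stage merge could be reproduced with `peft`. It is an illustration only, not the exact procedure used to produce this checkpoint (the Chinese-LLaMA-Alpaca repository provides an official merge script, which is the supported path), and it assumes a recent `peft` release, that each LoRA repository also ships its extended Chinese tokenizer, and that enough CPU RAM is available to hold the 33B weights.
```python
# Illustrative merge sketch only; the official merge script in the
# Chinese-LLaMA-Alpaca repository is the supported way to reproduce this model.
import torch
from peft import PeftModel
from transformers import LlamaForCausalLM, LlamaTokenizer

# Load the original 30B base weights on CPU in fp16.
base = LlamaForCausalLM.from_pretrained(
    'elinas/llama-30b-hf-transformers-4.29',
    torch_dtype=torch.float16,
)

# Stage 1: the Chinese-LLaMA-Plus LoRA extends the vocabulary, so the base
# embeddings are resized to its tokenizer before the adapter is folded in.
llama_tokenizer = LlamaTokenizer.from_pretrained('ziqingyang/chinese-llama-plus-lora-33b')
base.resize_token_embeddings(len(llama_tokenizer))
model = PeftModel.from_pretrained(base, 'ziqingyang/chinese-llama-plus-lora-33b')
model = model.merge_and_unload()

# Stage 2: the Chinese-Alpaca-Plus LoRA uses a slightly larger vocabulary
# (it adds a pad token), so resize again before folding in the second adapter.
alpaca_tokenizer = LlamaTokenizer.from_pretrained('ziqingyang/chinese-alpaca-plus-lora-33b')
model.resize_token_embeddings(len(alpaca_tokenizer))
model = PeftModel.from_pretrained(model, 'ziqingyang/chinese-alpaca-plus-lora-33b')
model = model.merge_and_unload()

# Save the merged fp16 weights together with the final tokenizer.
model.save_pretrained('chinese-alpaca-plus-33b-merged')
alpaca_tokenizer.save_pretrained('chinese-alpaca-plus-33b-merged')
```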

### Usage
1. Install the required packages
```bash
pip install sentencepiece
pip install "transformers>=4.28.0"
```

2. Generate text
```python
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM


def generate_prompt(text):
    return f"""Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{text}

### Response:"""


tokenizer = LlamaTokenizer.from_pretrained('minlik/chinese-alpaca-plus-33b-merged')
# Loads the full model in fp16 onto a single GPU.
model = LlamaForCausalLM.from_pretrained('minlik/chinese-alpaca-plus-33b-merged').half().to('cuda')
model.eval()

text = '第一个登上月球的人是谁?'  # "Who was the first person to land on the Moon?"
prompt = generate_prompt(text)
input_ids = tokenizer.encode(prompt, return_tensors='pt').to('cuda')

with torch.no_grad():
    output_ids = model.generate(
        input_ids=input_ids,
        do_sample=True,  # required for temperature/top_k/top_p to take effect
        max_new_tokens=128,
        temperature=1,
        top_k=40,
        top_p=0.9,
        repetition_penalty=1.15
    )
output = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(output.replace(prompt, '').strip())
```
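
The snippet above keeps the full fp16 model on a single GPU, which for a 33B model needs roughly 65 GB of VRAM. As an optional variant not covered by the original card, the weights can instead be sharded across the available GPUs and quantized to 8-bit; this assumes the extra packages `accelerate` and `bitsandbytes` are installed.
```python
# Optional low-memory loading sketch (assumes: pip install accelerate bitsandbytes).
from transformers import LlamaTokenizer, LlamaForCausalLM

tokenizer = LlamaTokenizer.from_pretrained('minlik/chinese-alpaca-plus-33b-merged')

# device_map='auto' lets accelerate spread the layers over the available GPUs
# (and CPU, if needed); load_in_8bit=True quantizes the weights with bitsandbytes,
# roughly halving the memory footprint compared to fp16.
model = LlamaForCausalLM.from_pretrained(
    'minlik/chinese-alpaca-plus-33b-merged',
    device_map='auto',
    load_in_8bit=True,
)
model.eval()
```
Generation then proceeds exactly as in the snippet above; the encoded inputs can stay on `'cuda'`, i.e. the first GPU.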