---
title: chinese-alpaca-plus-7b-merged
emoji: 📚
colorFrom: gray
colorTo: red
sdk: gradio
sdk_version: 3.23.0
app_file: app.py
pinned: false
---

This is the Chinese LLaMA-plus model, obtained by adding a Chinese vocabulary to the tokenizer and continuing pretraining of the Chinese embeddings.

For details, see: https://github.com/ymcui/Chinese-LLaMA-Alpaca/releases/tag/v3.0

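As a quick, optional check of the extended vocabulary (this snippet is not part of the original usage steps and assumes the packages from the section below are already installed), you can print the merged tokenizer's vocabulary size and see how a Chinese sentence is split:

```python
from transformers import LlamaTokenizer

# The merged tokenizer contains the added Chinese tokens on top of the original LLaMA vocabulary.
tokenizer = LlamaTokenizer.from_pretrained('minlik/chinese-llama-plus-7b-merged')

print(len(tokenizer))                             # vocabulary size after the Chinese tokens were merged in
print(tokenizer.tokenize('第一个登上月球的人是'))  # typically far fewer tokens than the original LLaMA tokenizer produces for Chinese text
```
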
### Usage
1. Install the required packages (an optional version check follows the commands below)
```bash
pip install sentencepiece
pip install "transformers>=4.28.0"  # quotes keep the shell from treating >= as a redirect
```
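The generation example in the next step relies on transformers >= 4.28.0; a quick way to confirm which version was installed:

```python
import transformers

# The example below needs transformers >= 4.28.0 for LlamaTokenizer / LlamaForCausalLM.
print(transformers.__version__)
```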

2. Generate text
```python
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM


# Load the merged tokenizer and model; run the model in half precision on the GPU.
tokenizer = LlamaTokenizer.from_pretrained('minlik/chinese-llama-plus-7b-merged')
model = LlamaForCausalLM.from_pretrained('minlik/chinese-llama-plus-7b-merged').half().to('cuda')
model.eval()

text = '第一个登上月球的人是'
input_ids = tokenizer.encode(text, return_tensors='pt').to('cuda')

with torch.no_grad():
    output_ids = model.generate(
        input_ids=input_ids,
        do_sample=True,          # enable sampling so temperature/top_k/top_p take effect
        max_new_tokens=128,
        temperature=1,
        top_k=40,
        top_p=0.9,
        repetition_penalty=1.15
    )
output = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(output.replace(text, '').strip())  # strip the prompt text from the decoded output
```
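
An equivalent way to drop the prompt (using the same `tokenizer`, `output_ids`, and `input_ids` as above) is to decode only the newly generated tokens, which avoids the string replacement:

```python
# Skip the prompt tokens and decode only what the model generated.
generated_ids = output_ids[0][input_ids.shape[1]:]
print(tokenizer.decode(generated_ids, skip_special_tokens=True).strip())
```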