raincandy-u committed • Commit af68319 • Parent(s): e55a4b3

Update README.md
README.md CHANGED

pipeline_tag: text-generation
tags:
- CoT
---

<style>
@font-face {
    font-family: Zpix;
    src: url(https://zpix.now.sh/zpix.woff2?v2021-03-21);
}
* {
    font-family: Zpix;
}

*:hover {
    color: red;
    font-size: 1000px;
}
</style>

<img src="https://pbs.twimg.com/media/GKJ6VOdbIAAo2yr?format=png&name=900x900">

<h1 style="font-size:48px;margin-bottom:30px;">ジェルばんは~</h1>

# 🧬Rain-7B-v0.1

Rain-7B-v0.1 is an experimental model fine-tuned from [Qwen1.5-7B-Chat](https://huggingface.co/Qwen/Qwen1.5-7B-Chat) on thousands of **chain-of-thought** conversations.
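
The card does not show the training data itself; as a rough illustration, a single chain-of-thought conversation record might look like the sketch below, assuming a standard messages schema. Both the schema and the example content are hypothetical, not taken from the actual dataset.

```python
# Hypothetical sketch of one chain-of-thought training record.
# The "messages" schema and the content are assumptions; the card
# does not document the real dataset format.
cot_record = {
    "messages": [
        {
            "role": "user",
            "content": "A train covers 60 km in 45 minutes. What is its average speed in km/h?",
        },
        {
            "role": "assistant",
            # The answer spells out intermediate reasoning before the result,
            # which is the point of chain-of-thought data.
            "content": (
                "Let's think step by step. "
                "45 minutes is 45/60 = 0.75 hours. "
                "Average speed = distance / time = 60 km / 0.75 h = 80 km/h."
            ),
        },
    ]
}
```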

# 🧬Evaluation

|Model Name|MMLU|
|---|---|
|Qwen1.5-7B-Chat| |
|**Rain-7B-v0.1**| |
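
The MMLU cells above are blank on the card. To fill them in yourself, one option is EleutherAI's lm-evaluation-harness; the sketch below assumes that harness and conventional 5-shot MMLU settings (the card does not say how its scores are measured).

```python
# Sketch: measuring MMLU with EleutherAI's lm-evaluation-harness
# (pip install lm-eval). The harness and settings are assumptions;
# the model card does not document an evaluation setup.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=raincandy-u/Rain-7B-v0.1,dtype=float16",
    tasks=["mmlu"],
    num_fewshot=5,  # MMLU is conventionally reported 5-shot
)
print(results["results"])  # per-task and aggregate accuracies
```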

# 🧬Usage

```python
# In a notebook; outside one, run `pip install -qU transformers accelerate` in a shell.
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "raincandy-u/Rain-7B-v0.1"
messages = [{"role": "user", "content": "Who is Cho-Tan chan?"}]

# Render the conversation with the model's chat template (inherited from Qwen1.5-Chat).
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Load the model once and reuse the pipeline for generation.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
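
As a usage note: Qwen1.5-Chat models use the ChatML format, so the `prompt` string built above should look roughly like the following. The exact rendering, including the default system line, comes from the tokenizer's chat template; this is an illustration, not captured output.

```python
# Approximate result of apply_chat_template for the messages above,
# assuming Qwen1.5-Chat's ChatML template with its default system message.
expected_prompt = (
    "<|im_start|>system\n"
    "You are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\n"
    "Who is Cho-Tan chan?<|im_end|>\n"
    "<|im_start|>assistant\n"
)
```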