Update README.md

README.md CHANGED
---
language:
- ko
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-4.0
---

# **Synatra-V0.2-7B**

Made by StableFluffy

[Visit my website! - Currently under construction...](https://www.stablefluffy.kr/)

[Join Discord Server](https://discord.gg/HTUBtvjUZa)

## License

This model is strictly for [*non-commercial*](https://creativecommons.org/licenses/by-nc/4.0/) (**cc-by-nc-4.0**) use only, which takes priority over the **LLAMA 2 COMMUNITY LICENSE AGREEMENT**.
The "Model" (i.e. the base model and any derivatives, merges, or mixes) is completely free to use for non-commercial purposes, as long as the **cc-by-nc-4.0** license included in any parent repository and the non-commercial use clause remain in place, regardless of the licenses of other models.
The license may change when a new model is released.

## Model Details
**Base Model**
[mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)

**Trained On**
A6000 48GB * 8

## Instruction format

In order to leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[/INST]` tokens. The very first instruction should begin with the beginning-of-sentence token id; the following instructions should not. The assistant generation will be ended by the end-of-sentence token id.

E.g.
```
text = "<s>[INST] 아이작 뉴턴의 업적을 알려줘. [/INST]"
```
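
For multi-turn conversations the same pattern simply repeats: each user turn is wrapped in `[INST] ... [/INST]`, each previous assistant reply follows its `[/INST]` and is closed with the end-of-sentence token, and only the very first turn starts with `<s>`. The sketch below is illustrative only; the `build_prompt` helper and the sample Korean replies are not part of this repository:

```python
# Illustrative only: assemble a multi-turn prompt in the [INST] format
# described above. build_prompt is a hypothetical helper, not repo code.
def build_prompt(history, next_user_message):
    prompt = "<s>"
    for user_msg, assistant_msg in history:
        # Past turns: user message wrapped in [INST] tags, then the
        # assistant reply terminated with the end-of-sentence token.
        prompt += f"[INST] {user_msg} [/INST]{assistant_msg}</s>"
    # The new user message is left open for the model to answer.
    prompt += f"[INST] {next_user_message} [/INST]"
    return prompt


# One finished turn ("Tell me about Isaac Newton's achievements."),
# followed by a follow-up question; the reply text is a placeholder.
history = [("아이작 뉴턴의 업적을 알려줘.", "아이작 뉴턴은 만유인력의 법칙을 발견했습니다.")]
print(build_prompt(history, "그의 운동 법칙도 설명해줘."))
```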

# **Model Benchmark**

Preparing...

# **Implementation Code**

Since the chat_template already contains the instruction format above, you can use the code below.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda" # the device to load the model onto

# Load the model and tokenizer from the Hub
model = AutoModelForCausalLM.from_pretrained("maywell/Synatra-V0.1-7B")
tokenizer = AutoTokenizer.from_pretrained("maywell/Synatra-V0.1-7B")

messages = [
    {"role": "user", "content": "What is your favourite condiment?"},
]

# The chat_template wraps the messages in the [INST] ... [/INST] format above
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")

model_inputs = encodeds.to(device)
model.to(device)

generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
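
If you want to see exactly what `apply_chat_template` produces, and confirm it matches the `[INST]` format described above, you can render it as plain text instead of token ids. This is just a quick sanity check, not part of the original example:

```python
# Render the chat template to a string rather than token ids so the
# [INST] ... [/INST] wrapping can be inspected directly.
prompt_text = tokenizer.apply_chat_template(messages, tokenize=False)
print(prompt_text)
# Roughly: <s>[INST] What is your favourite condiment? [/INST]
```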

If you run it on oobabooga, your prompt would look like this:
```
[INST] 링컨에 대해서 알려줘. [/INST]
```

> Readme format: [beomi/llama-2-ko-7b](https://huggingface.co/beomi/llama-2-ko-7b)

---