---
license: cc-by-sa-4.0
---

# **Synatra-7B-v0.3-Translation**
![Synatra-7B-v0.3-Translation](./Synatra.png)

## Support Me
Synatra is a personal project, developed with the resources of a single person. If you like the model, how about supporting it with a small research donation?
[<img src="https://cdn.buymeacoffee.com/buttons/default-orange.png" alt="Buy me a Coffee" width="217" height="50">](https://www.buymeacoffee.com/mwell)

Want to become a sponsor? (Please!) Contact me on Telegram: **AlzarTakkarsen**

# **License**

This model is licensed [**cc-by-sa-4.0**](https://creativecommons.org/licenses/by-sa/4.0/) and is strictly *non-commercial*, for services under **5K MAU**.
The "Model" (i.e. the base model and any derivatives, merges, or mixes) is completely free to use for non-commercial purposes, as long as the **cc-by-sa-4.0** license included in any parent repository and the non-commercial use clause remain in place, regardless of other models' licenses.
If your service has over **5K MAU**, contact me for license approval.

# **Model Details**
**Base Model**
[mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)

**Trained On**
1x A100 80GB

**Instruction format**

It follows the [ChatML](https://github.com/openai/openai-python/blob/main/chatml.md) format and the **Alpaca (No-Input)** format.

```
<|im_start|>system
주어진 문장을 한국어로 번역해라.<|im_end|>
<|im_start|>user
{instruction}<|im_end|>
<|im_start|>assistant

```
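
The system prompt above reads "Translate the given sentence into Korean." For the **Alpaca (No-Input)** format the README mentions, the conventional single-turn template looks like this (a sketch of the standard Alpaca no-input prompt, not copied from this repository):

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Response:

```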

## Ko-LLM-Leaderboard

On Benchmarking...

# **Implementation Code**

Since the chat_template already contains the instruction format above, you can use the code below.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained("maywell/Synatra-7B-v0.3-Translation")
tokenizer = AutoTokenizer.from_pretrained("maywell/Synatra-7B-v0.3-Translation")

messages = [
    {"role": "user", "content": "바나나는 원래 하얀색이야?"},  # "Are bananas originally white?"
]

# apply_chat_template renders the ChatML prompt and tokenizes it
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")

model_inputs = encodeds.to(device)
model.to(device)

generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
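
For reference, the ChatML prompt that `apply_chat_template` renders can also be built by hand; a minimal plain-string sketch following the template shown above (the tokenizer's own chat template remains authoritative, and the default system prompt here is only an illustration):

```python
# Build a ChatML prompt manually, mirroring the instruction format above.
# The default system prompt means "Translate the given sentence into Korean."
def build_chatml_prompt(instruction: str, system: str = "주어진 문장을 한국어로 번역해라.") -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{instruction}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

print(build_chatml_prompt("Do bananas start out white?"))
```

The resulting string can be passed to `tokenizer(...)` directly if you prefer explicit control over the prompt text.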