---
license: apache-2.0
language:
- ko
- en
tags:
- moe
---

# **Synatra-Mixtral-8x7B**
<img src="./Synatra-Mixtral.png" alt="Synatra-Mixtral-8x7B" width="512"/>


**Synatra-Mixtral-8x7B** is a fine-tuned version of the Mixtral-8x7B-Instruct-v0.1 model, trained on **Korean** datasets.

The model offers strong comprehension and inference capabilities, particularly in Korean, and is released under the Apache-2.0 license.

# **Join Our Discord**

[Server Link](https://discord.gg/MrBt3PXdXc)

# **License**

**OPEN**, Apache-2.0.

# **Model Details**

**Base Model**  
[mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)  

**Trained On**  
6 × A100 80GB GPUs

**Instruction format**

It follows the **Alpaca** format.
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{input}

### Response:
{output}
```
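The template above can be assembled programmatically. A minimal sketch (the helper name `build_alpaca_prompt` is illustrative, not part of the model's tooling):

```python
def build_alpaca_prompt(instruction: str) -> str:
    """Format a user instruction in the Alpaca template used by this model."""
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

prompt = build_alpaca_prompt("Explain Einstein's theory of relativity in detail.")
print(prompt)
```

The model's completion is whatever it generates after the trailing `### Response:` marker.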

# **Model Benchmark**
TBD

# **Implementation Code**

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda" # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained("maywell/Synatra-Mixtral-8x7B")
tokenizer = AutoTokenizer.from_pretrained("maywell/Synatra-Mixtral-8x7B")

messages = [
    {"role": "user", "content": "μ•„μΈμŠˆνƒ€μΈμ˜ μƒλŒ€μ„±μ΄λ‘ μ— λŒ€ν•΄μ„œ μžμ„Ένžˆ μ„€λͺ…ν•΄μ€˜."},  # "Explain Einstein's theory of relativity in detail."
]

# Apply the model's chat template and append the assistant prompt marker.
encodeds = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")

model_inputs = encodeds.to(device)
model.to(device)

generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```

# **Author's Message**

This model's training was not sponsored by any organization; it was made possible by support from people around the world.

[Support Me](https://www.buymeacoffee.com/mwell)

Contact Me on Discord - **is.maywell**

Follow me on Twitter: https://twitter.com/stablefluffy