Edit README.md
README.md
---
language:
- ko
- en
license: apache-2.0
tags:
- text-generation
- qwen2.5
- korean
- instruct
- mlx
pipeline_tag: text-generation
---

## Qwen2.5-7B-Instruct-kowiki-qa (MLX conversion)

- Original model: [beomi/Qwen2.5-7B-Instruct-kowiki-qa](https://huggingface.co/beomi/Qwen2.5-7B-Instruct-kowiki-qa); a sketch of how such a conversion is produced follows below.
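
This repo hosts the MLX-format weights converted from the original checkpoint. As a minimal sketch (the exact command used for this upload is not documented here, so the output path and precision are assumptions), a conversion like this is typically produced with `mlx_lm.convert`:

```bash
# Hypothetical reproduction of the conversion; output path and precision are assumptions.
mlx_lm.convert --hf-path beomi/Qwen2.5-7B-Instruct-kowiki-qa --mlx-path ./Qwen2.5-7B-Instruct-kowiki-qa
# Pass -q to write a quantized conversion instead of keeping the original precision.
```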

## Requirements

- `pip install mlx-lm`

## Usage

- [Generate with CLI](https://github.com/ml-explore/mlx-examples/blob/main/llms/README.md#command-line)

```bash
mlx_lm.generate --model sucream/Qwen2.5-7B-Instruct-kowiki-qa --prompt "하늘이 파란 이유가 뭐야?"  # "Why is the sky blue?"
```

- [In Python](https://github.com/ml-explore/mlx-examples/blob/main/llms/README.md#python-api)

```python
from mlx_lm import load, generate

model, tokenizer = load(
    "sucream/Qwen2.5-7B-Instruct-kowiki-qa",
    tokenizer_config={"trust_remote_code": True},
)

prompt = "하늘이 파란 이유가 뭐야?"  # "Why is the sky blue?"

messages = [
    {"role": "system", "content": "당신은 친절한 챗봇입니다."},  # "You are a friendly chatbot."
    {"role": "user", "content": prompt},
]
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)

text = generate(
    model,
    tokenizer,
    prompt=prompt,
    # verbose=True,
    # max_tokens=8196,
    # temp=0.0,
)
print(text)
```

- [OpenAI Compatible HTTP Server](https://github.com/ml-explore/mlx-examples/blob/main/llms/mlx_lm/SERVER.md)

```bash
mlx_lm.server --model sucream/Qwen2.5-7B-Instruct-kowiki-qa --host 0.0.0.0
```

```python
import openai

# Point the OpenAI client at the local mlx_lm server.
client = openai.OpenAI(
    base_url="http://localhost:8080/v1",
    api_key="not-needed",  # placeholder; the local server does not check it
)

prompt = "하늘이 파란 이유가 뭐야?"  # "Why is the sky blue?"

messages = [
    {"role": "system", "content": "당신은 친절한 챗봇입니다."},  # "You are a friendly chatbot."
    {"role": "user", "content": prompt},
]
res = client.chat.completions.create(
    model="sucream/Qwen2.5-7B-Instruct-kowiki-qa",
    messages=messages,
    temperature=0.2,
)

print(res.choices[0].message.content)
```
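
The server can also be queried without the Python client. As a minimal sketch, the same OpenAI-compatible chat completions endpoint can be hit with `curl` (port 8080 matches the client example above; the English prompt is only for illustration):

```bash
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "sucream/Qwen2.5-7B-Instruct-kowiki-qa",
    "messages": [{"role": "user", "content": "Why is the sky blue?"}],
    "temperature": 0.2
  }'
```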