riccardomusmeci committed: Update README.md

README.md (changed)
@@ -45,7 +45,7 @@ generate(
 ```
 
 Quantize the model weights
-```
+```python
 from mlx_llm.model import create_model, quantize, save_weights
 
 model = create_model(model_name)
@@ -82,5 +82,5 @@ chat_llm = ChatLLM.build(
 chat_llm.run(max_tokens=500, temp=0.1)
 ```
 
-With `mlx-llm` you can also play with a simple RAG. Go check the examples.
+With `mlx-llm` you can also play with a simple RAG. Go check the [examples](https://github.com/riccardomusmeci/mlx-llm/tree/main/examples).
 
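For readers unfamiliar with the retrieval step that the "simple RAG" line refers to, here is a minimal sketch in pure Python. It does not use any `mlx-llm` API; every name below (`vectorize`, `cosine`, `retrieve`, the sample documents) is illustrative only, and a real pipeline would use embeddings from a model rather than bag-of-words counts.

```python
from collections import Counter
from math import sqrt

def vectorize(text):
    # Bag-of-words term counts as a sparse vector (illustrative stand-in
    # for a learned embedding).
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    # Rank documents by similarity to the query and keep the top-k;
    # in a RAG setup these would be prepended to the LLM prompt.
    scored = sorted(docs, key=lambda d: cosine(vectorize(query), vectorize(d)),
                    reverse=True)
    return scored[:k]

docs = [
    "MLX is an array framework for Apple silicon.",
    "Quantization reduces model weight precision.",
]
print(retrieve("how does quantization work", docs))
```

The retrieved passages would then be passed to the chat model as context, which is the pattern the linked examples demonstrate with actual `mlx-llm` components.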