Update README.md
README.md CHANGED

````diff
@@ -39,14 +39,9 @@ model = AutoModelForCausalLM.from_pretrained(model_name)
 ```
 
 #### Quantized Version
-
-
-
-model_id = "ekshat/zephyr_7b_q4_k_m"
-filename = "unsloth.Q4_K_M.gguf"
-
-tokenizer = AutoTokenizer.from_pretrained(model_id, gguf_file=filename)
-model = AutoModelForCausalLM.from_pretrained(model_id, gguf_file=filename)
+Using Ollama, .gguf model can be used on local system. For more details, refer
+```link
+https://www.markhneedham.com/blog/2023/10/18/ollama-hugging-face-gguf-models/
 ```
 
 ### Generating Text
````
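As a rough sketch of the Ollama route the added link describes (the repo and file names are reused from the removed snippet; the model tag `zephyr-local` is made up here, and the exact steps are in the linked post):

```shell
# Fetch the GGUF file from the Hugging Face repo
# (huggingface-cli ships with the huggingface_hub package).
huggingface-cli download ekshat/zephyr_7b_q4_k_m unsloth.Q4_K_M.gguf --local-dir .

# Write a minimal Modelfile pointing Ollama at the local GGUF file.
cat > Modelfile <<'EOF'
FROM ./unsloth.Q4_K_M.gguf
EOF

# Register the model under a local tag, then run it interactively.
ollama create zephyr-local -f Modelfile
ollama run zephyr-local "Hello!"
```

This keeps the quantized weights usable on a local machine without going through `transformers`' `gguf_file` loading path that the old snippet used.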