blogcncom committed on
Commit 301ae64
1 Parent(s): de56b96

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +53 -0

README.md ADDED
---
base_model: mistralai/Mistral-7B-Instruct-v0.3
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
extra_gated_description: If you want to learn more about how we process your personal
  data, please read our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
---

# blogcncom/Mistral-7B-Instruct-v0.3-Q4_0-GGUF
This model was converted to GGUF format from [`mistralai/Mistral-7B-Instruct-v0.3`](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3) for more details on the model.
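
If you just want the quantized weights themselves (for any GGUF-compatible runtime), you can fetch the file with the Hugging Face CLI. A minimal sketch, assuming the `huggingface_hub` package is installed:

```bash
# Download the GGUF file from this repo into the current directory
# (requires: pip install huggingface_hub)
huggingface-cli download blogcncom/Mistral-7B-Instruct-v0.3-Q4_0-GGUF \
  mistral-7b-instruct-v0.3-q4_0.gguf --local-dir .
```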

## Use with llama.cpp
Install llama.cpp via Homebrew (works on macOS and Linux):

```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo blogcncom/Mistral-7B-Instruct-v0.3-Q4_0-GGUF --hf-file mistral-7b-instruct-v0.3-q4_0.gguf -p "The meaning to life and the universe is"
```
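
Mistral's instruct models expect user turns wrapped in `[INST] ... [/INST]` tags, so chat-style prompts tend to work better in that format. A sketch (the prompt text and the `-n 128` token cap are illustrative, not part of the original instructions):

```bash
# Illustrative: instruct-formatted prompt, generation capped at 128 tokens
llama-cli --hf-repo blogcncom/Mistral-7B-Instruct-v0.3-Q4_0-GGUF \
  --hf-file mistral-7b-instruct-v0.3-q4_0.gguf \
  -p "[INST] Explain GGUF in two sentences. [/INST]" -n 128
```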

### Server:
```bash
llama-server --hf-repo blogcncom/Mistral-7B-Instruct-v0.3-Q4_0-GGUF --hf-file mistral-7b-instruct-v0.3-q4_0.gguf -c 2048
```
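
Once the server is running (it listens on port 8080 by default), you can send requests over HTTP. A minimal sketch against the server's OpenAI-compatible chat endpoint; the message content is illustrative:

```bash
# Illustrative request to llama-server's OpenAI-compatible API (default port 8080)
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Write a haiku about quantization."}]}'
```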

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
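
For instance, a CUDA build that also parallelizes compilation might look like this (a sketch; `-j$(nproc)` just uses all available CPU cores for the build):

```bash
# Illustrative: CUDA-enabled build, compiling on all available cores
cd llama.cpp && LLAMA_CURL=1 LLAMA_CUDA=1 make -j$(nproc)
```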

Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo blogcncom/Mistral-7B-Instruct-v0.3-Q4_0-GGUF --hf-file mistral-7b-instruct-v0.3-q4_0.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo blogcncom/Mistral-7B-Instruct-v0.3-Q4_0-GGUF --hf-file mistral-7b-instruct-v0.3-q4_0.gguf -c 2048
```