cthiriet and staghado committed
Commit 0d98ca1
1 Parent(s): 3721b72

Add inference llama.cpp example (#3)


- Add inference llama.cpp example (46c9330ad34bf46bbf77d753441a6bbd9d9553aa)


Co-authored-by: Said Taghadouini <staghado@users.noreply.huggingface.co>

Files changed (1)
  1. README.md +28 -0
README.md CHANGED
@@ -74,6 +74,34 @@ out = model.generate(input_ids, max_new_tokens=10)
 print(tokenizer.batch_decode(out))
 ```
 
+### On-device Inference
+
+Since Mambaoutai has only 1.6B parameters, it can run quickly on a CPU.
+
+Here is an example of how to run it with llama.cpp:
+
+```bash
+# Clone the llama.cpp repository and compile it from source
+git clone https://github.com/ggerganov/llama.cpp
+cd llama.cpp
+make
+
+# Create a venv and install the conversion dependencies
+conda create -n mamba-cpp python=3.10
+conda activate mamba-cpp
+pip install -r requirements/requirements-convert-hf-to-gguf.txt
+
+# Download the weights, tokenizer, config, tokenizer_config and special_tokens_map
+# from this repo and put them in a directory 'Mambaoutai/'
+mkdir Mambaoutai
+
+# Convert the weights to GGUF format
+python convert-hf-to-gguf.py Mambaoutai
+
+# Run inference with a prompt
+./main -m Mambaoutai/ggml-model-f16.gguf -p "Building a website can be done in 10 simple steps:\nStep 1:" -n 400 -e -ngl 1
+```
+
 ### Model hyperparameters
 
 More details about the model hyperparameters are given in the table below :
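
The download step in the diff only creates the target directory. Here is a sketch of actually fetching the files with `huggingface-cli`; the repo id `lightonai/mambaoutai` is an assumption, so adjust it to wherever this model card is hosted:

```bash
# Hypothetical download step: fetch the repo contents into Mambaoutai/
# (the repo id below is an assumption, adjust as needed)
pip install -U "huggingface_hub[cli]"
huggingface-cli download lightonai/mambaoutai --local-dir Mambaoutai
```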
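
If f16 CPU inference is too slow, a possible follow-up not covered by this commit is to quantize the converted checkpoint. This is a sketch assuming the `quantize` tool in this llama.cpp checkout handles Mamba-architecture GGUF files:

```bash
# Quantize the f16 GGUF down to 4-bit (Q4_K_M); assumes `quantize`
# supports the Mamba architecture in this llama.cpp checkout
./quantize Mambaoutai/ggml-model-f16.gguf Mambaoutai/ggml-model-Q4_K_M.gguf Q4_K_M

# Run inference with the smaller quantized model
./main -m Mambaoutai/ggml-model-Q4_K_M.gguf -p "Building a website can be done in 10 simple steps:\nStep 1:" -n 400 -e
```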