Update README.md
README.md
The encryption is a simple XOR between files, ensuring that only the people that have access to the original LLaMA weights can recover the model. You can find the decrypt code at https://github.com/LianjiaTech/BELLE/tree/main/models.
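Since the scheme is a byte-wise XOR between the released file and the original weights, decryption is just the same XOR applied again. Below is a minimal sketch of the idea with hypothetical file names; the official decrypt code linked above is the authoritative version.

```python
# Minimal sketch of the XOR recovery described above.
# File names are hypothetical; use the official decrypt code in the BELLE repo.

def xor_files(enc_path: str, key_path: str, out_path: str, chunk: int = 1 << 20) -> None:
    """XOR the encrypted release against the original weights, chunk by chunk."""
    with open(enc_path, "rb") as enc, open(key_path, "rb") as key, open(out_path, "wb") as out:
        while True:
            a = enc.read(chunk)
            b = key.read(chunk)
            if not a:
                break
            # Assumes both files have the same length, as in a simple XOR scheme.
            out.write(bytes(x ^ y for x, y in zip(a, b)))

xor_files("belle-model.enc", "original-llama-weights.bin", "belle-model.bin")
```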

# Model Card for ChatBELLE-int4

## Welcome
A 4-bit quantized model built using [llama.cpp](https://github.com/ggerganov/llama.cpp).

If you find this model helpful, please *like* this model and star us on https://github.com/LianjiaTech/BELLE!

## Model description
ChatBELLE-int4 is a 4-bit quantized version of the 7B model [BELLE-LLaMA-7B-2M](https://huggingface.co/BelleGroup/BELLE-LLaMA-7B-2M-enc), made for offline on-device inference.

The code of Chinese data generation and other detailed information can be found in our GitHub project repository: https://github.com/LianjiaTech/BELLE.

## Model Usage

You can use this model with ChatBELLE, a minimal, cross-platform LLM chat app powered by [BELLE](https://github.com/LianjiaTech/BELLE), using quantized on-device offline models and a Flutter UI. It runs on macOS (done), Windows, Android, iOS (see [Known Issues](#known-issues)), and more.
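If you want to try the model file outside the app, a rough sketch using the community llama-cpp-python bindings follows. This is an assumption rather than the app's own loading path: the checkpoint predates llama.cpp's GGUF format, so recent builds may require converting the file first, and the `Human:`/`Assistant:` prompt format is assumed from BELLE's other model cards.

```python
# Rough sketch (not the app's own code path): querying the quantized model
# with llama-cpp-python. Assumes a llama.cpp build that can still read this
# pre-GGUF checkpoint; newer builds may need the file converted first.
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(model_path="belle-model.bin", n_ctx=2048)

# BELLE models are instruction-tuned; a Human/Assistant prompt is assumed here.
prompt = "Human: 你好，请介绍一下你自己。\n\nAssistant: "  # "Hello, please introduce yourself."
result = llm(prompt, max_tokens=256, stop=["Human:"])
print(result["choices"][0]["text"])
```
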
### macOS
* Download [chatbelle.dmg](https://github.com/LianjiaTech/BELLE/releases/download/v0.95/chatbelle.dmg) from the [Releases](https://github.com/LianjiaTech/BELLE/releases/tag/v0.95) page, double-click to open it, then drag `Chat Belle.app` into the `Applications` folder.
* Open the `Chat Belle` app in the `Applications` folder by right-clicking (or Ctrl-clicking) it, choosing `Open`, then clicking `Open` in the confirmation dialog.
* On first launch, the app will show the intended model file path and fail to load the model (it isn't there yet). Close the app.
* Download the quantized model from [BELLE-LLaMA-7B-2M-q4](https://huggingface.co/BelleGroup/BELLE-LLaMA-7B-2M-q4/blob/main/belle-model.bin).
* Move and rename the model to the path the app prompted; the default is `~/Library/Containers/com.barius.chatbelle/Data/belle-model.bin`. (A scripted alternative to the last two steps is sketched below.)
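The last two steps can also be scripted. Here is a sketch using `huggingface_hub` (this package is an assumption; the steps above use a browser download) that fetches the file and moves it to the app's default macOS path:

```python
import shutil
from pathlib import Path

from huggingface_hub import hf_hub_download  # pip install huggingface_hub

# Fetch belle-model.bin from the Hugging Face repo (download step above).
src = hf_hub_download(
    repo_id="BelleGroup/BELLE-LLaMA-7B-2M-q4",
    filename="belle-model.bin",
)

# Move it to the default path the app prompts for on macOS.
# Substitute whatever path your copy of the app actually displayed.
dst = Path.home() / "Library/Containers/com.barius.chatbelle/Data/belle-model.bin"
dst.parent.mkdir(parents=True, exist_ok=True)
shutil.move(src, str(dst))
```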