unsubscribe committed

Commit d52aead · verified · 1 Parent(s): 5e9ef02

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -33,3 +33,12 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ internlm2_5-1_8b-chat-fp16.gguf filter=lfs diff=lfs merge=lfs -text
+ internlm2_5-1_8b-chat-q2_k.gguf filter=lfs diff=lfs merge=lfs -text
+ internlm2_5-1_8b-chat-q3_k_m.gguf filter=lfs diff=lfs merge=lfs -text
+ internlm2_5-1_8b-chat-q4_0.gguf filter=lfs diff=lfs merge=lfs -text
+ internlm2_5-1_8b-chat-q4_k_m.gguf filter=lfs diff=lfs merge=lfs -text
+ internlm2_5-1_8b-chat-q5_0.gguf filter=lfs diff=lfs merge=lfs -text
+ internlm2_5-1_8b-chat-q5_k_m.gguf filter=lfs diff=lfs merge=lfs -text
+ internlm2_5-1_8b-chat-q6_k.gguf filter=lfs diff=lfs merge=lfs -text
+ internlm2_5-1_8b-chat-q8_0.gguf filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,151 @@
+ ---
+ license: apache-2.0
+ language:
+ - en
+ pipeline_tag: text-generation
+ tags:
+ - chat
+ ---
+ # InternLM2.5-1.8B-Chat GGUF Model
+
+ ## Introduction
+
+ The `internlm2_5-1_8b-chat` model in GGUF format can be utilized by [llama.cpp](https://github.com/ggerganov/llama.cpp), a highly popular open-source framework for Large Language Model (LLM) inference, across a variety of hardware platforms, both locally and in the cloud.
+ This repository offers `internlm2_5-1_8b-chat` models in GGUF format in both half precision and various low-bit quantized versions, including `q2_k`, `q3_k_m`, `q4_0`, `q4_k_m`, `q5_0`, `q5_k_m`, `q6_k`, and `q8_0`.
+
+ In the subsequent sections, we first present the installation procedure, then explain how to download the models, and finally illustrate model inference and service deployment through concrete examples.
+
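+ Quantized GGUF files like these are typically derived from the half-precision weights, so if you need a precision that is not listed here you can produce it yourself. A minimal sketch, assuming the `llama.cpp` build described in the Installation section below, where recent builds ship the quantize tool as `build/bin/llama-quantize`:
+
+ ```shell
+ # Re-quantize the fp16 GGUF file to q4_k_m locally (sketch).
+ build/bin/llama-quantize internlm2_5-1_8b-chat-fp16.gguf internlm2_5-1_8b-chat-q4_k_m.gguf Q4_K_M
+ ```
+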
+ ## Installation
+
+ We recommend building `llama.cpp` from source. The following code snippet provides an example for the Linux CUDA platform. For instructions on other platforms, please refer to the [official guide](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#build).
+
+ - Step 1: create a conda environment and install cmake
+
+ ```shell
+ conda create --name internlm2 python=3.10 -y
+ conda activate internlm2
+ pip install cmake
+ ```
+
+ - Step 2: clone the source code and build the project
+
+ ```shell
+ git clone --depth=1 https://github.com/ggerganov/llama.cpp.git
+ cd llama.cpp
+ cmake -B build -DGGML_CUDA=ON
+ cmake --build build --config Release -j
+ ```
+
+ All the built targets can be found in the subdirectory `build/bin`.
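+
+ If your machine has no CUDA-capable GPU, the same steps produce a CPU-only build; a minimal sketch (simply omit the CUDA flag):
+
+ ```shell
+ # CPU-only build: configure without -DGGML_CUDA=ON, then compile.
+ cmake -B build
+ cmake --build build --config Release -j
+ ```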
+
+ In the following sections, we assume that the working directory is the root directory of `llama.cpp`.
+
+ ## Download models
+
+ In the [introduction section](#introduction), we mentioned that this repository includes several models with varying levels of computational precision. You can download the appropriate model based on your requirements.
+ For instance, `internlm2_5-1_8b-chat-fp16.gguf` can be downloaded as follows:
+
+ ```shell
+ pip install huggingface-hub
+ huggingface-cli download internlm/internlm2_5-1_8b-chat-gguf internlm2_5-1_8b-chat-fp16.gguf --local-dir . --local-dir-use-symlinks False
+ ```
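+
+ To download one of the quantized variants instead, substitute its filename in the command above. To fetch every GGUF file in one go, a glob pattern can be used; a sketch relying on the `--include` filter of `huggingface-cli download`:
+
+ ```shell
+ # Download all GGUF files from this repository into the current directory.
+ huggingface-cli download internlm/internlm2_5-1_8b-chat-gguf --include "*.gguf" --local-dir .
+ ```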
+
+ ## Inference
+
+ You can use `llama-cli` to conduct inference. For a detailed explanation of `llama-cli`, please refer to [this guide](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md).
+
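+ As a quick sanity check that the build and the model work together, a minimal one-shot run is enough; a sketch using the short flags `-m` (model), `-p` (prompt), and `-n` (number of tokens to predict):
+
+ ```shell
+ # Load the model, complete a short prompt, then exit.
+ build/bin/llama-cli -m internlm2_5-1_8b-chat-fp16.gguf -p "Hello" -n 32
+ ```
+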
+ ### Chat example
+
+ ```shell
+ build/bin/llama-cli \
+ --model internlm2_5-1_8b-chat-fp16.gguf \
+ --predict 512 \
+ --ctx-size 4096 \
+ --gpu-layers 24 \
+ --temp 0.8 \
+ --top-p 0.8 \
+ --top-k 50 \
+ --seed 1024 \
+ --color \
+ --prompt "<|im_start|>system\nYou are an AI assistant whose name is InternLM (书生·浦语).\n- InternLM (书生·浦语) is a conversational language model that is developed by Shanghai AI Laboratory (上海人工智能实验室). It is designed to be helpful, honest, and harmless.\n- InternLM (书生·浦语) can understand and communicate fluently in the language chosen by the user such as English and 中文.<|im_end|>\n" \
+ --interactive \
+ --multiline-input \
+ --conversation \
+ --verbose \
+ --logdir workdir/logdir \
+ --in-prefix "<|im_start|>user\n" \
+ --in-suffix "<|im_end|>\n<|im_start|>assistant\n"
+ ```
+
+ ### Function call example
+
+ `llama-cli` example:
+
+ ```shell
+ build/bin/llama-cli \
+ --model internlm2_5-1_8b-chat-fp16.gguf \
+ --predict 512 \
+ --ctx-size 4096 \
+ --gpu-layers 24 \
+ --temp 0.8 \
+ --top-p 0.8 \
+ --top-k 50 \
+ --seed 1024 \
+ --color \
+ --prompt '<|im_start|>system\nYou are InternLM2-Chat, a harmless AI assistant.<|im_end|>\n<|im_start|>system name=<|plugin|>[{"name": "get_current_weather", "parameters": {"required": ["location"], "type": "object", "properties": {"location": {"type": "string", "description": "The city and state, e.g. San Francisco, CA"}, "unit": {"type": "string"}}}, "description": "Get the current weather in a given location"}]<|im_end|>\n<|im_start|>user\n' \
+ --interactive \
+ --multiline-input \
+ --conversation \
+ --verbose \
+ --in-suffix "<|im_end|>\n<|im_start|>assistant\n" \
+ --special
+ ```
+
+ Conversation results, in which the tool call the model emits between `<|action_start|><|plugin|>` and `<|action_end|>` is answered by an `environment` turn carrying the function's return value:
+
+ ```text
+ <s><|im_start|>system
+ You are InternLM2-Chat, a harmless AI assistant.<|im_end|>
+ <|im_start|>system name=<|plugin|>[{"name": "get_current_weather", "parameters": {"required": ["location"], "type": "object", "properties": {"location": {"type": "string", "description": "The city and state, e.g. San Francisco, CA"}, "unit": {"type": "string"}}}, "description": "Get the current weather in a given location"}]<|im_end|>
+ <|im_start|>user
+
+ > I want to know today's weather in Shanghai
+ I need to use the get_current_weather function to get the current weather in Shanghai.<|action_start|><|plugin|>
+ {"name": "get_current_weather", "parameters": {"location": "Shanghai"}}<|action_end|>32
+ <|im_end|>
+
+ > <|im_start|>environment name=<|plugin|>\n{"temperature": 22}
+ The current temperature in Shanghai is 22 degrees Celsius.<|im_end|>
+
+ >
+ ```
+
+ ## Serving
+
+ `llama.cpp` provides an OpenAI-API-compatible server, `llama-server`. You can deploy `internlm2_5-1_8b-chat-fp16.gguf` as a service like this:
+
+ ```shell
+ ./build/bin/llama-server -m ./internlm2_5-1_8b-chat-fp16.gguf -ngl 24
+ ```
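+
+ By default the server listens on `http://localhost:8080`. Since the API is OpenAI-compatible, a quick way to exercise it is plain `curl` against the chat-completions endpoint (a sketch; the request body follows the standard OpenAI schema):
+
+ ```shell
+ # Send one chat request to the OpenAI-compatible endpoint and print the JSON reply.
+ curl http://localhost:8080/v1/chat/completions \
+   -H "Content-Type: application/json" \
+   -d '{
+     "messages": [
+       {"role": "system", "content": "You are a helpful assistant."},
+       {"role": "user", "content": "Say hello in one sentence."}
+     ],
+     "temperature": 0.8
+   }'
+ ```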
+
+ On the client side, you can access the service through the OpenAI API:
+
+ ```python
+ from openai import OpenAI
+
+ client = OpenAI(
+     api_key='YOUR_API_KEY',
+     base_url='http://localhost:8080/v1'
+ )
+ model_name = client.models.list().data[0].id
+ response = client.chat.completions.create(
+     model=model_name,
+     messages=[
+         {"role": "system", "content": "You are a helpful assistant."},
+         {"role": "user", "content": "Provide three suggestions about time management."},
+     ],
+     temperature=0.8,
+     top_p=0.8
+ )
+ print(response)
+ ```
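+
+ If the request fails, it is worth confirming the server is up first; `llama-server` also exposes a simple health endpoint (a sketch):
+
+ ```shell
+ # Returns a small JSON status object when the server is ready.
+ curl http://localhost:8080/health
+ ```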
internlm2_5-1_8b-chat-fp16.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2f2d18adf63dacfa5d35447116e365218508c43debd113de7fbdcf248c9c6dc8
+ size 3780559616
internlm2_5-1_8b-chat-q2_k.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c9459859cdfb4e374038eaa47f10409c673fbd5c555a4b538887fc7fae34d0dd
+ size 771885824
internlm2_5-1_8b-chat-q3_k_m.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fc241b5496c9d863da286d84125bd8a8b8842488237191a55e2b4ad9c4d00194
+ size 964412160
internlm2_5-1_8b-chat-q4_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:da6674980efc0caae393f9c5d10c846ec9b34ee5ec598268010de5d0655a4b9a
+ size 1113971456
internlm2_5-1_8b-chat-q4_k_m.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ceb5ea45cddf01b024f6d5d082bcac2d3fffbc62e72fcfa543cfc6f84b805873
+ size 1172364032
internlm2_5-1_8b-chat-q5_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:eb8540862f5465f90493ddd910d3297f76c8e763ea4e051725d5e8148582ab53
+ size 1326406400
internlm2_5-1_8b-chat-q5_k_m.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:385d62826f341dc97aebfc77248f46b6e1edf0b3d3a1fc4a7f3491ad7c95dcfd
+ size 1356487424
internlm2_5-1_8b-chat-q6_k.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:544f4a569ea169d50a3a7519a11b9396c8f21cd109351860d2790449de4aafa3
+ size 1552118528
internlm2_5-1_8b-chat-q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8526cc24717fcab32b20540c546f8c23a6ea3ff40b86f421a0cd060c8123e8b2
+ size 2009613056