Commit 70b8d2a (1 parent: 221fb26), committed by unsubscribe

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -33,3 +33,21 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ internlm2_5-7b-chat-fp16.gguf filter=lfs diff=lfs merge=lfs -text
+ internlm2_5-7b-chat-q2_k.gguf filter=lfs diff=lfs merge=lfs -text
+ internlm2_5-7b-chat-q3_k_m.gguf filter=lfs diff=lfs merge=lfs -text
+ internlm2_5-7b-chat-q4_0.gguf filter=lfs diff=lfs merge=lfs -text
+ internlm2_5-7b-chat-q4_k_m.gguf filter=lfs diff=lfs merge=lfs -text
+ internlm2_5-7b-chat-q5_0.gguf filter=lfs diff=lfs merge=lfs -text
+ internlm2_5-7b-chat-q5_k_m.gguf filter=lfs diff=lfs merge=lfs -text
+ internlm2_5-7b-chat-q6_k.gguf filter=lfs diff=lfs merge=lfs -text
+ internlm2_5-7b-chat-q8_0.gguf filter=lfs diff=lfs merge=lfs -text
+ internlm2_5-7b-chat-1m-fp16.gguf filter=lfs diff=lfs merge=lfs -text
+ internlm2_5-7b-chat-1m-q2_k.gguf filter=lfs diff=lfs merge=lfs -text
+ internlm2_5-7b-chat-1m-q3_k_m.gguf filter=lfs diff=lfs merge=lfs -text
+ internlm2_5-7b-chat-1m-q4_0.gguf filter=lfs diff=lfs merge=lfs -text
+ internlm2_5-7b-chat-1m-q4_k_m.gguf filter=lfs diff=lfs merge=lfs -text
+ internlm2_5-7b-chat-1m-q5_0.gguf filter=lfs diff=lfs merge=lfs -text
+ internlm2_5-7b-chat-1m-q5_k_m.gguf filter=lfs diff=lfs merge=lfs -text
+ internlm2_5-7b-chat-1m-q6_k.gguf filter=lfs diff=lfs merge=lfs -text
+ internlm2_5-7b-chat-1m-q8_0.gguf filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,106 @@
---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- chat
---

# InternLM2.5-7B-Chat-1M GGUF Model

## Introduction

The `internlm2_5-7b-chat-1m` model in GGUF format can be utilized by [llama.cpp](https://github.com/ggerganov/llama.cpp), a highly popular open-source framework for Large Language Model (LLM) inference, across a variety of hardware platforms, both locally and in the cloud.
This repository offers `internlm2_5-7b-chat-1m` models in GGUF format in both half precision (`fp16`) and various low-bit quantized versions, including `q2_k`, `q3_k_m`, `q4_0`, `q4_k_m`, `q5_0`, `q5_k_m`, `q6_k`, and `q8_0`.

In the subsequent sections, we first present the installation procedure, then explain the model download process, and finally illustrate model inference and service deployment through concrete examples.

## Installation

We recommend building `llama.cpp` from source. The following code snippets provide an example for the Linux CUDA platform. For instructions on other platforms, please refer to the [official guide](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#build).

- Step 1: create a conda environment and install cmake

```shell
conda create --name internlm2 python=3.10 -y
conda activate internlm2
pip install cmake
```

- Step 2: clone the source code and build the project

```shell
git clone --depth=1 https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release -j
```

All the built targets can be found in the subdirectory `build/bin`.

In the following sections, we assume that the working directory is the root directory of `llama.cpp`.

## Download models

In the [introduction section](#introduction), we mentioned that this repository includes several models with varying levels of computational precision. You can download the appropriate model based on your requirements.
For instance, `internlm2_5-7b-chat-1m-fp16.gguf` can be downloaded as below:

```shell
pip install huggingface-hub
huggingface-cli download internlm/internlm2_5-7b-chat-1m-gguf internlm2_5-7b-chat-1m-fp16.gguf --local-dir . --local-dir-use-symlinks False
```

## Inference

You can use `llama-cli` to conduct inference. For a detailed explanation of `llama-cli`, please refer to [this guide](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md).

```shell
build/bin/llama-cli \
--model internlm2_5-7b-chat-1m-fp16.gguf \
--predict 512 \
--ctx-size 4096 \
--gpu-layers 32 \
--temp 0.8 \
--top-p 0.8 \
--top-k 50 \
--seed 1024 \
--color \
--prompt "<|im_start|>system\nYou are an AI assistant whose name is InternLM (书生·浦语).\n- InternLM (书生·浦语) is a conversational language model that is developed by Shanghai AI Laboratory (上海人工智能实验室). It is designed to be helpful, honest, and harmless.\n- InternLM (书生·浦语) can understand and communicate fluently in the language chosen by the user such as English and 中文.<|im_end|>\n" \
--interactive \
--multiline-input \
--conversation \
--verbose \
--logdir workdir/logdir \
--in-prefix "<|im_start|>user\n" \
--in-suffix "<|im_end|>\n<|im_start|>assistant\n"
```
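The `--prompt`, `--in-prefix`, and `--in-suffix` flags above together assemble a ChatML-style template: each turn is wrapped in `<|im_start|>{role}\n...<|im_end|>\n`, with the assistant header left open for generation. A minimal sketch of building the same prompt string in Python (the `build_prompt` helper is illustrative, not part of llama.cpp):

```python
# Sketch: assemble the ChatML-style prompt that the --prompt / --in-prefix /
# --in-suffix flags above produce. build_prompt is an illustrative helper.
def build_prompt(system: str, user: str) -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"  # left open: the model completes this turn
    )

print(build_prompt("You are a helpful assistant.", "Hi!"))
```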

## Serving

`llama.cpp` provides an OpenAI-API-compatible server, `llama-server`. You can deploy `internlm2_5-7b-chat-1m-fp16.gguf` as a service like this:

```shell
./build/bin/llama-server -m ./internlm2_5-7b-chat-1m-fp16.gguf -ngl 32
```

On the client side, you can access the service through the OpenAI API:

```python
from openai import OpenAI

client = OpenAI(
    api_key='YOUR_API_KEY',
    base_url='http://localhost:8080/v1'
)
model_name = client.models.list().data[0].id
response = client.chat.completions.create(
    model=model_name,
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Provide three suggestions about time management."},
    ],
    temperature=0.8,
    top_p=0.8
)
print(response)
```
internlm2_5-7b-chat-1m-fp16.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:07c1eb5406028d04d3bbf93d5bf53b13ff054a8832b272d0a00323c5fffac833
+ size 15478092608
internlm2_5-7b-chat-1m-q2_k.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d898a1dd4810e506e8d05f312e0281ed33929c503230e1e4a35ee1e5fe24e655
+ size 3005449024
internlm2_5-7b-chat-1m-q3_k_m.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e1f05c3a442e4f03187136b4e235247280b4be70848ba53e4b88348a67e6c754
+ size 3830379328
internlm2_5-7b-chat-1m-q4_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d2c6912a6cd6e37faec1d333fa95e5272b06e3f911c315a1c588b46cc93362bf
+ size 4453245760
internlm2_5-7b-chat-1m-q4_k_m.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b5373bd21a01a76777d52b7929dc9440d6e7ac66d8f221d78117cfb8e9e871e2
+ size 4712768320
internlm2_5-7b-chat-1m-q5_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0cd9cf7f4768d0330ade772dd0dc819bc422a35ad39f21348e1c125078ff2895
+ size 5373043520
internlm2_5-7b-chat-1m-q5_k_m.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ce53338af367ef322e654f2c447a4855b9abe6d77ac9b0d65dee419ae68d6294
+ size 5506736960
internlm2_5-7b-chat-1m-q6_k.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5d5e92b27fb41bd7168d236379fc66589d93bc4bf237881950b65740f550c8c3
+ size 6350328640
internlm2_5-7b-chat-1m-q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3be6827347870aaf46aec26ed8e0c5c533304ac0a00ea1d406cddad0ec151242
+ size 8224240448
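Each `ADDED` entry above is a Git LFS pointer: the repository stores only a `version` line, an `oid` (SHA-256 of the content), and a `size`, while the actual weights live in LFS storage. A downloaded file can therefore be checked against its pointer; a minimal sketch (the `parse_pointer` and `verify` helpers are illustrative, not part of git-lfs):

```python
import hashlib

# Sketch: verify a downloaded file against its Git LFS pointer text.
# parse_pointer/verify are illustrative helpers, not part of git-lfs.
def parse_pointer(text: str) -> dict:
    """Parse the 'key value' lines of a Git LFS pointer file."""
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    return {
        "oid": fields["oid"].removeprefix("sha256:"),
        "size": int(fields["size"]),
    }

def verify(path: str, pointer: dict) -> bool:
    """Check a local file's size and SHA-256 against a parsed pointer."""
    h = hashlib.sha256()
    n = 0
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
            n += len(chunk)
    return n == pointer["size"] and h.hexdigest() == pointer["oid"]
```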