---
language:
- multilingual
license: mit
tags:
- nlp
- code
- GGUF
license_link: https://huggingface.co/microsoft/Phi-3-small-128k-instruct/resolve/main/LICENSE
pipeline_tag: text-generation
inference:
  parameters:
    temperature: 0.7
widget:
- messages:
  - role: user
    content: Can you provide ways to eat combinations of bananas and dragonfruits?
quantized_by: andrijdavid
---
# Phi-3-small-128k-instruct-GGUF
- Original model: [Phi-3-small-128k-instruct](https://huggingface.co/microsoft/Phi-3-small-128k-instruct)

<!-- description start -->
## Description

This repo contains GGUF format model files for [Phi-3-small-128k-instruct](https://huggingface.co/microsoft/Phi-3-small-128k-instruct).

<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a format introduced by the llama.cpp team on August 21st, 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF, providing both a Command Line Interface (CLI) and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui). The most widely used web UI, with numerous features and powerful extensions; supports GPU acceleration.
* [Ollama](https://github.com/jmorganca/ollama). A lightweight and extensible framework for building and running language models locally. It features a simple API for creating, managing, and executing models, along with a library of pre-built models for use in various applications.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp). A comprehensive web UI offering GPU acceleration across all platforms and architectures, particularly renowned for storytelling.
* [GPT4All](https://gpt4all.io). A free and open-source GUI that runs locally, supporting Windows, Linux, and macOS with full GPU acceleration.
* [LM Studio](https://lmstudio.ai/). An intuitive and powerful local GUI for Windows and macOS (Silicon), featuring GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui). A notable web UI with a variety of unique features, including a comprehensive model library for easy model selection.
* [Faraday.dev](https://faraday.dev/). An attractive, user-friendly character-based chat GUI for Windows and macOS (both Silicon and Intel), also offering GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python). A Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle). A Rust-based ML framework focused on performance, including GPU support, and designed for ease of use.
* [ctransformers](https://github.com/marella/ctransformers). A Python library with GPU acceleration, LangChain support, and an OpenAI-compatible AI server.
* [localGPT](https://github.com/PromtEngineer/localGPT). An open-source initiative enabling private conversations with documents.
<!-- README_GGUF.md-about-gguf end -->

<!-- compatibility_gguf start -->
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:

* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
</details>
<!-- compatibility_gguf end -->
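
To make the bits-per-weight figures above concrete, here is a small back-of-the-envelope sketch for GGML_TYPE_Q4_K. It assumes the per-super-block scale and min are stored as fp16, which is not stated in the list above but makes the arithmetic come out to the quoted 4.5 bpw:

```python
# Rough bits-per-weight (bpw) arithmetic for GGML_TYPE_Q4_K.
# Assumption (not stated above): the super-block scale and min are fp16 (16 bits each).
blocks_per_superblock = 8
weights_per_block = 32
weights = blocks_per_superblock * weights_per_block        # 256 weights per super-block

quant_bits = weights * 4                                   # 4-bit quants            -> 1024 bits
block_meta_bits = blocks_per_superblock * (6 + 6)          # 6-bit scale + 6-bit min ->   96 bits
superblock_meta_bits = 2 * 16                              # fp16 scale + fp16 min   ->   32 bits

print((quant_bits + block_meta_bits + superblock_meta_bits) / weights)  # 4.5
```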

<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files

**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple quantisation formats are provided, and most users only need to pick and download a single folder.

The following clients/libraries will automatically download models for you, providing a list of available models to choose from:

* LM Studio
* LoLLMS Web UI
* Faraday.dev

### In `text-generation-webui`

Under Download Model, enter the model repo: LiteLLMs/Phi-3-small-128k-instruct-GGUF and, below it, a specific filename to download, such as: Q4_0/Q4_0-00001-of-00009.gguf.

Then click Download.

### On the command line, including multiple files at once

I recommend using the `huggingface-hub` Python library:

```shell
pip3 install huggingface-hub
```

Then you can download any individual model file to the current directory, at high speed, with a command like this:

```shell
huggingface-cli download LiteLLMs/Phi-3-small-128k-instruct-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False
```
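
The same single-file download can also be done from Python with `huggingface_hub`; a minimal sketch using the repo and filename from the command above:

```python
from huggingface_hub import hf_hub_download

# Download one quant file from the repo into the current directory.
path = hf_hub_download(
    repo_id="LiteLLMs/Phi-3-small-128k-instruct-GGUF",
    filename="Q4_0/Q4_0-00001-of-00009.gguf",
    local_dir=".",
)
print(path)
```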

<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>

You can also download multiple files at once with a pattern:

```shell
huggingface-cli download LiteLLMs/Phi-3-small-128k-instruct-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```

For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).

To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:

```shell
pip3 install huggingface_hub[hf_transfer]
```

And set the environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:

```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download LiteLLMs/Phi-3-small-128k-instruct-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False
```

Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
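
The pattern download above also has a Python equivalent; a sketch using `snapshot_download` with `allow_patterns`, with `hf_transfer` enabled via the same environment variable (only effective if `hf_transfer` is installed):

```python
import os

# Must be set before importing huggingface_hub; only effective if hf_transfer is installed.
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"

from huggingface_hub import snapshot_download

# Fetch only the Q4_K files instead of cloning the whole repo.
snapshot_download(
    repo_id="LiteLLMs/Phi-3-small-128k-instruct-GGUF",
    local_dir=".",
    allow_patterns=["*Q4_K*gguf"],
)
```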
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command

Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.

```shell
./main -ngl 35 -m Q4_0/Q4_0-00001-of-00009.gguf --color -c 8192 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<PROMPT>"
```

Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

Change `-c 8192` to the desired sequence length. For extended sequence models - e.g. 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.

For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)

## How to run in `text-generation-webui`

Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).

## How to run from Python code

You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.

### How to load this model in Python code, using llama-cpp-python

For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).

#### First install the package

Run one of the following commands, according to your system:

```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```

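After installing, a quick sanity check that the package imports (the `__version__` attribute is assumed to be present in recent releases):

```python
# Verify the install before loading any model.
import llama_cpp
print(llama_cpp.__version__)
```
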
#### Simple llama-cpp-python example code

```python
from llama_cpp import Llama

# Set n_gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
    model_path="./Q4_0/Q4_0-00001-of-00009.gguf",  # Download the model file first
    n_ctx=32768,      # The max sequence length to use - note that longer sequence lengths require much more resources
    n_threads=8,      # The number of CPU threads to use, tailor to your system and the resulting performance
    n_gpu_layers=35   # The number of layers to offload to GPU, if you have GPU acceleration available
)

# Simple inference example
output = llm(
    "<PROMPT>",      # Prompt
    max_tokens=512,  # Generate up to 512 tokens
    stop=["</s>"],   # Example stop token - not necessarily correct for this specific model! Please check before using.
    echo=True        # Whether to echo the prompt
)

# Chat Completion API
llm = Llama(model_path="./Q4_0/Q4_0-00001-of-00009.gguf", chat_format="llama-2")  # Set chat_format according to the model you are using
llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a story writing assistant."},
        {
            "role": "user",
            "content": "Write a story about llamas."
        }
    ]
)
```

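The `chat_format="llama-2"` line above is only a placeholder. Recent llama-cpp-python releases can usually read the chat template embedded in the GGUF metadata when `chat_format` is left unset; a sketch of that variant (behaviour depends on your llama-cpp-python version, so check the output formatting):

```python
from llama_cpp import Llama

# Assumption: a recent llama-cpp-python build that picks up the chat template
# from the GGUF metadata when chat_format is not specified.
llm = Llama(
    model_path="./Q4_0/Q4_0-00001-of-00009.gguf",
    n_ctx=32768,
    n_gpu_layers=35,  # set to 0 if you have no GPU acceleration
)
response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"}
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```
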
## How to use with LangChain

Here are guides on using llama-cpp-python and ctransformers with LangChain:

* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)

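As a concrete starting point for the first guide, here is a minimal LangChain + llama-cpp-python sketch; it assumes `langchain-community` is installed and reuses the local GGUF path from the examples above:

```python
from langchain_community.llms import LlamaCpp

# Wrap the local GGUF file in LangChain's LlamaCpp LLM class.
llm = LlamaCpp(
    model_path="./Q4_0/Q4_0-00001-of-00009.gguf",
    n_ctx=32768,
    n_gpu_layers=35,  # set to 0 if you have no GPU acceleration
    temperature=0.7,
)

print(llm.invoke("Can you provide ways to eat combinations of bananas and dragonfruits?"))
```
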
<!-- README_GGUF.md-how-to-run end -->

<!-- footer end -->

<!-- original-model-card start -->
# Original model card: Phi-3-small-128k-instruct

## Model Summary

The Phi-3-Small-128K-Instruct is a 7B-parameter, lightweight, state-of-the-art open model trained with the Phi-3 datasets, which include both synthetic data and filtered publicly available website data, with a focus on high-quality and reasoning-dense properties.
The model belongs to the Phi-3 family; the Small version comes in two variants, [8K](https://huggingface.co/microsoft/Phi-3-small-8k-instruct) and [128K](https://huggingface.co/microsoft/Phi-3-small-128k-instruct), which refer to the context length (in tokens) each can support.

The model underwent a post-training process that incorporates both supervised fine-tuning and direct preference optimization for instruction following and safety measures.
When assessed against benchmarks testing common sense, language understanding, math, code, long context and logical reasoning, Phi-3-Small-128K-Instruct showcased robust, state-of-the-art performance among models of the same size and the next size up.

Resources and Technical Documentation:

+ [Phi-3 Microsoft Blog](https://aka.ms/Phi-3Build2024)
+ [Phi-3 Technical Report](https://aka.ms/phi3-tech-report)
+ [Phi-3 on Azure AI Studio](https://aka.ms/phi3-azure-ai)
+ [Phi-3 Cookbook](https://github.com/microsoft/Phi-3CookBook)

| Benchmark | Phi-3-Small-128K-Instruct<br>7b | Gemma<br>7B | Mixtral<br>8x7B | Llama-3-Instruct<br>8b | GPT-3.5-Turbo<br>version 1106 | Gemini<br>Pro | GPT-4-Turbo<br>version 1106 (Chat) |
| -- | -- | -- | -- | -- | -- | -- | -- |
| AGI Eval<br>5-shot | 43.9 | 42.1 | 45.2 | 42.0 | 48.4 | 49.0 | 59.6 |
| MMLU<br>5-shot | 75.5 | 63.6 | 70.5 | 66.5 | 71.4 | 66.7 | 84.0 |
| BigBench Hard<br>3-shot | 77.6 | 59.6 | 69.7 | 51.5 | 68.3 | 75.6 | 87.7 |
| ANLI<br>7-shot | 55.8 | 48.7 | 55.2 | 57.3 | 58.1 | 64.2 | 71.7 |
| HellaSwag<br>5-shot | 79.6 | 49.8 | 70.4 | 71.1 | 78.8 | 76.2 | 88.3 |
| ARC Challenge<br>10-shot | 90.8 | 78.3 | 87.3 | 82.8 | 87.4 | 88.3 | 95.6 |
| ARC Easy<br>10-shot | 97.3 | 91.4 | 95.6 | 93.4 | 96.3 | 96.1 | 98.8 |
| BoolQ<br>2-shot | 83.7 | 66.0 | 76.6 | 80.9 | 79.1 | 86.4 | 91.3 |
| CommonsenseQA<br>10-shot | 80.8 | 76.2 | 78.1 | 79.0 | 79.6 | 81.8 | 86.7 |
| MedQA<br>2-shot | 46.3 | 49.6 | 62.2 | 60.5 | 63.4 | 58.2 | 83.7 |
| OpenBookQA<br>10-shot | 87.8 | 78.6 | 85.8 | 82.6 | 86.0 | 86.4 | 93.4 |
| PIQA<br>5-shot | 88.1 | 78.1 | 86.0 | 75.7 | 86.6 | 86.2 | 90.1 |
| Social IQA<br>5-shot | 78.7 | 65.5 | 75.9 | 73.9 | 68.3 | 75.4 | 81.7 |
| TruthfulQA (MC2)<br>10-shot | 69.6 | 52.1 | 60.1 | 63.2 | 67.7 | 72.6 | 85.2 |
| WinoGrande<br>5-shot | 80.1 | 55.6 | 62.0 | 65.0 | 68.8 | 72.2 | 86.7 |
| TriviaQA<br>5-shot | 66.0 | 72.3 | 82.2 | 67.7 | 85.8 | 80.2 | 73.3 |
| GSM8K Chain of Thought<br>8-shot | 87.3 | 59.8 | 64.7 | 77.4 | 78.1 | 80.4 | 94.2 |
| HumanEval<br>0-shot | 59.1 | 34.1 | 37.8 | 60.4 | 62.2 | 64.4 | 79.9 |
| MBPP<br>3-shot | 70.3 | 51.5 | 60.2 | 67.7 | 77.8 | 73.2 | 86.7 |
| Average | 74.6 | 61.8 | 69.8 | 69.4 | 74.3 | 75.4 | 85.2 |

We take a closer look at different categories across 80 public benchmark datasets in the table below:

| Benchmark | Phi-3-Small-128K-Instruct<br>7b | Gemma<br>7B | Mixtral<br>8x7B | Llama-3-Instruct<br>8b | GPT-3.5-Turbo<br>version 1106 | Gemini<br>Pro | GPT-4-Turbo<br>version 1106 (Chat) |
| -- | -- | -- | -- | -- | -- | -- | -- |
| Popular aggregated benchmark | 70.6 | 59.4 | 66.2 | 59.9 | 67.0 | 67.5 | 80.5 |
| Reasoning | 80.3 | 69.1 | 77.0 | 75.7 | 78.3 | 80.4 | 89.3 |
| Language understanding | 67.4 | 58.4 | 64.9 | 65.4 | 70.4 | 75.3 | 81.6 |
| Code generation | 60.0 | 45.6 | 52.7 | 56.4 | 70.4 | 66.7 | 76.1 |
| Math | 48.1 | 35.8 | 40.3 | 41.1 | 52.8 | 50.9 | 67.1 |
| Factual knowledge | 41.7 | 46.7 | 58.6 | 43.1 | 63.4 | 54.6 | 45.9 |
| Multilingual | 62.6 | 63.2 | 63.4 | 65.0 | 69.1 | 76.5 | 82.0 |
| Robustness | 68.7 | 38.4 | 51.0 | 64.5 | 69.3 | 69.7 | 84.6 |

## Software

* [PyTorch](https://github.com/pytorch/pytorch)
* [DeepSpeed](https://github.com/microsoft/DeepSpeed)
* [Transformers](https://github.com/huggingface/transformers)
* [Flash-Attention](https://github.com/HazyResearch/flash-attention)
* [Tiktoken](https://github.com/openai/tiktoken)
* [Triton](https://github.com/openai/triton)

## Hardware
Note that by default, the Phi-3-Small model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types:
* NVIDIA A100
* NVIDIA A6000
* NVIDIA H100

If you want to run the model with:
+ Optimized inference on GPU, CPU, and mobile: use the **ONNX** models [128K](https://huggingface.co/microsoft/Phi-3-small-128k-instruct-onnx-cuda)

## Cross Platform Support

The ONNX Runtime ecosystem now supports Phi-3 Small models across platforms and hardware.
Optimized Phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML GPU acceleration is supported for Windows desktop GPUs (AMD, Intel, and NVIDIA).
Along with DirectML, ONNX Runtime provides cross-platform support for Phi-3 Small across a range of devices: CPU, GPU, and mobile.
Here are some of the optimized configurations we have added:

1. ONNX models for int4 DML: Quantized to int4 via AWQ
2. ONNX model for fp16 CUDA
3. ONNX model for int4 CUDA: Quantized to int4 via RTN
4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN

## License

The model is licensed under the [MIT license](https://huggingface.co/microsoft/Phi-3-small-128k-instruct/resolve/main/LICENSE).

## Trademarks

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties’ policies.

<!-- original-model-card end -->