Update README.md
#1
by MaziyarPanahi - opened
README.md CHANGED
---
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
tags:
- quantized
- 2-bit
- 3-bit
- GGUF
- text-generation
model_name: Meta-Llama-3.1-405B-Instruct-GGUF
base_model: meta-llama/Meta-Llama-3.1-405B-Instruct
inference: false
model_creator: meta-llama
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
license: llama3.1
---
# [MaziyarPanahi/Meta-Llama-3.1-405B-Instruct-GGUF](https://huggingface.co/MaziyarPanahi/Meta-Llama-3.1-405B-Instruct-GGUF)
- Model creator: [meta-llama](https://huggingface.co/meta-llama)
- Original model: [meta-llama/Meta-Llama-3.1-405B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-405B-Instruct)

## Description

[MaziyarPanahi/Meta-Llama-3.1-405B-Instruct-GGUF](https://huggingface.co/MaziyarPanahi/Meta-Llama-3.1-405B-Instruct-GGUF) contains GGUF format model files for [meta-llama/Meta-Llama-3.1-405B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-405B-Instruct).
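To fetch just one quantization locally, a minimal sketch using the `huggingface-cli` tool from the `huggingface_hub` package (the `Q2_K` filename pattern is taken from the sample run below; adjust it for other quants):

```sh
# Assumes huggingface_hub is installed with its CLI extra:
#   pip install -U "huggingface_hub[cli]"
# Download only the Q2_K shards of this repo into the current directory.
huggingface-cli download MaziyarPanahi/Meta-Llama-3.1-405B-Instruct-GGUF \
  --include "*Q2_K*" \
  --local-dir .
```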
## Sample

> llama.cpp/llama-cli -m Meta-Llama-3.1-405B-Instruct.Q2_K.gguf-00001-of-00009.gguf -p "write 10 sentences ending with the word apple." -n 1024 -t 40

```
system_info: n_threads = 40 / 80 | AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 |
sampling:
repeat_last_n = 64, repeat_penalty = 1.000, frequency_penalty = 0.000, presence_penalty = 0.000
top_k = 40, tfs_z = 1.000, top_p = 0.950, min_p = 0.050, typical_p = 1.000, temp = 0.800
mirostat = 0, mirostat_lr = 0.100, mirostat_ent = 5.000
sampling order:
CFG -> Penalties -> top_k -> tfs_z -> typical_p -> top_p -> min_p -> temperature
generate: n_ctx = 131072, n_batch = 2048, n_predict = 1024, n_keep = 1


write 10 sentences ending with the word apple.
1. I love to eat a crunchy, juicy apple.
2. The teacher gave the student a shiny, red apple.
3. The farmer plucked a ripe, delicious apple.
4. My favorite snack is a sweet, tasty apple.
5. The child picked a fresh, green apple.
6. The cafeteria served a healthy, sliced apple.
7. The vendor sold a crisp, autumn apple.
8. The artist painted a still life with a golden apple.
9. The baby took a big bite of a soft, mealy apple.
10. The family enjoyed a basket of fresh, orchard apple. [end of text]

llama_print_timings: load time = 1068588.13 ms
llama_print_timings: sample time = 2262.60 ms / 136 runs ( 16.64 ms per token, 60.11 tokens per second)
llama_print_timings: prompt eval time = 339484.02 ms / 11 tokens (30862.18 ms per token, 0.03 tokens per second)
llama_print_timings: eval time = 33458013.45 ms / 135 runs (247837.14 ms per token, 0.00 tokens per second)
llama_print_timings: total time = 33800561.08 ms / 146 tokens
Log end
```
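The Q2_K weights are sharded (`-00001-of-00009.gguf` through `-00009-of-00009.gguf`), and `llama-cli` only needs to be pointed at the first shard, as in the command above. If a single merged file is preferred, here is a sketch assuming the `llama-gguf-split` tool that ships with recent llama.cpp builds:

```sh
# Optional: merge the nine Q2_K shards into one GGUF file.
# llama-gguf-split is built alongside llama-cli in recent llama.cpp releases.
llama.cpp/llama-gguf-split --merge \
  Meta-Llama-3.1-405B-Instruct.Q2_K.gguf-00001-of-00009.gguf \
  Meta-Llama-3.1-405B-Instruct.Q2_K.gguf
```

Merging is optional; the sharded layout mainly keeps individual files under common hosting size limits.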

### About GGUF

GGUF is a format introduced by the llama.cpp team on August 21st, 2023, as a replacement for GGML, which is no longer supported by llama.cpp.

Here is an incomplete list of clients and libraries that are known to support GGUF:

* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option (see the server sketch after this list).
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. A Linux version is available in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open-source locally running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server. Note that, as of the time of writing (November 27th, 2023), ctransformers had not been updated in a long time and does not support many recent models.

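Since llama.cpp also offers a server option (noted in the first bullet above), here is a rough sketch of serving this model locally, assuming a `llama-server` binary built next to `llama-cli`; the flags shown are common ones and may vary across llama.cpp versions:

```sh
# Serve the model over HTTP; pointing at the first shard lets llama.cpp pick up the rest.
llama.cpp/llama-server \
  -m Meta-Llama-3.1-405B-Instruct.Q2_K.gguf-00001-of-00009.gguf \
  -c 8192 \
  --host 0.0.0.0 --port 8080
```

Recent llama.cpp builds expose an OpenAI-compatible `/v1/chat/completions` endpoint on that port, so OpenAI-style clients can be pointed at `http://localhost:8080/v1`.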
## Special thanks

🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.