Add gpt-2 gguf model
Signed-off-by: Aisuko <urakiny@gmail.com>
- .gitattributes +1 -0
- README.md +64 -0
- ggml-model-Q4_K_M-v2.gguf +3 -0
- ggml-model-Q4_K_M.gguf +3 -0
- ggml-model-f16.gguf +3 -0
.gitattributes
CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
+*.gguf filter=lfs diff=lfs merge=lfs -text
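For anyone reproducing this setup in their own repo, the added `.gitattributes` line is exactly what `git lfs track` generates; a minimal sketch, assuming Git LFS is installed:

```bash
# Initialize LFS hooks for this repo (a no-op if already done)
git lfs install

# Track GGUF files via LFS; this appends the line above to .gitattributes
git lfs track "*.gguf"

# Commit the attribute change before adding any .gguf files
git add .gitattributes
git commit -m "Track *.gguf with Git LFS"
```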
README.md
CHANGED
@@ -1,3 +1,67 @@

---
license: mit
---

# How to run
```bash
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp

# Build with two fewer jobs than you have CPU cores, e.g. make -j6 on 8 cores
make -j$(($(nproc) - 2))

# Run one of the GGUF files from this repo in interactive chat mode
./llama-cli -m ggml-model-Q4_K_M.gguf -n 256 --repeat_penalty 1.0 --color -i -r "User:" -f prompts/chat-with-bob.txt
```
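For reference, the three files in this commit follow llama.cpp's usual convert-then-quantize flow. A hedged sketch of how they could be produced; the local directory name `gpt2/` is an assumption, and the script and binary names match recent llama.cpp checkouts:

```bash
# Download the original HF checkpoint (117M GPT-2) into ./gpt2
git clone https://huggingface.co/gpt2

# Convert the HF checkpoint to a 16-bit GGUF file
python convert-hf-to-gguf.py gpt2 --outtype f16 --outfile ggml-model-f16.gguf

# Quantize the f16 file down to Q4_K_M
./llama-quantize ggml-model-f16.gguf ggml-model-Q4_K_M.gguf Q4_K_M
```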
# The 117M model's output is not useful

```markdown
system_info: n_threads = 4 / 8 | AVX = 1 | AVX_VNNI = 1 | AVX2 = 1 | AVX512 = 1 | AVX512_VBMI = 1 | AVX512_VNNI = 1 | AVX512_BF16 = 1 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 |
main: interactive mode on.
Reverse prompt: 'User:'
sampling:
	repeat_last_n = 64, repeat_penalty = 1.000, frequency_penalty = 0.000, presence_penalty = 0.000
	top_k = 40, tfs_z = 1.000, top_p = 0.950, min_p = 0.050, typical_p = 1.000, temp = 0.800
	mirostat = 0, mirostat_lr = 0.100, mirostat_ent = 5.000
sampling order:
CFG -> Penalties -> top_k -> tfs_z -> typical_p -> top_p -> min_p -> temperature
generate: n_ctx = 1024, n_batch = 2048, n_predict = 256, n_keep = 0


== Running in interactive mode. ==
 - Press Ctrl+C to interject at any time.
 - Press Return to return control to the AI.
 - To return control without starting a new line, end your input with '/'.
 - If you want to submit another line, end your input with '\'.

Transcript of a dialog, where the User interacts with an Assistant named Bob. Bob is helpful, kind, honest, good at writing, and never fails to answer the User's requests immediately and with precision.

User: Hello, Bob.
Bob: Hello. How may I help you today?
User: Please tell me the largest city in Europe.
Bob: Sure. The largest city in Europe is Moscow, the capital of Russia.
User:What is the largest city in Australia?
Bob: The biggest city in Australia is New York City.

User:New York is a city of US

Bob: The US is a city of the US.

User:thanks

User, you do have a question.

User, you have a question.

Bob: Alright. You are an early user.

User:

llama_print_timings: load time = 29.65 ms
llama_print_timings: sample time = 2.09 ms / 66 runs ( 0.03 ms per token, 31548.76 tokens per second)
llama_print_timings: prompt eval time = 25528.34 ms / 116 tokens ( 220.07 ms per token, 4.54 tokens per second)
llama_print_timings: eval time = 212.84 ms / 63 runs ( 3.38 ms per token, 296.00 tokens per second)
llama_print_timings: total time = 69083.22 ms / 179 tokens
(llama.cpp-4B8ytfKj-py3.10) ec2-user@ip-10-110-145-102:~/workspace/gguf/llama.cpp$
```
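Beyond eyeballing chat output like the transcript above, a more quantitative quality check is perplexity. A minimal sketch using llama.cpp's perplexity tool; the evaluation file `wiki.test.raw` is an assumed local text file, not part of this repo:

```bash
# Lower perplexity is better; compare the f16 and Q4_K_M files on the same text
./llama-perplexity -m ggml-model-f16.gguf -f wiki.test.raw
./llama-perplexity -m ggml-model-Q4_K_M.gguf -f wiki.test.raw
```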
ggml-model-Q4_K_M-v2.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6281d15f9663025df2dffc5f4a4a3850bd833b0d20e1d254bd0dd854f7c722a4
+size 112858624
ggml-model-Q4_K_M.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6281d15f9663025df2dffc5f4a4a3850bd833b0d20e1d254bd0dd854f7c722a4
+size 112858624
ggml-model-f16.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1e9d608e30bb653af1fa113bf725b7e41a37d1494a5db836e59bb3a599d5e6fd
+size 329664992
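Each file above is stored as a Git LFS pointer whose `oid` is the SHA-256 of the actual blob, so a download can be verified against it. A minimal sketch, assuming a shell with `sha256sum`:

```bash
# The printed digest should match the oid in the corresponding LFS pointer
sha256sum ggml-model-f16.gguf
# expected: 1e9d608e30bb653af1fa113bf725b7e41a37d1494a5db836e59bb3a599d5e6fd
```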