Upload folder using huggingface_hub
Files changed:
- .gitattributes +8 -0
- README.md +2 -39
- TinyLlama-1.1B-1T-OpenOrca.IQ2_XXS.gguf +3 -0
- TinyLlama-1.1B-1T-OpenOrca.IQ3_XXS.gguf +3 -0
- TinyLlama-1.1B-1T-OpenOrca.IQ4_XS.gguf +3 -0
- TinyLlama-1.1B-1T-OpenOrca.Q4_K.gguf +3 -0
- TinyLlama-1.1B-1T-OpenOrca.Q5_K.gguf +3 -0
- TinyLlama-1.1B-1T-OpenOrca.Q6_K.gguf +3 -0
- TinyLlama-1.1B-1T-OpenOrca.Q8_0.gguf +3 -0
- TinyLlama-1.1B-1T-OpenOrca.gguf +3 -0
- imatrix.dat +3 -0
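For reference, a commit titled like this one is typically produced by the `upload_folder` helper from the `huggingface_hub` Python library. A minimal sketch follows; the folder path and repo ID are hypothetical placeholders, not taken from this commit, and the real call needs the library installed plus an authenticated token:

```python
# Sketch only: folder_path and repo_id are hypothetical placeholders.
commit_kwargs = dict(
    folder_path="./TinyLlama-1.1B-1T-OpenOrca-GGUF",          # local folder holding the .gguf files
    repo_id="your-username/TinyLlama-1.1B-1T-OpenOrca-GGUF",  # hypothetical target repo
    commit_message="Upload folder using huggingface_hub",
)

# The actual upload (requires `pip install huggingface_hub` and
# an auth token, e.g. via `huggingface-cli login`):
# from huggingface_hub import upload_folder
# upload_folder(**commit_kwargs)
```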
.gitattributes
CHANGED
@@ -34,3 +34,11 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
 tinyllama-1.1b-1t-openorca.Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
+TinyLlama-1.1B-1T-OpenOrca.IQ2_XXS.gguf filter=lfs diff=lfs merge=lfs -text
+TinyLlama-1.1B-1T-OpenOrca.IQ3_XXS.gguf filter=lfs diff=lfs merge=lfs -text
+TinyLlama-1.1B-1T-OpenOrca.IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
+TinyLlama-1.1B-1T-OpenOrca.Q4_K.gguf filter=lfs diff=lfs merge=lfs -text
+TinyLlama-1.1B-1T-OpenOrca.Q5_K.gguf filter=lfs diff=lfs merge=lfs -text
+TinyLlama-1.1B-1T-OpenOrca.Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+TinyLlama-1.1B-1T-OpenOrca.gguf filter=lfs diff=lfs merge=lfs -text
+imatrix.dat filter=lfs diff=lfs merge=lfs -text
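The `.gitattributes` rules above tell Git LFS which files to track. As a rough sketch, Python's `fnmatch` approximates (but does not exactly reproduce) gitattributes glob matching for slash-free patterns applied to a basename:

```python
from fnmatch import fnmatch

# A subset of the LFS patterns from the .gitattributes file above:
# two pre-existing globs plus two of the literal filenames added here.
lfs_patterns = [
    "*.zst",
    "*tfevents*",
    "TinyLlama-1.1B-1T-OpenOrca.IQ2_XXS.gguf",
    "imatrix.dat",
]

def is_lfs_tracked(name: str) -> bool:
    # Approximation: real gitattributes matching has extra rules
    # (attribute macros, path anchoring) that fnmatch ignores.
    return any(fnmatch(name, pat) for pat in lfs_patterns)

print(is_lfs_tracked("imatrix.dat"))  # True
print(is_lfs_tracked("README.md"))    # False
```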
README.md
CHANGED
@@ -1,42 +1,5 @@
 ---
-language:
-- en
-license: apache-2.0
-tags:
-- llama-cpp
-- gguf-my-repo
-datasets:
-- Open-Orca/OpenOrca
-- bigcode/starcoderdata
-- cerebras/SlimPajama-627B
+base_model: jeff31415/TinyLlama-1.1B-1T-OpenOrca
 ---
 
-
-This model was converted to GGUF format from [`jeff31415/TinyLlama-1.1B-1T-OpenOrca`](https://huggingface.co/jeff31415/TinyLlama-1.1B-1T-OpenOrca) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
-Refer to the [original model card](https://huggingface.co/jeff31415/TinyLlama-1.1B-1T-OpenOrca) for more details on the model.
-## Use with llama.cpp
-
-Install llama.cpp through brew.
-
-```bash
-brew install ggerganov/ggerganov/llama.cpp
-```
-Invoke the llama.cpp server or the CLI.
-
-CLI:
-
-```bash
-llama-cli --hf-repo Felladrin/TinyLlama-1.1B-1T-OpenOrca-Q8_0-GGUF --model tinyllama-1.1b-1t-openorca.Q8_0.gguf -p "The meaning to life and the universe is"
-```
-
-Server:
-
-```bash
-llama-server --hf-repo Felladrin/TinyLlama-1.1B-1T-OpenOrca-Q8_0-GGUF --model tinyllama-1.1b-1t-openorca.Q8_0.gguf -c 2048
-```
-
-Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
-
-```
-git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m tinyllama-1.1b-1t-openorca.Q8_0.gguf -n 128
-```
+GGUF version of [jeff31415/TinyLlama-1.1B-1T-OpenOrca](https://huggingface.co/jeff31415/TinyLlama-1.1B-1T-OpenOrca).
TinyLlama-1.1B-1T-OpenOrca.IQ2_XXS.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b33bf1492becb8bc1ee04f5836d7d6c31bb3dab6e8c8004e5a24c05d93bbeb26
+size 518175808
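Each of the large files in this commit is stored as a Git LFS pointer with exactly the three fields shown (`version`, `oid`, `size`). A small sketch of parsing such a pointer into its fields, using the IQ2_XXS pointer from this commit as input:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a git-lfs pointer file into a {key: value} dict."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:b33bf1492becb8bc1ee04f5836d7d6c31bb3dab6e8c8004e5a24c05d93bbeb26
size 518175808
"""

info = parse_lfs_pointer(pointer)
print(int(info["size"]))           # 518175808
print(info["oid"].split(":", 1))   # hash algorithm and expected digest
```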
TinyLlama-1.1B-1T-OpenOrca.IQ3_XXS.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f0beff1b7753e914bb90c50ffccac5f847c5c90910c1c72c937e8e1035c1dcd9
+size 634059840

TinyLlama-1.1B-1T-OpenOrca.IQ4_XS.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:363b751198f3b54e1a35c1e3131aa3eee45b41e979f5a38ede73b62265685a26
+size 779770944

TinyLlama-1.1B-1T-OpenOrca.Q4_K.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3c16d42538d9f7d596013160a25b7e1f16d831793a07bf2bf19c51edb4b6957f
+size 839334976

TinyLlama-1.1B-1T-OpenOrca.Q5_K.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8d88d08ce3acc3530ad30ce4ee198a9eed6f568e0020f2269049845d300ef8f1
+size 945372224

TinyLlama-1.1B-1T-OpenOrca.Q6_K.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5ddb4762617fd423122f3c84a99e89bdca576efe9bfe9a788b981d40c687b5d9
+size 1058036800

TinyLlama-1.1B-1T-OpenOrca.Q8_0.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7efc09cecf01caaeb2c216b2e6f41a4738575012b441b8ad6646c9cf675402e7
+size 1292688448

TinyLlama-1.1B-1T-OpenOrca.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6d9846d0d249e9e782800b81959fd67015cc250103d79337d77b53ee04874c2f
+size 2201017216

imatrix.dat
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1634ca217daa3fa9243ce8dad10148e3ac73a0393eaf123be8436e10463ee37b
+size 1582042
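Taken together, the `size` fields in the LFS pointers above give the commit's total payload. A quick sketch summing them:

```python
# `size` fields (bytes) copied from the LFS pointers in this commit.
sizes = {
    "TinyLlama-1.1B-1T-OpenOrca.IQ2_XXS.gguf": 518175808,
    "TinyLlama-1.1B-1T-OpenOrca.IQ3_XXS.gguf": 634059840,
    "TinyLlama-1.1B-1T-OpenOrca.IQ4_XS.gguf": 779770944,
    "TinyLlama-1.1B-1T-OpenOrca.Q4_K.gguf": 839334976,
    "TinyLlama-1.1B-1T-OpenOrca.Q5_K.gguf": 945372224,
    "TinyLlama-1.1B-1T-OpenOrca.Q6_K.gguf": 1058036800,
    "TinyLlama-1.1B-1T-OpenOrca.Q8_0.gguf": 1292688448,
    "TinyLlama-1.1B-1T-OpenOrca.gguf": 2201017216,
    "imatrix.dat": 1582042,
}

total = sum(sizes.values())
print(f"{total} bytes (~{total / 2**30:.2f} GiB)")
```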