Upload folder using huggingface_hub (#1)
- 691f0798d621d1e6cb87d207cca12638643e74fc7b51b846af2ef9d149b67a05 (6e552b237c25bba3e2c7e533e60efae65ea1e7a6)
- 829087af7004497f22e419cf0fabb68da1ff424c9dfe5bca82497f05bdd15441 (7d83b622c2ef3960cb00b320d47bce5de50d9203)
- f3b7df95c5c5f984b674e849fb89527058bc3f0ccb298204c50a0010f49b1ab7 (8c47cff4d712c6fa30cb37e1d0a837d37af3fe05)
- 592fa95f0be277e9188e6357d3c58e51202fa7caecf18ff91ff516638dbf2383 (8c0b7182d5895b4dba89cc410d6068f645de1ba4)
- c5ab00ffe0df78b71efd81fe2864ff8a40add069a8043df9eb4c8103363a41d1 (b422a14fbe6e0123bfd75a4ef1df18143b92f1c5)
- 9fb572291ec8097f442ca9b26642319b88d7c21b5455b817bfd033b327fa7ed5 (1b8d26f6eeb548e8546ee854cd4588aa9cd379e3)
- 72c2e8a318e694ba4b71bbebe1f0298c2cd02eee2bf6beeed65068dd5ce076aa (1cdab037d25034f4bcc07d4c7e2bfddee8bd0fb1)
- 35e1b4b2d3eef5eaca906a12898727b72d86c93f4d2f367e083c9f78276c6d65 (eb0e5b5cc3385b7029e63d7583f9bd041e3694ac)
- 842592e363f822a91e0aaf2d8083ee2e566378bbfc4cfb7619eecf8c602f7d35 (6548114b05ef0422f4098324202ce44e62a41576)
- c26c52f30d8dc9ace1ac1b8d6ff81d09bde8b1b5c96faa5d990465d0fdbc523e (2c5c296f7860a11031a16338d5ec871c01195afc)
- 891c367af97ae3f364b4d959a9bf263053193522ee17d94f8dbdfb83e490ae75 (5cb02d2cea6c1ee3f5b5a8aaf99266b7c0f1bd70)
- c1d6fa4be3f3e2be52d78b763fd7e87629fb7be9a4ffbfcd1f2c649244743012 (709bfa1dd57f781d2d0d4f5006f858e360ca51f3)
- .gitattributes +11 -0
- Ognoexperiment27multi_verse_modelExperiment27pastiche-7B.Q2_K.gguf +3 -0
- Ognoexperiment27multi_verse_modelExperiment27pastiche-7B.Q3_K_L.gguf +3 -0
- Ognoexperiment27multi_verse_modelExperiment27pastiche-7B.Q3_K_M.gguf +3 -0
- Ognoexperiment27multi_verse_modelExperiment27pastiche-7B.Q3_K_S.gguf +3 -0
- Ognoexperiment27multi_verse_modelExperiment27pastiche-7B.Q4_K_M.gguf +3 -0
- Ognoexperiment27multi_verse_modelExperiment27pastiche-7B.Q4_K_S.gguf +3 -0
- Ognoexperiment27multi_verse_modelExperiment27pastiche-7B.Q5_K_M.gguf +3 -0
- Ognoexperiment27multi_verse_modelExperiment27pastiche-7B.Q5_K_S.gguf +3 -0
- Ognoexperiment27multi_verse_modelExperiment27pastiche-7B.Q6_K.gguf +3 -0
- Ognoexperiment27multi_verse_modelExperiment27pastiche-7B.Q8_0.gguf +3 -0
- Ognoexperiment27multi_verse_modelExperiment27pastiche-7B.fp16.gguf +3 -0
- README.md +58 -0
.gitattributes

```diff
@@ -33,3 +33,14 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+Ognoexperiment27multi_verse_modelExperiment27pastiche-7B.Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+Ognoexperiment27multi_verse_modelExperiment27pastiche-7B.Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+Ognoexperiment27multi_verse_modelExperiment27pastiche-7B.Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+Ognoexperiment27multi_verse_modelExperiment27pastiche-7B.Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+Ognoexperiment27multi_verse_modelExperiment27pastiche-7B.Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+Ognoexperiment27multi_verse_modelExperiment27pastiche-7B.Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+Ognoexperiment27multi_verse_modelExperiment27pastiche-7B.Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+Ognoexperiment27multi_verse_modelExperiment27pastiche-7B.Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+Ognoexperiment27multi_verse_modelExperiment27pastiche-7B.Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+Ognoexperiment27multi_verse_modelExperiment27pastiche-7B.Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
+Ognoexperiment27multi_verse_modelExperiment27pastiche-7B.fp16.gguf filter=lfs diff=lfs merge=lfs -text
```
The eleven GGUF files are committed as Git LFS pointers (filenames taken from the changed-file list above, in order):

```diff
Ognoexperiment27multi_verse_modelExperiment27pastiche-7B.Q2_K.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cc5843d17d8a69252c62188e33fb2a4cc67a6104d1f030c5b9f531727298151a
+size 2719242080

Ognoexperiment27multi_verse_modelExperiment27pastiche-7B.Q3_K_L.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:84a4596dace7c184cb0a4088f92f4d772ca0e96c3739be300e783bde7611c2f8
+size 3822024544

Ognoexperiment27multi_verse_modelExperiment27pastiche-7B.Q3_K_M.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:675119981a4895bdb9ac0959fc2a50283c63ea02097b67331529bb216534e3a0
+size 3518986080

Ognoexperiment27multi_verse_modelExperiment27pastiche-7B.Q3_K_S.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a1e8bab5eb3ddca9a64dccfa5d90a88fe68e6418b4232c51fb779c47776e8aeb
+size 3164567392

Ognoexperiment27multi_verse_modelExperiment27pastiche-7B.Q4_K_M.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ca0b82e38c7751c77358e2f9f0886467e22c16a0768618ddb33ad8922497b4ff
+size 4368439136

Ognoexperiment27multi_verse_modelExperiment27pastiche-7B.Q4_K_S.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5add7c553e0dd1e18f7312d69f72abe0653d84dc7554f66a786e9a62986fb83a
+size 4140373856

Ognoexperiment27multi_verse_modelExperiment27pastiche-7B.Q5_K_M.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ea47f531f5e5fb6fe2391897f602e179d80e80f421f2afde3712153e6f04d4c8
+size 5131409248

Ognoexperiment27multi_verse_modelExperiment27pastiche-7B.Q5_K_S.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fdec9b75d4327b8fc784eb36bdbb0656b441c45dac2df113aefe7a09bc5b147a
+size 4997715808

Ognoexperiment27multi_verse_modelExperiment27pastiche-7B.Q6_K.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:93eabbb7b2dc32dd630b48bc10648421750a8608a7de791a765941eb5a7aebb7
+size 5942064992

Ognoexperiment27multi_verse_modelExperiment27pastiche-7B.Q8_0.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0a1d3059253be6a8d624ef46ca429a2ab3de9c7b40be00f7262f41aeb3282b30
+size 7695857504

Ognoexperiment27multi_verse_modelExperiment27pastiche-7B.fp16.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:503abf978e74b5cc4553d3dac9e7c2934a5bbab8b5008144ad394504505642c7
+size 14484731744
```
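Each pointer above is a three-line text stub that Git stores in place of the multi-gigabyte blob: the LFS spec version, the SHA-256 of the real file, and its size in bytes. A minimal sketch of parsing such a pointer (the helper name is mine, not part of any repo):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into its version/oid/size fields.

    Each pointer line is 'key value'; the oid carries a 'sha256:' prefix
    and the size is the byte count of the actual blob.
    """
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return {
        "version": fields["version"],
        "oid": fields["oid"].removeprefix("sha256:"),
        "size": int(fields["size"]),
    }
```

This lets you, for example, check a downloaded file's size and SHA-256 against the pointer before loading it.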
README.md
---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- mistral
- text-generation
- merge
- mergekit
- lazymergekit
- automerger
- license:apache-2.0
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- region:us
- text-generation
model_name: Ognoexperiment27multi_verse_modelExperiment27pastiche-7B-GGUF
base_model: automerger/Ognoexperiment27multi_verse_modelExperiment27pastiche-7B
inference: false
model_creator: automerger
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/Ognoexperiment27multi_verse_modelExperiment27pastiche-7B-GGUF](https://huggingface.co/MaziyarPanahi/Ognoexperiment27multi_verse_modelExperiment27pastiche-7B-GGUF)
- Model creator: [automerger](https://huggingface.co/automerger)
- Original model: [automerger/Ognoexperiment27multi_verse_modelExperiment27pastiche-7B](https://huggingface.co/automerger/Ognoexperiment27multi_verse_modelExperiment27pastiche-7B)

## Description

[MaziyarPanahi/Ognoexperiment27multi_verse_modelExperiment27pastiche-7B-GGUF](https://huggingface.co/MaziyarPanahi/Ognoexperiment27multi_verse_modelExperiment27pastiche-7B-GGUF) contains GGUF format model files for [automerger/Ognoexperiment27multi_verse_modelExperiment27pastiche-7B](https://huggingface.co/automerger/Ognoexperiment27multi_verse_modelExperiment27pastiche-7B).
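The folder was uploaded with huggingface_hub (per the commit title), and the same library can fetch a single quantized file instead of cloning the whole repo. A minimal sketch — the `quant_filename`/`download_quant` helpers are mine, not part of this repo:

```python
# Fetch one quantized GGUF file from this repo with huggingface_hub.
MODEL_BASE = "Ognoexperiment27multi_verse_modelExperiment27pastiche-7B"
REPO_ID = f"MaziyarPanahi/{MODEL_BASE}-GGUF"


def quant_filename(quant: str) -> str:
    """Return the repo filename for a quantization level such as 'Q4_K_M'."""
    return f"{MODEL_BASE}.{quant}.gguf"


def download_quant(quant: str = "Q4_K_M") -> str:
    """Download one quantized file from the Hub and return its local path.

    Requires network access; the Q4_K_M file is ~4.4 GB on first download
    and is cached by huggingface_hub afterwards.
    """
    from huggingface_hub import hf_hub_download  # pip install huggingface_hub
    return hf_hub_download(repo_id=REPO_ID, filename=quant_filename(quant))
```

For example, `download_quant("Q2_K")` would fetch the smallest quantization listed above.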

### About GGUF

GGUF is a new format introduced by the llama.cpp team on August 21st, 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

Here is an incomplete list of clients and libraries that are known to support GGUF:

* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU acceleration across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open-source locally running GUI, supporting Windows, Linux and macOS with full GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible AI server. Note: as of the time of writing (November 27th, 2023), ctransformers has not been updated in a long time and does not support many recent models.
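Any of the quantized files in this repo can be run locally with llama-cpp-python from the list above. A minimal sketch — the Mistral `[INST]` prompt template and the sampling settings are assumptions on my part, not stated in this card; check the base model for the exact chat format:

```python
def build_prompt(user_message: str) -> str:
    """Wrap a user message in the Mistral [INST] chat template (an assumed
    format for this Mistral-based merge)."""
    return f"<s>[INST] {user_message} [/INST]"


def generate(model_path: str, user_message: str, max_tokens: int = 256) -> str:
    """Run one completion against a local GGUF file via llama-cpp-python."""
    from llama_cpp import Llama  # pip install llama-cpp-python
    llm = Llama(model_path=model_path, n_ctx=2048, verbose=False)
    out = llm(build_prompt(user_message), max_tokens=max_tokens, temperature=0.7)
    return out["choices"][0]["text"]
```

Pointing `model_path` at, say, the downloaded Q4_K_M file would trade some quality for roughly a third of the fp16 file's memory footprint.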
## Special thanks

🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.