MaziyarPanahi committed on
Commit
cec5973
1 Parent(s): 75bf9f7

Upload folder using huggingface_hub (#1)

Browse files

- b804aadbbaf770eea1237ff4653fc91a2ccc1946e89c616d427fa42e69181433 (ddcba5fd56b681bb8719eda70d343a3deb7c3402)
- adbc141c6be028d5bd6aa3eedb1d071ed9dea89c0f2ffe6445fa380c5651ad7f (081bd71c688d9e96e82ed5d20948deedc2202fc0)
- 5c08538c133e532647361bdc3bafd9039983dd1445e1d4fa6cad6f2fbae26e9e (8e16ddf2265e1a64cdf6346485378be70f069656)
- d65fdf20d8d9d51579f9b72fac951e6aeed333c90d64e769da7e61ab49ec8e5a (59360dfc43c57abd6cb0d6857e93b6819295ff96)
- 050cd504df5d5dee68b3c187ebdbafa6e5c3238ba69d081b16c0c65b7f45f7c8 (2f698b4697bbceaf72473cad5ac31612d0f8178c)
- 055a87bf5006178a5fdf07cd284ab2ca95ca49e1cf3e51124e1039b7cebc98b9 (25434554ea82e5687689d81958caf9614dab4ed8)
- b4c6b78c05c847ae83a62aeceb04b85a8263185b8051c0491e74cc644250bf84 (aee312a5780d42826c575c34134326491621b153)
- 685412f86020958041b2007a1d19c671b51ca6cad4608f21c5cdcecd55ba6c16 (3a10499ab9be8d51f10541cc1ac0a07a7a582808)
- e6eb2c83267f0b59b5184a5b9518105ac2b1f760ae94b8e49048a902a1d43a9e (3d571a1ac02e5898a8295285ec9113745f18517d)
- a8939051785819494508e80ff858efe62615ab1bcad558c0d5b8db492a9f2f99 (c21e5200e8ac4aa740d09d8f17ece1754b983d95)
- e0950436e5965ab6777d64d1597607b9b409fc780e4fde9f6ed28e428fc5a40e (36fdbaeebb0b49e9576c70d423b1becea1f194a7)
- 9dda9246353af066ebe4b1bc142193fed2a0b2fc20f6a709e1c4bf195795df06 (246f2b4e0537a685d6af38c45538bf2eb3b6aec1)

.gitattributes CHANGED
@@ -33,3 +33,14 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+NeuralsynthesisInex12-7B.Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+NeuralsynthesisInex12-7B.Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+NeuralsynthesisInex12-7B.Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+NeuralsynthesisInex12-7B.Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+NeuralsynthesisInex12-7B.Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+NeuralsynthesisInex12-7B.Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+NeuralsynthesisInex12-7B.Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+NeuralsynthesisInex12-7B.Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+NeuralsynthesisInex12-7B.Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+NeuralsynthesisInex12-7B.Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
+NeuralsynthesisInex12-7B.fp16.gguf filter=lfs diff=lfs merge=lfs -text
NeuralsynthesisInex12-7B.Q2_K.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5a5b0fccbe32cf81aa435ddd9aa7dc1ededd60a0945c08a1c21342ac185fd792
+size 2719242080
NeuralsynthesisInex12-7B.Q3_K_L.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:945df880ccef85fae8e7ba675b16bc79d2e7596a286ef8467e69247e9d55d239
+size 3822024544
NeuralsynthesisInex12-7B.Q3_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4da2b2c5b67c4b952a2d9eb257057ba15ade7218a23bb2eb9b7d59ec3a31ffd7
+size 3518986080
NeuralsynthesisInex12-7B.Q3_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ac76ccca3cf00a2b0eacb8d57c7eafa5e13da9fe2aa5e39a92817dcd09ba43f8
+size 3164567392
NeuralsynthesisInex12-7B.Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e9e2ea136435a844afe817e62f52ef53b20ad2aac0c223c1183acaae4e4fb011
+size 4368439136
NeuralsynthesisInex12-7B.Q4_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b06640c1df3ccf0116244038b7f082f1e75cae80365bf6fa257f11e7a2d7699d
+size 4140373856
NeuralsynthesisInex12-7B.Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b83a16f7bcd821883192a0ce35670eaeebff8b6d21a37196d57b77032d4f5f91
+size 5131409248
NeuralsynthesisInex12-7B.Q5_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b245bb23759613a3f41fe0ae2048784640436f803f8b829df1da316d4a65b140
+size 4997715808
NeuralsynthesisInex12-7B.Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:25d12765075d754b0a5784d94d0be9f11ecb92723a94e34a2b3ab452a4a28c01
+size 5942064992
NeuralsynthesisInex12-7B.Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:51f4dc26876d1025fabc1d6771f01842955eae99bb8c6c1786bd69541f28ab55
+size 7695857504
NeuralsynthesisInex12-7B.fp16.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0cc19787c1f0d9b219c086b1dfa0dd8f85415b3dfbddf3d3bbde14e27d69b415
+size 14484731744
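Each of the `ADDED` entries above is not the model weight file itself but a Git LFS pointer: three `key value` lines recording the pointer spec version, the SHA-256 of the actual payload, and its size in bytes. A minimal sketch of reading those fields (the `parse_lfs_pointer` helper is hypothetical, not part of this repo):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split a Git LFS pointer into its key/value fields.

    Each line is "<key> <value>"; partition on the first space.
    """
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The Q2_K pointer from this commit, verbatim.
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:5a5b0fccbe32cf81aa435ddd9aa7dc1ededd60a0945c08a1c21342ac185fd792
size 2719242080
"""

info = parse_lfs_pointer(pointer)
print(info["version"])    # https://git-lfs.github.com/spec/v1
print(int(info["size"]))  # 2719242080
```

The `oid` is what `git lfs` uses to fetch the real `.gguf` blob from LFS storage, so the hashes above can double as integrity checks on a downloaded file.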
README.md ADDED
@@ -0,0 +1,59 @@
+---
+tags:
+- quantized
+- 2-bit
+- 3-bit
+- 4-bit
+- 5-bit
+- 6-bit
+- 8-bit
+- GGUF
+- transformers
+- safetensors
+- mistral
+- text-generation
+- merge
+- mergekit
+- lazymergekit
+- automerger
+- base_model:MSL7/INEX12-7b
+- license:apache-2.0
+- autotrain_compatible
+- endpoints_compatible
+- text-generation-inference
+- region:us
+- text-generation
+model_name: NeuralsynthesisInex12-7B-GGUF
+base_model: automerger/NeuralsynthesisInex12-7B
+inference: false
+model_creator: automerger
+pipeline_tag: text-generation
+quantized_by: MaziyarPanahi
+---
+# [MaziyarPanahi/NeuralsynthesisInex12-7B-GGUF](https://huggingface.co/MaziyarPanahi/NeuralsynthesisInex12-7B-GGUF)
+- Model creator: [automerger](https://huggingface.co/automerger)
+- Original model: [automerger/NeuralsynthesisInex12-7B](https://huggingface.co/automerger/NeuralsynthesisInex12-7B)
+
+## Description
+[MaziyarPanahi/NeuralsynthesisInex12-7B-GGUF](https://huggingface.co/MaziyarPanahi/NeuralsynthesisInex12-7B-GGUF) contains GGUF format model files for [automerger/NeuralsynthesisInex12-7B](https://huggingface.co/automerger/NeuralsynthesisInex12-7B).
+
+### About GGUF
+
+GGUF is a format introduced by the llama.cpp team on August 21st, 2023. It replaces GGML, which is no longer supported by llama.cpp.
+
+Here is an incomplete list of clients and libraries that are known to support GGUF:
+
+* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
+* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
+* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. A Linux version is available, in beta as of 27/11/2023.
+* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
+* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU acceleration across all platforms and GPU architectures. Especially good for storytelling.
+* [GPT4All](https://gpt4all.io/index.html), a free and open-source locally running GUI, supporting Windows, Linux, and macOS with full GPU acceleration.
+* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
+* [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
+* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
+* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible AI server. Note: as of this writing (November 27th, 2023), ctransformers has not been updated in a long time and does not support many recent models.
+
+## Special thanks
+
+🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
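When picking one of the quants uploaded in this commit, the `size` fields from the LFS pointers above translate directly into download and memory footprints. A quick sketch (a rough rule of thumb, not an exact RAM estimate, since runtime overhead like the KV cache is extra) converting a few of those byte counts to GiB:

```python
# Byte counts copied from the LFS pointer files in this commit.
quant_sizes_bytes = {
    "Q2_K": 2_719_242_080,
    "Q4_K_M": 4_368_439_136,
    "Q8_0": 7_695_857_504,
    "fp16": 14_484_731_744,
}

GIB = 1024 ** 3  # bytes per GiB

for name, size in quant_sizes_bytes.items():
    # File size in GiB; actual RAM use at inference time will be somewhat higher.
    print(f"{name}: {size / GIB:.2f} GiB")
```

So the Q2_K file is roughly 2.5 GiB on disk while fp16 is about 13.5 GiB, which is the usual quality-versus-footprint trade-off across the quant levels listed here.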