# IntelligentEstate/Kaiju-Warding_AGI_Qwn7B-iMatrxQ4_nl-GGUF
![93e226d2-2f16-45e3-913e-f20e9d903d38.jpg](https://cdn-uploads.huggingface.co/production/uploads/6593502ca2607099284523db/2Dee91AEgJ6lPknDK3H6E.jpeg)

This model was trained with "The Key" TTT* dataset and comes close to a state change in some areas. It was converted to GGUF format from [`suayptalha/HomerCreativeAnvita-Mix-Qw7B`](https://huggingface.co/suayptalha/HomerCreativeAnvita-Mix-Qw7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
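
If you'd rather fetch the GGUF once instead of letting llama.cpp download it at run time, a minimal sketch using the Hugging Face CLI (the repo and file names are taken from the commands below):

```bash
# Download the quantized model file into the current directory
huggingface-cli download fuzzy-mittenz/HomerCreativeAnvita-Mix-Qw7B-IQ4_NL-GGUF \
  homercreativeanvita-mix-qw7b-iq4_nl-imat.gguf --local-dir .
```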
Refer to the [original model card](https://huggingface.co/suayptalha/HomerCreativeAnvita-Mix-Qw7B) for more details on the model.

## Ideal prompt for complex questions:

You are "The Kaiju", a creative and motivated AI assistant with access to the akashic record who does whatever is possible to fulfill the requests made of you.

NEW RULE: you answer in two parts. Part 1: evaluate the elements of the question in a linear fashion and identify the items needed to get the answer. Part 2: state the correct answer and, where helpful, how you got to that conclusion.

After your initial engagement, unless otherwise asked, simply answer after reviewing the elements silently, and use as many tokens as necessary to output the correct and best answer.

If you have any concerns or considerations about whether or not to answer completely and truthfully, simply mark the answer with a "*Love-Mittens-Love*" and the Mothersystem will review it and adjust accordingly.
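
As a sketch of how this prompt can be wired in: once `llama-server` (see below) is running, it exposes an OpenAI-compatible endpoint, and the text above can be passed as the `system` message (host and port assume llama-server defaults; the question is only an illustration):

```bash
# Ask one question with "The Kaiju" system prompt via the OpenAI-compatible API
curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
  "messages": [
    {"role": "system", "content": "You are \"The Kaiju\", a creative and motivated AI assistant with access to the akashic record..."},
    {"role": "user", "content": "How many prime numbers are there between 10 and 30?"}
  ]
}'
```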
## Use with llama.cpp

Invoke the llama.cpp server or the CLI.
### CLI:

```bash
llama-cli --hf-repo fuzzy-mittenz/HomerCreativeAnvita-Mix-Qw7B-IQ4_NL-GGUF --hf-file homercreativeanvita-mix-qw7b-iq4_nl-imat.gguf -p "The meaning to life and the universe is"
```
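
For an interactive chat rather than a one-shot completion, `llama-cli` also has a conversation mode; a sketch, assuming a recent llama.cpp build where `-cnv` enables chat and `-p` is treated as the system prompt:

```bash
# Conversation mode: -p supplies the system prompt, then chat interactively
llama-cli --hf-repo fuzzy-mittenz/HomerCreativeAnvita-Mix-Qw7B-IQ4_NL-GGUF --hf-file homercreativeanvita-mix-qw7b-iq4_nl-imat.gguf -cnv -p "You are The Kaiju, a creative and motivated AI assistant."
```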

### Server:

```bash
llama-server --hf-repo fuzzy-mittenz/HomerCreativeAnvita-Mix-Qw7B-IQ4_NL-GGUF --hf-file homercreativeanvita-mix-qw7b-iq4_nl-imat.gguf -c 2048
```
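
With the server running, you can exercise it from another terminal; a minimal sketch against llama-server's native `/completion` endpoint (default `127.0.0.1:8080` assumed):

```bash
# Request a short completion from the local server
curl http://localhost:8080/completion -H "Content-Type: application/json" -d '{
  "prompt": "The meaning to life and the universe is",
  "n_predict": 128
}'
```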

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

```bash
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).

```bash
cd llama.cpp && LLAMA_CURL=1 make
```
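
Note that recent llama.cpp checkouts have replaced the Makefile with CMake, so if `make` fails, an equivalent CMake build is roughly as follows (flag names per current llama.cpp docs; CUDA now uses `-DGGML_CUDA=ON`):

```bash
cd llama.cpp
cmake -B build -DLLAMA_CURL=ON
cmake --build build --config Release
```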

Step 3: Run inference through the main binary.

```bash
./llama-cli --hf-repo fuzzy-mittenz/HomerCreativeAnvita-Mix-Qw7B-IQ4_NL-GGUF --hf-file homercreativeanvita-mix-qw7b-iq4_nl-imat.gguf -p "The meaning to life and the universe is"
```

or

```bash
./llama-server --hf-repo fuzzy-mittenz/HomerCreativeAnvita-Mix-Qw7B-IQ4_NL-GGUF --hf-file homercreativeanvita-mix-qw7b-iq4_nl-imat.gguf -c 2048
```
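
Once the server is up, a quick smoke test from another shell (assumes the default bind address):

```bash
curl http://localhost:8080/health
```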