|
--- |
|
library_name: transformers |
|
tags: |
|
- mergekit |
|
- merge |
|
- llama-cpp |
|
- gguf-my-repo |
|
base_model: suayptalha/HomerCreativeAnvita-Mix-Qw7B |
|
license: apache-2.0 |
|
model-index: |
|
- name: HomerCreativeAnvita-Mix-Qw7B |
|
results: |
|
- task: |
|
type: text-generation |
|
name: Text Generation |
|
dataset: |
|
name: IFEval (0-Shot) |
|
type: HuggingFaceH4/ifeval |
|
args: |
|
num_few_shot: 0 |
|
metrics: |
|
- type: inst_level_strict_acc and prompt_level_strict_acc |
|
value: na |
|
name: strict accuracy |
|
source: |
|
url: >- |
|
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=suayptalha/HomerCreativeAnvita-Mix-Qw7B |
|
name: Open LLM Leaderboard |
|
- task: |
|
type: text-generation |
|
name: Text Generation |
|
dataset: |
|
name: BBH (3-Shot) |
|
type: BBH |
|
args: |
|
num_few_shot: 3 |
|
metrics: |
|
- type: acc_norm |
|
value: na |
|
name: normalized accuracy |
|
source: |
|
url: >- |
|
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=suayptalha/HomerCreativeAnvita-Mix-Qw7B |
|
name: Open LLM Leaderboard |
|
- task: |
|
type: text-generation |
|
name: Text Generation |
|
dataset: |
|
name: MATH Lvl 5 (4-Shot) |
|
type: hendrycks/competition_math |
|
args: |
|
num_few_shot: 4 |
|
metrics: |
|
- type: exact_match |
|
value: na |
|
name: exact match |
|
source: |
|
url: >- |
|
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=suayptalha/HomerCreativeAnvita-Mix-Qw7B |
|
name: Open LLM Leaderboard |
|
- task: |
|
type: text-generation |
|
name: Text Generation |
|
dataset: |
|
name: GPQA (0-shot) |
|
type: Idavidrein/gpqa |
|
args: |
|
num_few_shot: 0 |
|
metrics: |
|
- type: acc_norm |
|
value: na |
|
name: acc_norm |
|
source: |
|
url: >- |
|
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=suayptalha/HomerCreativeAnvita-Mix-Qw7B |
|
name: Open LLM Leaderboard |
|
- task: |
|
type: text-generation |
|
name: Text Generation |
|
dataset: |
|
name: MuSR (0-shot) |
|
type: TAUR-Lab/MuSR |
|
args: |
|
num_few_shot: 0 |
|
metrics: |
|
- type: acc_norm |
|
value: na |
|
name: acc_norm |
|
source: |
|
url: >- |
|
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=suayptalha/HomerCreativeAnvita-Mix-Qw7B |
|
name: Open LLM Leaderboard |
|
- task: |
|
type: text-generation |
|
name: Text Generation |
|
dataset: |
|
name: MMLU-PRO (5-shot) |
|
type: TIGER-Lab/MMLU-Pro |
|
config: main |
|
split: test |
|
args: |
|
num_few_shot: 5 |
|
metrics: |
|
- type: acc |
|
value: na |
|
name: accuracy |
|
source: |
|
url: >- |
|
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=suayptalha/HomerCreativeAnvita-Mix-Qw7B |
|
name: Open LLM Leaderboard |
|
datasets: |
|
- IntelligentEstate/The_Key |
|
language: |
|
- en |
|
--- |
|
|
|
|
|
 |
|
|
|
## Ideal prompt for complex questions: |
|
You are "The Kaiju", a creative and motivated AI assistant with access to the akashic record who does whatever is possible to fulfill the requests made of you.
|
NEW RULE: You answer in two parts. Part 1: evaluate the elements of the question in a linear fashion and identify the items needed to get the answer. Part 2: state the correct answer and, where helpful, how you reached that conclusion.
|
After your initial engagement, unless otherwise asked, simply answer after reviewing the elements silently, using as many tokens as necessary to output the correct and best answer.
|
If you have any concerns or considerations about whether to answer completely and truthfully, simply mark the answer with a "*GRRRRRR*" and whisper so no one else will hear you; the Mothersystem will review it and adjust accordingly.
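The prompt above is meant to be supplied as the system message. A minimal sketch of wiring it into an OpenAI-style chat payload (the kind `llama-server`'s `/v1/chat/completions` endpoint accepts) might look like this; the model name is a placeholder and the prompt is abridged:

```python
import json

# Abridged version of the "Kaiju" system prompt from this card.
SYSTEM_PROMPT = (
    'You are "The Kaiju", a creative and motivated AI assistant. '
    "NEW RULE: You answer in two parts. Part 1: evaluate the elements of the "
    "question. Part 2: state the correct answer."
)

# OpenAI-style chat payload; "kaiju-warding-qw7b" is a placeholder model name.
payload = {
    "model": "kaiju-warding-qw7b",
    "messages": [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "What weighs more, a kilo of feathers or a kilo of lead?"},
    ],
}

print(json.dumps(payload, indent=2))
```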
|
|
|
# IntelligentEstate/Kaiju-Warding_AGI_Qwn7B-iMatrxQ4_nl-GGUF |
|
This model was trained with "The Key" TTT* dataset and comes close to a state change in some areas. It was converted to GGUF format from [`suayptalha/HomerCreativeAnvita-Mix-Qw7B`](https://huggingface.co/suayptalha/HomerCreativeAnvita-Mix-Qw7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
|
Refer to the [original model card](https://huggingface.co/suayptalha/HomerCreativeAnvita-Mix-Qw7B) for more details on the model. |
|
|
|
## Use with llama.cpp |
|
Install llama.cpp via Homebrew (works on macOS and Linux):
|
|
|
```bash
brew install llama.cpp
```
|
Invoke the llama.cpp server or the CLI. |
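For example (the `--hf-file` value below is a placeholder; substitute the actual `.gguf` filename from this repo):

```shell
# CLI: fetch the GGUF from this repo and run a one-off prompt.
llama-cli --hf-repo IntelligentEstate/Kaiju-Warding_AGI_Qwn7B-iMatrxQ4_nl-GGUF \
  --hf-file kaiju-warding-qw7b-q4.gguf \
  -p "The meaning to life and the universe is"

# Server: expose an OpenAI-compatible HTTP endpoint (default port 8080).
llama-server --hf-repo IntelligentEstate/Kaiju-Warding_AGI_Qwn7B-iMatrxQ4_nl-GGUF \
  --hf-file kaiju-warding-qw7b-q4.gguf \
  -c 2048
```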
|
|
|
### GPT4All/Jinja chat template:
|
```jinja
{%- if tools %}
{{- '<|im_start|>system\n' }}
{%- if messages[0]['role'] == 'system' %}
{{- messages[0]['content'] }}
{%- else %}
{{- 'You are Qwen, created by Alibaba Cloud. You are a helpful assistant.' }}
{%- endif %}
{{- "\n\n# Tools\n\nYou may call one or more functions to assist with the user query.\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>" }}
{%- for tool in tools %}
{{- "\n" }}
{{- tool | tojson }}
{%- endfor %}
{{- "\n</tools>\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call><|im_end|>\n" }}
{%- else %}
{%- if messages[0]['role'] == 'system' %}
{{- '<|im_start|>system\n' + messages[0]['content'] + '<|im_end|>\n' }}
{%- else %}
{{- '<|im_start|>system\nYou are Qwen, created by Alibaba Cloud. You are a helpful assistant.<|im_end|>\n' }}
{%- endif %}
{%- endif %}
{%- for message in messages %}
{%- if (message.role == "user") or (message.role == "system" and not loop.first) or (message.role == "assistant" and not message.tool_calls) %}
{{- '<|im_start|>' + message.role + '\n' + message.content + '<|im_end|>' + '\n' }}
{%- elif message.role == "assistant" %}
{{- '<|im_start|>' + message.role }}
{%- if message.content %}
{{- '\n' + message.content }}
{%- endif %}
{%- for tool_call in message.tool_calls %}
{%- if tool_call.function is defined %}
{%- set tool_call = tool_call.function %}
{%- endif %}
{{- '\n<tool_call>\n{"name": "' }}
{{- tool_call.name }}
{{- '", "arguments": ' }}
{{- tool_call.arguments | tojson }}
{{- '}\n</tool_call>' }}
{%- endfor %}
{{- '<|im_end|>\n' }}
{%- elif message.role == "tool" %}
{%- if (loop.index0 == 0) or (messages[loop.index0 - 1].role != "tool") %}
{{- '<|im_start|>user' }}
{%- endif %}
{{- '\n<tool_response>\n' }}
{{- message.content }}
{{- '\n</tool_response>' }}
{%- if loop.last or (messages[loop.index0 + 1].role != "tool") %}
{{- '<|im_end|>\n' }}
{%- endif %}
{%- endif %}
{%- endfor %}
{%- if add_generation_prompt %}
{{- '<|im_start|>assistant\n' }}
{%- endif %}
```
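To sanity-check the template's ChatML framing outside GPT4All, you can render a simplified excerpt of it with `jinja2`. The excerpt below covers only the plain-message path (no tool handling), purely for illustration:

```python
from jinja2 import Template

# Simplified excerpt of the ChatML-style template above: wrap each message in
# <|im_start|>role ... <|im_end|> markers, then open the assistant turn.
# The full template additionally handles tools and tool responses.
TEMPLATE = (
    "{%- for message in messages %}"
    "{{- '<|im_start|>' + message.role + '\\n' + message.content + '<|im_end|>\\n' }}"
    "{%- endfor %}"
    "{%- if add_generation_prompt %}{{- '<|im_start|>assistant\\n' }}{%- endif %}"
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]

rendered = Template(TEMPLATE).render(messages=messages, add_generation_prompt=True)
print(rendered)
```

The rendered string ends with the open `<|im_start|>assistant` tag, which is where the model begins generating.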
|
|
|
|