---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- Kukedlc/NeuralKrishna-7B-v3
- Kukedlc/NeuralMarioMonarch-7B-slerp
- liminerity/M7-7b
base_model:
- Kukedlc/NeuralKrishna-7B-v3
- Kukedlc/NeuralMarioMonarch-7B-slerp
- liminerity/M7-7b
model-index:
- name: NeuralSirKrishna-7b
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 73.72
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuralSirKrishna-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 89.05
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuralSirKrishna-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 64.63
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuralSirKrishna-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 75.6
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuralSirKrishna-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 85.32
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuralSirKrishna-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 71.27
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuralSirKrishna-7b
      name: Open LLM Leaderboard
---
<div align="center" style="font-size: 44px;">🧠 NeuralSirKrishna-7b</div>
## 👩‍💻 Try it here!
[Chat with NeuralSirKrishna-7b](https://huggingface.co/spaces/Kukedlc/Neural-Sir-Krishna-7b-GGUF)
![NeuralSirKrishna-7b](https://raw.githubusercontent.com/kukedlc87/imagenes/main/DALL%C2%B7E%202024-02-17%2005.09.10%20-%20Imagine%20a%20large%20language%20model%20represented%20as%20an%20abstract%2C%20ethereal%20entity%2C%20made%20of%20swirling%20codes%20and%20data%20streams%2C%20lying%20on%20a%20classic%20psychoanalysis.webp)
NeuralSirKrishna-7b is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Kukedlc/NeuralKrishna-7B-v3](https://huggingface.co/Kukedlc/NeuralKrishna-7B-v3)
* [Kukedlc/NeuralMarioMonarch-7B-slerp](https://huggingface.co/Kukedlc/NeuralMarioMonarch-7B-slerp)
* [liminerity/M7-7b](https://huggingface.co/liminerity/M7-7b)
# Chat
- User: Create an advanced theory about the origin of life on earth that is new, innovative, and disruptive, totally new with no resemblance to anything else but at the same time that can be real.
- System: I. The Symbiotic Cosmic Initiation Hypothesis (SCIH)
To propose a groundbreaking and disruptive theory about the origin of life on Earth, let's envision an interconnected web of cosmic events that transcends conventional understanding. The Symbiotic Cosmic Initiation Hypothesis (SCIH) posits that life didn't arise from a singular event or molecular accident but rather from a complex, multifaceted interplay between extraterrestrial entities and our planet's nascent environment. Here are its core tenets:
1. Prebiotic Symbionts: SCIH rejects the notion of a solitary "building block" or "spark of life." Instead, it postulates that prebiotic symbionts β self-replicating, organic-inorganic complexes β arrived on Earth from space via meteorites and comets. These symbionts, composed of a diverse array of biomolecules and mineral catalysts, were the products of extraterrestrial ecosystems or "astrobiomes."
2. Astrobiome Cross-Fertilization: The constant exchange of matter between our solar system's planets and asteroid belts facilitated the transfer of these prebiotic symbionts across vast distances. This cosmic symbiosis allowed for the intermingling of genetic and catalytic material, fostering evolutionary innovation and adaptability among these extraterrestrial life-forms.
3. Terrestrial Integration and Adaptation: Upon impacting Earth's surface, these prebiotic symbionts began to integrate with our planet's pre-existing chemical milieu. The resulting synergy between Earth-born and extraterrestrial biomolecules accelerated the formation of more complex organic structures, eventually leading to protocells β rudimentary, self-enclosed systems capable of rudimentary metabolism and replication.
4. The Catalytic Gaian Boost: As Earth's early atmosphere evolved to become more hospitable to life, a positive feedback loop between these protocells and our planet's environment ensued. The emergence of photosynthesis and other metabolic processes contributed to the creation of a self-regulating, life-nurturing environment β what we now call the Ga
![image/png](https://cdn-uploads.huggingface.co/production/uploads/64d71ab4089bc502ceb44d29/-Dzd1TwuUVX-wNdScI2Hb.png)
## 🧩 Configuration
```yaml
models:
  - model: Kukedlc/NeuralKrishna-7B-v3
    # no parameters necessary for base model
  - model: Kukedlc/NeuralKrishna-7B-v3
    parameters:
      density: 0.65
      weight: 0.36
  - model: Kukedlc/NeuralMarioMonarch-7B-slerp
    parameters:
      density: 0.6
      weight: 0.34
  - model: liminerity/M7-7b
    parameters:
      density: 0.4
      weight: 0.3
merge_method: dare_ties
base_model: Kukedlc/NeuralKrishna-7B-v3
parameters:
  int8_mask: true
dtype: bfloat16
random_seed: 0
```
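The merge itself was produced with LazyMergekit (linked above). As a hedged sketch, an equivalent config can also be run locally with mergekit's command-line tool; the file name `config.yaml` and the output directory below are illustrative assumptions, not part of this card:

```python
# Sketch only: assumes mergekit is installed and the YAML above is saved as config.yaml.
# The output directory name is a placeholder.
!pip install -qU mergekit
!mergekit-yaml config.yaml ./NeuralSirKrishna-7b --cuda --copy-tokenizer
```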
## 💻 Usage
```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "Kukedlc/NeuralSirKrishna-7b"
messages = [{"role": "user", "content": """Create an advanced theory about the origin of life on earth that is new,
innovative, and disruptive, totally new with no resemblance to anything else but at the same time that can be real."""}]

# Format the conversation with the model's chat template, keeping it as text
# so the pipeline can tokenize it itself.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Load the model in float16 and place it automatically across available devices.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Sample up to 256 new tokens with nucleus sampling.
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
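If fp16 weights don't fit on your GPU, one option is to load the model with 4-bit quantization via bitsandbytes. This is a minimal sketch under that assumption (bitsandbytes installed, CUDA available); it is not part of the original card:

```python
# Hedged alternative: load the merged model in 4-bit NF4 to reduce VRAM use.
# Assumes: !pip install -qU bitsandbytes (plus transformers/accelerate above).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",             # NF4 quantization of the stored weights
    bnb_4bit_compute_dtype=torch.float16,  # run matmuls in fp16
)
model_4bit = AutoModelForCausalLM.from_pretrained(
    "Kukedlc/NeuralSirKrishna-7b",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("Kukedlc/NeuralSirKrishna-7b")
```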
# Family models:
![image/png](https://cdn-uploads.huggingface.co/production/uploads/64d71ab4089bc502ceb44d29/y4_aqj7zR1yXnuKi5oiMH.png)
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Kukedlc__NeuralSirKrishna-7b).
| Metric |Value|
|---------------------------------|----:|
|Avg. |76.60|
|AI2 Reasoning Challenge (25-Shot)|73.72|
|HellaSwag (10-Shot) |89.05|
|MMLU (5-Shot) |64.63|
|TruthfulQA (0-shot) |75.60|
|Winogrande (5-shot) |85.32|
|GSM8k (5-shot) |71.27|
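
These scores come from the leaderboard's automated harness run. As a rough local reproduction sketch (the exact harness version and batching may differ from the leaderboard's pinned setup, so numbers may not match exactly), a single task can be re-run with EleutherAI's lm-evaluation-harness:

```python
# Hedged sketch: re-running ARC-Challenge (25-shot) locally with lm-eval.
# Scores may differ slightly from the leaderboard's pinned configuration.
!pip install -qU lm-eval
!lm_eval --model hf --model_args pretrained=Kukedlc/NeuralSirKrishna-7b,dtype=float16 --tasks arc_challenge --num_fewshot 25 --batch_size auto
```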