KingNish committed on
Commit 02b0a34
1 Parent(s): f1f5fff

Update README.md

Files changed (1):
  1. README.md +21 -46
README.md CHANGED
@@ -1,61 +1,36 @@
  ---
  tags:
  - merge
  - mergekit
  - lazymergekit
- - KingNish/KingNish-Llama3-8b-v0.2
  base_model:
- - KingNish/KingNish-Llama3-8b-v0.2
- - KingNish/KingNish-Llama3-8b-v0.2
- - KingNish/KingNish-Llama3-8b-v0.2
- - KingNish/KingNish-Llama3-8b-v0.2
- - KingNish/KingNish-Llama3-8b-v0.2
- - KingNish/KingNish-Llama3-8b-v0.2
- - KingNish/KingNish-Llama3-8b-v0.2
  ---
 
- # Power-Llama-3-14b
 
- Power-Llama-3-14b is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
- * [KingNish/KingNish-Llama3-8b-v0.2](https://huggingface.co/KingNish/KingNish-Llama3-8b-v0.2)
- * [KingNish/KingNish-Llama3-8b-v0.2](https://huggingface.co/KingNish/KingNish-Llama3-8b-v0.2)
- * [KingNish/KingNish-Llama3-8b-v0.2](https://huggingface.co/KingNish/KingNish-Llama3-8b-v0.2)
- * [KingNish/KingNish-Llama3-8b-v0.2](https://huggingface.co/KingNish/KingNish-Llama3-8b-v0.2)
- * [KingNish/KingNish-Llama3-8b-v0.2](https://huggingface.co/KingNish/KingNish-Llama3-8b-v0.2)
- * [KingNish/KingNish-Llama3-8b-v0.2](https://huggingface.co/KingNish/KingNish-Llama3-8b-v0.2)
- * [KingNish/KingNish-Llama3-8b-v0.2](https://huggingface.co/KingNish/KingNish-Llama3-8b-v0.2)
 
- ## 🧩 Configuration
 
- ```yaml
- slices:
- - sources:
-   - layer_range: [0, 8]
-     model: KingNish/KingNish-Llama3-8b-v0.2
- - sources:
-   - layer_range: [4, 12]
-     model: KingNish/KingNish-Llama3-8b-v0.2
- - sources:
-   - layer_range: [8, 16]
-     model: KingNish/KingNish-Llama3-8b-v0.2
- - sources:
-   - layer_range: [12, 20]
-     model: KingNish/KingNish-Llama3-8b-v0.2
- - sources:
-   - layer_range: [16, 24]
-     model: KingNish/KingNish-Llama3-8b-v0.2
- - sources:
-   - layer_range: [20, 28]
-     model: KingNish/KingNish-Llama3-8b-v0.2
- - sources:
-   - layer_range: [24, 32]
-     model: KingNish/KingNish-Llama3-8b-v0.2
- merge_method: passthrough
- dtype: float16
- ```
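The removed passthrough config stacked seven overlapping 8-layer slices of one 32-layer model. A minimal sketch of the arithmetic shows where the "14b" in the old model name came from; note the ~8.0B total and ~1.1B embedding/lm_head parameter figures for Llama-3-8B are approximations I am assuming, not values taken from this diff:

```python
# Layer count of the passthrough merge: overlapping slices are simply
# concatenated, so duplicated layers are counted every time they appear.
slices = [(0, 8), (4, 12), (8, 16), (12, 20), (16, 24), (20, 28), (24, 32)]
merged_layers = sum(end - start for start, end in slices)
print(merged_layers)  # 56 layers, versus 32 in the base model

# Rough parameter estimate: embeddings/lm_head are not duplicated by
# layer slicing, only the decoder stack grows. Figures are approximate.
base_layers = 32
base_params = 8.0e9    # assumed total for Llama-3-8B
embed_params = 1.1e9   # assumed embeddings + lm_head share
per_layer = (base_params - embed_params) / base_layers
merged_params = embed_params + merged_layers * per_layer
print(f"{merged_params / 1e9:.0f}B")  # on the order of 13B
```

That rough 13–14B estimate matches both the old `Power-Llama-3-14b` name and the new 13B branding.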
 
  ## 💻 Usage
 
  ```python
  !pip install -qU transformers accelerate
 
@@ -63,7 +38,7 @@ from transformers import AutoTokenizer
  import transformers
  import torch
 
- model = "KingNish/Power-Llama-3-14b"
  messages = [{"role": "user", "content": "What is a large language model?"}]
 
  tokenizer = AutoTokenizer.from_pretrained(model)
@@ -75,6 +50,6 @@ pipeline = transformers.pipeline(
  device_map="auto",
  )
 
- outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
  print(outputs[0]["generated_text"])
  ```
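The diff only shows the lines around the changes, so the usage snippet above elides the prompt-building step (`tokenizer.apply_chat_template`). As an offline illustration, here is roughly what that step produces for these messages; the Llama-3 chat markup below is hand-written and should be treated as an assumption, not something taken from this repository:

```python
# Approximate Llama-3-style chat formatting, reproduced by hand so the
# logic runs without downloading the model or tokenizer.
messages = [{"role": "user", "content": "What is a large language model?"}]

def apply_llama3_template(msgs, add_generation_prompt=True):
    parts = ["<|begin_of_text|>"]
    for m in msgs:
        parts.append(
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n{m['content']}<|eot_id|>"
        )
    if add_generation_prompt:
        # Open an assistant turn so generation continues from here.
        parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

prompt = apply_llama3_template(messages)
print(prompt)
```

In the real script, `prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)` does this with the model's own template.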
 
  ---
  tags:
+ - KingNish
+ - Power Series
+ - llama
+ - llama-3
  - merge
  - mergekit
  - lazymergekit
  base_model:
+ - rhysjones/Phi-3-mini-mango-1-llamafied
+ license: mit
+ library_name: transformers
+ pipeline_tag: text-generation
  ---
 
+ # Power Llama 3 13B Instruct
 
+ Power Llama 3 13B Instruct is a very powerful model in the **Power Series**. Its creativity, problem-solving, maths, and logic are better than Meta Llama 3 70B's, and it beats all 13B models.
 
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6612aedf09f16e7347dfa7e1/3XYEuQ7hXj-dh9abDL2u7.png)
 
+ ## 🧩 Evaluation
+
+ Coming Soon
 
  ## 💻 Usage
 
+ Same as Llama 3, but best at creative writing, logical reasoning, translation, maths, coding, etc.
+
+ ## Code
+
  ```python
  !pip install -qU transformers accelerate
 
  import transformers
  import torch
 
+ model = "refine-ai/Power-Llama-3-13b"
  messages = [{"role": "user", "content": "What is a large language model?"}]
 
  tokenizer = AutoTokenizer.from_pretrained(model)
 
  device_map="auto",
  )
 
+ outputs = pipeline(prompt, max_new_tokens=4096, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
  print(outputs[0]["generated_text"])
  ```
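The generation call above samples with `do_sample=True`, `top_k=50`, and `top_p=0.95`. A small, self-contained sketch of what those two filters do to a next-token distribution (toy probabilities and thresholds below, not the model's):

```python
# top-k then top-p (nucleus) filtering over a toy probability list:
# keep the k most likely tokens, then trim to the smallest prefix whose
# cumulative probability reaches p, and renormalise what remains.
def top_k_top_p_filter(probs, top_k, top_p):
    ranked = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    ranked = ranked[:top_k]                 # top-k cut
    kept, cum = [], 0.0
    for i in ranked:                        # top-p (nucleus) cut
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    total = sum(probs[i] for i in kept)     # renormalise survivors
    return {i: probs[i] / total for i in kept}

dist = top_k_top_p_filter([0.5, 0.3, 0.1, 0.06, 0.04], top_k=4, top_p=0.85)
print(dist)  # only the nucleus of the distribution survives
```

Sampling then draws from `dist` instead of the full vocabulary, which is what makes `temperature`/`top_k`/`top_p` trade diversity against coherence.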