Update README.md

README.md CHANGED

@@ -9,7 +9,7 @@ license: other
 </div>
 <div style="display: flex; justify-content: space-between; width: 100%;">
 <div style="display: flex; flex-direction: column; align-items: flex-start;">
-<p><a href="https://discord.gg/
+<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
 </div>
 <div style="display: flex; flex-direction: column; align-items: flex-end;">
 <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
@@ -39,9 +39,9 @@ GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/gger
 ```
 ### Instruction:
 <prompt>
-
+
 ### Response:
-
+
 ```
 
 or
@@ -52,10 +52,10 @@ or
 
 ### Input:
 <input>
-
+
 ### Response:
-
-```
+
+```
 
 ## THE FILES IN MAIN BRANCH REQUIRE THE LATEST LLAMA.CPP (May 19th 2023 - commit 2d5db48)!
 
@@ -97,7 +97,7 @@ Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](http
 
 For further support, and discussions on these models and AI in general, join us at:
 
-[TheBloke AI's Discord server](https://discord.gg/
+[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
 
 ## Thanks, and how to contribute.
 
@@ -107,14 +107,14 @@ I've had a lot of people ask if they can contribute. I enjoy providing models an
 
 If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
 
-Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits
+Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
 
 * Patreon: https://patreon.com/TheBlokeAI
 * Ko-Fi: https://ko-fi.com/TheBlokeAI
 
 **Patreon special mentions**: Aemon Algiz, Dmitriy Samsonov, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, Jonathan Leane, Talal Aujan, V. Lukas, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Sebastain Graf, Johann-Peter Hartman.
 
-Thank you to all my generous patrons and donaters
+Thank you to all my generous patrons and donaters!
 <!-- footer end -->
 
 # Original model card: Teknium's LLaMa Deus 7B v3
@@ -124,12 +124,12 @@ LoRA is fully Merged with llama7b, so you do not need to merge it to load the mo
 
 Llama DEUS v3 is the largest dataset I've trained on yet, including:
 
-GPTeacher - General Instruct - Code Instruct - Roleplay Instruct
-My unreleased Roleplay V2 Instruct
-GPT4-LLM Uncensored + Unnatural Instructions
-WizardLM Uncensored
-CamelAI's 20k Biology, 20k Physics, 20k Chemistry, and 50k Math GPT4 Datasets
-CodeAlpaca
+GPTeacher - General Instruct - Code Instruct - Roleplay Instruct
+My unreleased Roleplay V2 Instruct
+GPT4-LLM Uncensored + Unnatural Instructions
+WizardLM Uncensored
+CamelAI's 20k Biology, 20k Physics, 20k Chemistry, and 50k Math GPT4 Datasets
+CodeAlpaca
 
 This model was trained for 4 epochs over 1 day of training; it's a rank 128 LoRA that targets attention heads, LM_Head, and MLP layers.
 
@@ -138,9 +138,9 @@ Prompt format:
 ```
 ### Instruction:
 <prompt>
-
+
 ### Response:
-
+
 ```
 
 or
@@ -151,7 +151,7 @@ or
 
 ### Input:
 <input>
-
+
 ### Response:
-
-```
+
+```