## This model has been trained for 3 epochs using Unsloth on the Internal Knowledge Map dataset.

Since this is a base model, the IKM dataset greatly affects the output. The IKM dataset is purely Markdown-based, so various prompt formats are hit or miss: Mistral Instruct, ChatML, and Alpaca are OK. So far, the best prompt format I've found (from LM Studio) is:

```
{System}
### Instruction:
{User}
### Response:
{Assistant}
```

---

## TRAINING

```
r = 32,
target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
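As a minimal sketch, the template above can be filled in with a small helper before sending text to the model. The `build_prompt` function and the example strings below are hypothetical, not part of the model card; the only thing taken from the card is the line layout of the template itself.

```python
# Hypothetical helper: assembles the LM Studio-style prompt shown above.
# The template lines ({System}, ### Instruction:, {User}, ### Response:)
# come from the model card; everything else here is illustrative.
def build_prompt(system: str, user: str, assistant: str = "") -> str:
    """Fill the prompt template, leaving the response empty for generation."""
    return (
        f"{system}\n"
        "### Instruction:\n"
        f"{user}\n"
        "### Response:\n"
        f"{assistant}"
    )

prompt = build_prompt(
    system="You are a helpful assistant.",
    user="Summarize the Internal Knowledge Map dataset in one sentence.",
)
print(prompt)
```

Leaving `assistant` empty means the string ends right after `### Response:`, which is where the model is expected to continue generating.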