kingbri committed
Commit d5fc582 · verified · Parent: cf69ca8

Update README.md

Files changed (1)
  1. README.md +49 -49
README.md CHANGED
@@ -1,49 +1,49 @@
---
license: other
license_name: llama3
license_link: LICENSE
language:
- en
---

- # PsyOrca2-DARE-13b
+ # L3-Picaro-8B

This is a [Llama 3](https://huggingface.co/meta-llama/Meta-Llama-3-8B)-based model consisting of a merge between:
- [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) (FP16 is not yet available to the public; however, the merge config is.)
- [Trappu/Picaro-lora-l3](https://huggingface.co/Trappu/Picaro-lora-l3) (with a fixed vocab size by merging on llama-2-13b)

This merge was performed with permission from the LoRA creator (Trappu).

Mergekit config (inspired by Charles Goddard):

```yml
merge_method: passthrough
models:
- model: F:\AI\models\Meta-Llama-3-8B+F:\AI\loras\Picaro-lora-l3
dtype: float16
```
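
If you want to reproduce a merge like this, the config above can be fed to mergekit. The sketch below is a minimal example assuming mergekit's Python entry points (`MergeConfiguration`, `run_merge`, `MergeOptions`); the file names and output path are placeholders, and the `mergekit-yaml` CLI is the simpler route if you only need to run the config as-is.

```python
# Minimal sketch: run the passthrough + LoRA config above with mergekit.
# Assumes mergekit is installed (pip install mergekit); paths are placeholders.
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

CONFIG_YML = "./l3-picaro.yml"   # the YAML config shown above, saved to disk
OUTPUT_PATH = "./L3-Picaro-8B"   # directory the merged weights are written to

with open(CONFIG_YML, "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path=OUTPUT_PATH,
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU when available
        copy_tokenizer=True,             # copy the base model's tokenizer into the output
    ),
)
```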

## Usage
This model follows the ChatML instruct format without a system prompt:

```
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
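
As a usage example, here is a minimal generation sketch with Hugging Face `transformers` that builds the prompt in exactly this format. The repo id below is an assumption (adjust it to the actual repository), and the sampling settings are just illustrative defaults.

```python
# Minimal sketch: prompt the model using the ChatML format shown above.
# The repo id is an assumption; point it at the actual repository.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "kingbri/L3-Picaro-8B"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    device_map="auto",   # requires accelerate; places the model on available devices
    torch_dtype="auto",
)

# ChatML without a system prompt, exactly as documented above.
prompt = (
    "<|im_start|>user\n"
    "Write a short scene set in a rainy harbor town.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.8,
)

# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```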

## Bias, Risks, and Limitations

The model will show biases similar to those observed in niche roleplaying forums on the Internet, in addition to those exhibited by the base model. It is not intended for supplying factual information or advice in any form.

## Training Details

This model is a merge. Please refer to the linked repositories of the merged models for details.

## Donate?

All my infrastructure and cloud expenses are paid out of pocket. If you'd like to donate, you can do so here: https://ko-fi.com/kingbri

You should not feel obligated to donate, but if you do, I'd appreciate it.
---