DavidAU committed
Commit: 29c040a
Parent: 8fc3bf4

Update README.md

Files changed (1)
  1. README.md +30 -6
README.md CHANGED
@@ -42,13 +42,39 @@ pipeline_tag: text-generation
 
 <h2>Llama-3.2-1B-Instruct-NEO-SI-FI-GGUF</h2>
 
-It is the new "Llama-3.2-1B-Instruct", max context of 131,000 (128k) with the NEO IMATRIX Science Fictions and Story dataset.
+It is the new "Llama-3.2-1B-Instruct", max context of 131,000 (128k) with the NEO IMATRIX Science Fiction and Story dataset.
+
+The power in this 1B (for its size) is frankly jaw dropping.
 
 This model IS bullet proof and operates with all parameters, including temp settings from 0 to 5.
 
-The NEO IMATRIX dataset V2 was applied to it to enhance creativity.
+The NEO IMATRIX dataset V2 was applied to it to enhance creativity. (see examples below)
+
+<B>Model Template:</B>
+
+This is a LLAMA3 model, and requires Llama3 template, but may work with other template(s) and has maximum context of 8k / 8192.
+However this can be extended using "rope" settings up to 32k.
 
-This model requires Llama3 template.
+If you use "Command-R" template your output will be very different from using "Llama3" template.
+
+Here is the standard LLAMA3 template:
+
+<PRE>
+{
+  "name": "Llama 3",
+  "inference_params": {
+    "input_prefix": "<|start_header_id|>user<|end_header_id|>\n\n",
+    "input_suffix": "<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n",
+    "pre_prompt": "You are a helpful, smart, kind, and efficient AI assistant. You always fulfill the user's requests to the best of your ability.",
+    "pre_prompt_prefix": "<|start_header_id|>system<|end_header_id|>\n\n",
+    "pre_prompt_suffix": "<|eot_id|>",
+    "antiprompt": [
+      "<|start_header_id|>",
+      "<|eot_id|>"
+    ]
+  }
+}
+</PRE>
 
 Please refer to the original model card for this model from Meta-Llama for additional details on operation.
 
@@ -97,6 +123,4 @@ This enhancement WAS NOT used to generate the examples below.
 <B>
 Example generations at TEMP = .8, IQ4_XS, REP PEN 1.1
 
-</B>
-
-
+</B>
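
Beyond LM Studio (which the JSON preset above appears to target), the same settings can be reproduced in a script. Below is a minimal sketch, assuming the llama-cpp-python bindings and a locally downloaded IQ4_XS quant; the GGUF filename is hypothetical. It applies the Llama3 chat format that the preset's prefix/suffix strings encode, and reuses the TEMP = .8 / REP PEN 1.1 values quoted for the example generations.

<PRE>
# Sketch only: assumes llama-cpp-python is installed and the GGUF file exists locally.
from llama_cpp import Llama

llm = Llama(
    model_path="Llama-3.2-1B-Instruct-NEO-SI-FI-IQ4_XS.gguf",  # hypothetical local path
    n_ctx=8192,             # the 8k context stated in the committed card
    chat_format="llama-3",  # builds the <|start_header_id|>/<|eot_id|> prompt, matching the template above
    # rope_freq_base / rope_freq_scale could be set here to stretch context toward 32k
    # as the card suggests; suitable values depend on your setup, so defaults are kept.
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful, smart, kind, and efficient AI assistant. "
                                      "You always fulfill the user's requests to the best of your ability."},
        {"role": "user", "content": "Write the opening scene of a science fiction story."},
    ],
    temperature=0.8,     # TEMP = .8 from the example-generation settings
    repeat_penalty=1.1,  # REP PEN 1.1 from the example-generation settings
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
</PRE>

The sketch only shows how the prompt markers, context size, and sampler values translate outside the LM Studio preset; it is not part of the committed card.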