mav23 committed on
Commit
78cf706
1 Parent(s): a67e93d

Upload folder using huggingface_hub

Files changed (3)
  1. .gitattributes +1 -0
  2. README.md +48 -0
  3. novaspark.Q4_0.gguf +3 -0
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+ novaspark.Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,48 @@
+ ---
+ library_name: transformers
+ license: apache-2.0
+ base_model:
+ - grimjim/Llama-3.1-SuperNova-Lite-lorabilterated-8B
+ tags:
+ - generated_from_trainer
+ datasets:
+ - Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned
+ - anthracite-org/stheno-filtered-v1.1
+ - PJMixers/hieunguyenminh_roleplay-deduped-ShareGPT
+ - Gryphe/Sonnet3.5-Charcard-Roleplay
+ - Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned
+ - anthracite-org/kalo-opus-instruct-22k-no-refusal
+ - anthracite-org/nopm_claude_writing_fixed
+ - anthracite-org/kalo_opus_misc_240827
+ model-index:
+ - name: Epiculous/NovaSpark
+   results: []
+ ---
+
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64adfd277b5ff762771e4571/pnFt8anKzuycrmIuB-tew.png)
+
+ Switching things up a bit since the last slew of models were all 12B: we now have NovaSpark! NovaSpark is an 8B model trained on GrimJim's [abliterated](https://huggingface.co/grimjim/Llama-3.1-SuperNova-Lite-lorabilterated-8B) version of Arcee's [SuperNova-Lite](https://huggingface.co/arcee-ai/Llama-3.1-SuperNova-Lite).
+ The hope is that abliteration will remove some of the inherent refusals and censorship of the original model. However, I noticed that finetuning on GrimJim's model undid some of the abliteration, so abliteration will more than likely have to be reapplied to the resulting model to reinforce it.
+
+ # Quants!
+ <strong>full</strong> / [exl2](https://huggingface.co/Epiculous/NovaSpark-exl2) / [gguf](https://huggingface.co/Epiculous/NovaSpark-GGUF)
+
+ ## Prompting
+ This model is trained on the Llama instruct template; the prompting structure goes a little something like this:
+
+ ```
+ <|begin_of_text|><|start_header_id|>system<|end_header_id|>
+
+ {system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
+
+ {prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
+ ```
+
+ ### Context and Instruct
+ This model is trained on llama-instruct, so please use that Context and Instruct template.
+
+ ### Current Top Sampler Settings
+ [Smooth Creativity](https://files.catbox.moe/0ihfir.json): Credit to Juelsman for researching this one!<br/>
+ [Variant Chimera](https://files.catbox.moe/h7vd45.json): Credit to Numbra!<br/>
+ [Spicy_Temp](https://files.catbox.moe/9npj0z.json) <br/>
+ [Violet_Twilight-Nitral-Special](https://files.catbox.moe/ot54u3.json) <br/>
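As a companion to the README's Prompting section, here is a minimal sketch of producing that Llama-3.1 instruct structure with `transformers`. It assumes the full-weight model and its tokenizer are available under `Epiculous/NovaSpark` (the repo id listed in the model-index entry); this is not part of the commit itself.

```python
# Minimal sketch, assuming the full-weight model with its tokenizer is hosted
# at "Epiculous/NovaSpark" (taken from the README's model-index entry).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Epiculous/NovaSpark")

messages = [
    {"role": "system", "content": "You are a helpful roleplay assistant."},
    {"role": "user", "content": "Introduce your character in two sentences."},
]

# apply_chat_template renders the same <|start_header_id|>/<|eot_id|> layout
# shown in the README, ending with the assistant header so generation
# continues from there.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```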
novaspark.Q4_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4a8317b6a121b3ab6d55e61944294408c2f5a30b042e0ee450606a08c964e85d
+ size 4661212640
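The added `novaspark.Q4_0.gguf` is a ~4.7 GB Q4_0 quantization tracked via Git LFS (only the pointer appears in the diff above). Below is a minimal sketch of running it with `llama-cpp-python`, assuming the file has been downloaded locally; the context length and token limit are illustrative, not the sampler settings recommended in the README.

```python
# Minimal sketch, assuming novaspark.Q4_0.gguf sits in the current directory
# and llama-cpp-python is installed.
from llama_cpp import Llama

llm = Llama(
    model_path="novaspark.Q4_0.gguf",
    n_ctx=8192,              # illustrative context length
    chat_format="llama-3",   # matches the Llama instruct template in the README
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful roleplay assistant."},
        {"role": "user", "content": "Introduce your character in two sentences."},
    ],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```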