---
base_model:
- upstage/SOLAR-10.7B-Instruct-v1.0
- NousResearch/Nous-Hermes-2-SOLAR-10.7B
tags:
- mergekit
- merge
- solar
- gguf
license: apache-2.0
---
# vicgalle/franken-SOLAR-18B-v1.0-GGUF

This is a SOLAR-like model upscaled to roughly 18B parameters.
It is a frankenmerge model created using mergekit, alternating layers of Nous-Hermes-2-SOLAR-10.7B and SOLAR-10.7B-Instruct-v1.0.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/5fad8602b8423e1d80b8a965/mMyHMuuftG71_o4at5suy.png)

Evaluations coming soon!

This model has very good writing capabilities (compared to SOLAR-10.7B), especially for role-playing.

## Merge Details
### Merge Method

This model was merged using the passthrough merge method: the selected layer ranges are stacked directly, with no weight interpolation.

### Models Merged

The following models were included in the merge:
* [upstage/SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0)
* [NousResearch/Nous-Hermes-2-SOLAR-10.7B](https://huggingface.co/NousResearch/Nous-Hermes-2-SOLAR-10.7B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
      - model: NousResearch/Nous-Hermes-2-SOLAR-10.7B
        layer_range: [0, 12]
  - sources:
      - model: upstage/SOLAR-10.7B-Instruct-v1.0
        layer_range: [6, 18]
  - sources:
      - model: NousResearch/Nous-Hermes-2-SOLAR-10.7B
        layer_range: [13, 25]
  - sources:
      - model: upstage/SOLAR-10.7B-Instruct-v1.0
        layer_range: [19, 31]
  - sources:
      - model: NousResearch/Nous-Hermes-2-SOLAR-10.7B
        layer_range: [26, 38]
  - sources:
      - model: upstage/SOLAR-10.7B-Instruct-v1.0
        layer_range: [32, 44]
  - sources:
      - model: NousResearch/Nous-Hermes-2-SOLAR-10.7B
        layer_range: [39, 48]

merge_method: passthrough
dtype: float16
```
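
As a quick sanity check on the size, the slices above stack to 81 decoder layers, versus 48 in each SOLAR-10.7B parent, which is where the roughly 18B parameter count comes from. An illustrative calculation:

```python
# Layers contributed by each passthrough slice above (end - start).
slices = [(0, 12), (6, 18), (13, 25), (19, 31), (26, 38), (32, 44), (39, 48)]
total_layers = sum(end - start for start, end in slices)
print(total_layers)              # 81
print(round(10.7 * 81 / 48, 1))  # ~18.1; rough scaling only, embeddings do not grow with depth
```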
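
To reproduce the merge, the YAML above can be fed to mergekit, either via the mergekit-yaml CLI or through its Python API. A minimal sketch of the latter, assuming the configuration is saved as config.yaml (the filename and output path are illustrative):

```python
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the passthrough configuration shown above.
with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Write the merged 18B model to a local directory.
run_merge(
    merge_config,
    out_path="./franken-SOLAR-18B-v1.0",
    options=MergeOptions(cuda=torch.cuda.is_available(), copy_tokenizer=True),
)
```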

### Usage

You can use the provided chat template:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("vicgalle/franken-SOLAR-18B-v1.0")
# load_in_4bit requires the bitsandbytes package; drop it to load in plain fp16
model = AutoModelForCausalLM.from_pretrained("vicgalle/franken-SOLAR-18B-v1.0", torch_dtype=torch.float16, load_in_4bit=True)

# SYSTEM_PROMPT and USER_PROMPT are your own strings
conversation = [{'role': 'system', 'content': SYSTEM_PROMPT}, {'role': 'user', 'content': USER_PROMPT}]
prompt = tokenizer.apply_chat_template(conversation, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, use_cache=True, max_new_tokens=1024, do_sample=True, temperature=0.8)
output_text = tokenizer.decode(outputs[0])
```
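
Since this repository ships GGUF quantizations, the model can also be run without transformers, for example with llama-cpp-python. A minimal sketch, assuming one of the quant files from this repo has been downloaded locally (the filename below is hypothetical; check the repository's file list):

```python
from llama_cpp import Llama

# Hypothetical filename: use whichever GGUF quant you actually downloaded.
llm = Llama(model_path="./franken-SOLAR-18B-v1.0.Q4_K_M.gguf", n_ctx=4096, n_gpu_layers=-1)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Write a short scene set in a tavern."},
    ],
    max_tokens=512,
    temperature=0.8,
)
print(out["choices"][0]["message"]["content"])
```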