vicgalle committed on
Commit a9e28a9
1 Parent(s): c85fa35

Update README.md

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -8,7 +8,7 @@ tags:
 - solar
 license: apache-2.0
 ---
-# Nous-Hermes-2-SOLAR-18B
+# vicgalle/franken-SOLAR-18B-v1.0
 
 This is a SOLAR-like model upscaled to 18B.
 It is a frankenmerge model created using mergekit, alternating layers of Nous-Hermes-2-SOLAR-10.7B and SOLAR-10.7B-Instruct.
@@ -69,8 +69,8 @@ dtype: float16
 You can use the provided template:
 
 ```
-tokenizer = AutoTokenizer.from_pretrained("vicgalle/Nous-Hermes-2-SOLAR-18B")
-model = AutoModelForCausalLM.from_pretrained("vicgalle/Nous-Hermes-2-SOLAR-18B", torch_dtype=torch.float16, load_in_4bit=True)
+tokenizer = AutoTokenizer.from_pretrained("vicgalle/franken-SOLAR-18B-v1.0")
+model = AutoModelForCausalLM.from_pretrained("vicgalle/franken-SOLAR-18B-v1.0", torch_dtype=torch.float16, load_in_4bit=True)
 
 conversation = [ {'role': 'system', 'content': SYSTEM_PROMPT}, {'role': 'user', 'content': USER_PROMPT} ]
 prompt = tokenizer.apply_chat_template(conversation, tokenize=False, add_generation_prompt=True)
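The README snippet leaves `SYSTEM_PROMPT` and `USER_PROMPT` as user-supplied strings and relies on the tokenizer's built-in chat template. As a minimal sketch of what `apply_chat_template` renders in this case — assuming the merged model inherits the ChatML-style template used by Nous-Hermes-2 (the helper function and example prompts below are hypothetical, not part of the repo) — the prompt string would be built like this:

```python
# Hedged sketch: mimics what tokenizer.apply_chat_template(..., tokenize=False,
# add_generation_prompt=True) would emit for a ChatML-style template.
# Assumption: this model uses ChatML markers (<|im_start|>/<|im_end|>),
# as Nous-Hermes-2 models do; check tokenizer_config.json to confirm.
def apply_chatml_template(conversation, add_generation_prompt=True):
    """Render a list of {'role', 'content'} dicts as a ChatML prompt string."""
    prompt = ""
    for turn in conversation:
        prompt += f"<|im_start|>{turn['role']}\n{turn['content']}<|im_end|>\n"
    if add_generation_prompt:
        # Open the assistant turn so generation continues from here.
        prompt += "<|im_start|>assistant\n"
    return prompt

# Hypothetical example prompts standing in for SYSTEM_PROMPT / USER_PROMPT:
conversation = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]
print(apply_chatml_template(conversation))
```

Feeding the rendered string back through `tokenizer(prompt, return_tensors="pt")` and `model.generate(...)` would complete the README's usage example.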