---
base_model:
  - upstage/SOLAR-10.7B-Instruct-v1.0
  - NousResearch/Nous-Hermes-2-SOLAR-10.7B
tags:
  - mergekit
  - merge
  - solar
  - gguf
license: apache-2.0
---

# vicgalle/franken-SOLAR-18B-v1.0-GGUF

This is a SOLAR-like model upscaled to 18B parameters. It is a frankenmerge created with mergekit by alternating layers of Nous-Hermes-2-SOLAR-10.7B and SOLAR-10.7B-Instruct-v1.0.

This repo contains the quantized GGUF versions of https://huggingface.co/vicgalle/franken-SOLAR-18B-v1.0
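The GGUF files can be run with llama.cpp or its Python bindings. Below is a minimal sketch using llama-cpp-python, assuming one of the quantized files from this repo has been downloaded locally; the filename, sampling settings, and prompts are placeholders, not fixed by this model card:

```python
from llama_cpp import Llama

# Path to a locally downloaded GGUF file from this repo (placeholder filename)
llm = Llama(
    model_path="./franken-SOLAR-18B-v1.0.Q4_K_M.gguf",
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to the GPU if one is available
)

# OpenAI-style chat completion
output = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Write a short scene between two rival wizards."},
    ],
    max_tokens=512,
    temperature=0.8,
)
print(output["choices"][0]["message"]["content"])
```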


Evaluations coming soon!

This model has very good writing capabilities (compared to SOLAR-10.7B), especially for role-playing.

## Merge Details

### Merge Method

This model was merged using the passthrough merge method, which stacks the selected layer ranges from each source model without interpolating their weights.

### Models Merged

The following models were included in the merge:

- NousResearch/Nous-Hermes-2-SOLAR-10.7B
- upstage/SOLAR-10.7B-Instruct-v1.0

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
    - model: NousResearch/Nous-Hermes-2-SOLAR-10.7B
      layer_range: [0, 12]
  - sources:
    - model: upstage/SOLAR-10.7B-Instruct-v1.0
      layer_range: [6, 18]
  - sources:
    - model: NousResearch/Nous-Hermes-2-SOLAR-10.7B
      layer_range: [13, 25]
  - sources:
    - model: upstage/SOLAR-10.7B-Instruct-v1.0
      layer_range: [19, 31]
  - sources:
    - model: NousResearch/Nous-Hermes-2-SOLAR-10.7B
      layer_range: [26, 38]
  - sources:
    - model: upstage/SOLAR-10.7B-Instruct-v1.0
      layer_range: [32, 44]
  - sources:
    - model: NousResearch/Nous-Hermes-2-SOLAR-10.7B
      layer_range: [39, 48]

merge_method: passthrough
dtype: float16
```
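To reproduce the merge, this YAML can be passed to mergekit (for example via the `mergekit-yaml` CLI). The snippet below is a minimal sketch of the equivalent Python API; the config path, output path, and the exact `MergeOptions` fields are assumptions and may vary with the mergekit version:

```python
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the YAML configuration shown above (placeholder path)
with open("franken-solar-18b.yml", "r", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

# Run the passthrough merge and write the result to a local directory
run_merge(
    merge_config,
    out_path="./franken-SOLAR-18B-v1.0",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # use a GPU if available
        copy_tokenizer=True,             # copy the tokenizer into the output
    ),
)
```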

## Usage

You can use the provided chat template:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("vicgalle/franken-SOLAR-18B-v1.0")
# 4-bit loading requires the bitsandbytes package; drop load_in_4bit to load in plain fp16
model = AutoModelForCausalLM.from_pretrained("vicgalle/franken-SOLAR-18B-v1.0", torch_dtype=torch.float16, load_in_4bit=True)

# Replace SYSTEM_PROMPT and USER_PROMPT with your own strings
conversation = [{'role': 'system', 'content': SYSTEM_PROMPT}, {'role': 'user', 'content': USER_PROMPT}]
prompt = tokenizer.apply_chat_template(conversation, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, use_cache=True, max_new_tokens=1024, do_sample=True, temperature=0.8)
output_text = tokenizer.decode(outputs[0])
```
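The decoded output above includes the echoed prompt. If you only want the newly generated reply, a small continuation of the snippet (same variable names) is:

```python
# Slice off the prompt tokens so only the generated continuation is decoded
new_tokens = outputs[0][inputs["input_ids"].shape[-1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```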