# Llama-3-Luminurse-v0.2-OAS-8B
This is a merge of pre-trained language models created using mergekit.
Luminurse is a merge based on Lumimaid, enhanced with a biomedical model (at higher strength than v0.1), with a dash of TheSpice thrown in to improve formatting of text generation.
Boosting temperature has the interesting property of reducing repetitiveness while increasing the verbosity of the model. Higher temperature also increases the odds of reasoning slippage (which can be manually mitigated by swiping for regeneration), so settings should be adjusted according to one's comfort level. Lightly tested using Instruct prompts with temperature in the range of 1 to 1.6 (pick something in between to start, perhaps 1.2-1.45) and minP=0.01.
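To make the interaction of these two settings concrete, here is a minimal sketch of temperature scaling followed by min-p filtering over a toy logit vector. The function name and the example logits are illustrative, not taken from any inference engine.

```python
import numpy as np

def sample_probs(logits, temperature=1.3, min_p=0.01):
    """Temperature scaling followed by min-p filtering (a sketch;
    names and values are illustrative)."""
    # Higher temperature flattens the distribution: more varied and
    # verbose output, but a higher chance of reasoning slippage.
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    # min-p keeps only tokens whose probability is at least
    # min_p times the probability of the most likely token.
    probs[probs < min_p * probs.max()] = 0.0
    return probs / probs.sum()

logits = np.array([4.0, 3.5, 2.0, -1.0])
print(sample_probs(logits, temperature=1.3, min_p=0.01))
```

With minP as low as 0.01, only tokens far out in the tail are pruned, which is why it pairs well with the higher temperatures suggested above.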
- static GGUFs, llama-bpe pre-tokenizer
- 8bpw exl2 quant
- static GGUFs, smaug-bpe pre-tokenizer c/o mradermacher
- weighted/imatrix GGUFs, smaug-bpe pre-tokenizer c/o mradermacher
Built with Meta Llama 3.
## Merge Details
### Merge Method
This model was merged using the task arithmetic merge method, with NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS as the base.
### Models Merged
The following models were included in the merge:
- grimjim/llama-3-aaditya-OpenBioLLM-8B
- cgato/L3-TheSpice-8b-v0.8.3
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS
slices:
- sources:
  - model: NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS
    layer_range: [0, 32]
  - model: grimjim/llama-3-aaditya-OpenBioLLM-8B
    layer_range: [0, 32]
    parameters:
      weight: 0.2
  - model: cgato/L3-TheSpice-8b-v0.8.3
    layer_range: [0, 32]
    parameters:
      weight: 0.04
merge_method: task_arithmetic
dtype: bfloat16
```
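Conceptually, task arithmetic adds weighted deltas between each donor model and the base: merged = base + 0.2·(OpenBioLLM − base) + 0.04·(TheSpice − base). The sketch below illustrates this per-tensor arithmetic with plain numpy arrays standing in for model checkpoints; it is an illustration of the idea, not mergekit's actual implementation.

```python
import numpy as np

def task_arithmetic(base, donors):
    """Merge per tensor: base + sum_i w_i * (donor_i - base).
    A toy sketch of the task arithmetic merge method, with dicts of
    numpy arrays standing in for model state dicts."""
    merged = {}
    for name, b in base.items():
        # Each donor contributes its weighted difference from the base.
        delta = sum(w * (d[name] - b) for d, w in donors)
        merged[name] = b + delta
    return merged

# Toy "checkpoints": one tensor each, standing in for the Lumimaid
# base and the two donor models in the configuration above.
base = {"w": np.array([1.0, 1.0])}
biomed = {"w": np.array([2.0, 0.0])}  # merged at weight 0.2
spice = {"w": np.array([1.5, 1.5])}   # merged at weight 0.04

merged = task_arithmetic(base, [(biomed, 0.2), (spice, 0.04)])
print(merged["w"])
```

The small weights mean the merged model stays close to Lumimaid, picking up a moderate biomedical shift and only a light formatting influence from TheSpice.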