# L3-Inca-8B-v0.8
L3-Inca-8B-v0.8 is a merge of the following models, using NurtureAI/Meta-Llama-3-8B-Instruct-32k as the base:
- Sao10K/L3-8B-Stheno-v3.2
- Nitral-AI/Hathor-L3-8B-v.02
- grimjim/Llama-3-Luminurse-v0.2-OAS-8B
UPDATE: Changed the merge method from model_stock to ties and gave Stheno the highest weight and density.
## Quantized Models by mradermacher
- Static: L3-Inca-8B-v0.8-GGUF
- Imatrix: L3-Inca-8B-v0.8-i1-GGUF
## Configuration

```yaml
models:
  - model: Sao10K/L3-8B-Stheno-v3.2
    parameters:
      density: 0.85
      weight: 0.5
  - model: Nitral-AI/Hathor-L3-8B-v.02
    parameters:
      density: 0.75
      weight: 0.3
  - model: grimjim/Llama-3-Luminurse-v0.2-OAS-8B
    parameters:
      density: 0.75
      weight: 0.2
merge_method: ties
base_model: NurtureAI/Meta-Llama-3-8B-Instruct-32k
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16
```
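Conceptually, a TIES merge trims each fine-tune's delta from the base model down to its largest-magnitude entries (the `density` parameter), resolves sign disagreements per parameter by majority, and sums the agreeing weighted deltas onto the base. Below is a minimal toy sketch of that idea on 1-D vectors; the function names and simplifications are mine for illustration, and this is not mergekit's actual implementation:

```python
# Toy sketch of a TIES-style merge on small 1-D "parameter" vectors.
# Illustrative only; real merges operate on full model tensors.

def trim(delta, density):
    """Zero out all but the top-`density` fraction of entries by magnitude."""
    k = max(1, round(density * len(delta)))
    threshold = sorted((abs(d) for d in delta), reverse=True)[k - 1]
    return [d if abs(d) >= threshold else 0.0 for d in delta]

def ties_merge(base, finetunes, densities, weights):
    """Merge fine-tuned vectors onto `base` with per-model density/weight."""
    deltas = [
        [w * t for t in trim([f - b for f, b in zip(ft, base)], d)]
        for ft, d, w in zip(finetunes, densities, weights)
    ]
    merged = []
    for i, b in enumerate(base):
        column = [d[i] for d in deltas]
        sign = 1.0 if sum(column) >= 0 else -1.0   # elect the dominant sign
        agreeing = [c for c in column if c * sign > 0]
        # normalize: false -> sum agreeing deltas without renormalizing
        merged.append(b + sum(agreeing))
    return merged

base = [0.0, 0.0, 0.0, 0.0]
fts = [[0.4, -0.1, 0.2, 0.0], [0.1, 0.3, -0.2, 0.1]]
print(ties_merge(base, fts, densities=[0.75, 0.75], weights=[0.5, 0.3]))
# approximately [0.23, 0.09, 0.1, 0.03]
```

Note how the sign election discards the conflicting negative contribution at index 2 instead of letting it partially cancel the stronger positive one; that interference reduction is the point of TIES over a plain weighted average.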