
Aura-llama-3

Aura-llama image

Now that the cute anime girl has your attention.

UPDATE: Model has been fixed

Aura-llama uses the methodology presented by SOLAR for scaling LLMs, called depth up-scaling (DUS), which combines architectural modification (layer duplication) with continued pretraining. Using the SOLAR paper as a base, I integrated Llama-3 weights into the upscaled layers, and I plan to continue training the model in the future.
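As a rough illustration, SOLAR-style DUS can be sketched as simple layer-index arithmetic. This is a hedged sketch: the function name and the 32-layer / 8-layer-overlap numbers are SOLAR's illustrative setting, not the exact slices used for Aura-llama (those appear in the configuration further down).

```python
# Hedged sketch of SOLAR-style depth up-scaling (DUS) at the level of
# layer indices: take one copy of the model minus its last m layers,
# a second copy minus its first m layers, and stack them.
def dus_layer_indices(n_layers: int, m: int) -> list[int]:
    head = list(range(0, n_layers - m))   # first copy, last m layers dropped
    tail = list(range(m, n_layers))       # second copy, first m layers dropped
    return head + tail

indices = dus_layer_indices(32, 8)        # SOLAR's setting on a 32-layer base
print(len(indices))                       # 48 = 2 * (32 - 8)
```

The overlapping middle layers appear twice in the stacked model, which is why continued pretraining is needed to heal the seam.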

Aura-llama is a passthrough self-merge of layer slices of NousResearch/Meta-Llama-3-8B-Instruct (see the configuration below), creating a base model to work from:

Merged Evals (Has Not Been Finetuned):

Aura-llama

  • Avg: 63.13
  • ARC: 58.02
  • HellaSwag: 77.82
  • MMLU: 65.61
  • TruthfulQA: 51.94
  • Winogrande: 73.40
  • GSM8K: 52.01

🧩 Configuration


dtype: float16
merge_method: passthrough
slices:
- sources:
  - layer_range: [0, 12]
    model: NousResearch/Meta-Llama-3-8B-Instruct
- sources:
  - layer_range: [8, 20]
    model: NousResearch/Meta-Llama-3-8B-Instruct
- sources:
  - layer_range: [16, 28]
    model: NousResearch/Meta-Llama-3-8B-Instruct
- sources:
  - layer_range: [24, 32]
    model: NousResearch/Meta-Llama-3-8B-Instruct
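A quick sanity check on the slice arithmetic above. The per-layer and embedding parameter figures in the second half are my own approximations for Llama-3-8B, not values from this card:

```python
# Sanity-check the layer count implied by the mergekit config above.
slices = [(0, 12), (8, 20), (16, 28), (24, 32)]
n_layers = sum(end - start for start, end in slices)
print(n_layers)  # 44 transformer layers, up from 32 in Llama-3-8B

# Rough parameter estimate (assumed, approximate figures for Llama-3-8B:
# ~1.05e9 params in the untied embedding + LM head, ~0.218e9 per decoder layer).
approx_params = 1.05e9 + n_layers * 0.218e9
print(round(approx_params / 1e9, 1))  # ~10.6B, consistent with the reported size
```

Note the slices overlap by 4 layers each, so layers 8-11, 16-19, and 24-27 of the base model each appear twice in the merged stack.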
        

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

  • Avg.: 63.13
  • AI2 Reasoning Challenge (25-shot): 58.02
  • HellaSwag (10-shot): 77.82
  • MMLU (5-shot): 65.61
  • TruthfulQA (0-shot): 51.94
  • Winogrande (5-shot): 73.40
  • GSM8K (5-shot): 52.01
Model size: 10.6B params (FP16, safetensors)
