
Model Card for DeepAutoAI/ldm_soup_Llama-3.1-8B-Inst

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License.

Overview

DeepAutoAI/ldm_soup_Llama-3.1-8B-Inst is developed by deepAuto.ai and builds upon the VAGOsolutions/Llama-3.1-SauerkrautLM-8B-Instruct model. Our approach trains a latent diffusion model on the base model's pretrained weights and uses it to optimize those weights for the Winogrande and ARC-Challenge datasets.
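
As a rough illustration of the starting point (not the authors' training code), the sketch below loads the base model and flattens its parameters into a single vector, the kind of weight-space sample a latent diffusion model would be trained on:

```python
import torch
from transformers import AutoModelForCausalLM

# Load the base model whose weight space is modeled by the latent diffusion model.
base = AutoModelForCausalLM.from_pretrained(
    "VAGOsolutions/Llama-3.1-SauerkrautLM-8B-Instruct",
    torch_dtype=torch.bfloat16,
)

# Flatten every parameter into one long vector (~8.03B entries); vectors like
# this are the kind of data a weight-space generative model operates on.
flat_weights = torch.cat([p.detach().flatten() for p in base.parameters()])
print(flat_weights.numel())
```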

Through this process, we learn the distribution of the base model's weight space, enabling us to explore optimal configurations. We then sample multiple sets of weights and use the model-soup averaging technique to identify the best-performing ones on both datasets. The selected weights are merged by linear interpolation to create the final weights of DeepAutoAI/ldm_soup_Llama-3.1-8B-Inst.
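
The merging step itself is simple. Here is a minimal sketch, assuming the sampled weight sets are available as PyTorch state dicts with identical keys; the function and variable names below are illustrative, not the code used for this model:

```python
import torch

def soup_merge(state_dicts, coeffs=None):
    """Linearly interpolate several candidate state dicts into one 'soup'.

    `state_dicts`: list of dicts with identical keys and tensor shapes.
    `coeffs`: interpolation coefficients; defaults to a uniform average.
    """
    if coeffs is None:
        coeffs = [1.0 / len(state_dicts)] * len(state_dicts)
    merged = {}
    for key in state_dicts[0]:
        acc = sum(c * sd[key].to(torch.float32) for c, sd in zip(coeffs, state_dicts))
        merged[key] = acc.to(state_dicts[0][key].dtype)
    return merged

# Hypothetical usage: `sampled_state_dicts` would hold the weight sets sampled
# from the learned diffusion model and filtered by Winogrande/ARC-Challenge
# validation scores before averaging.
# final_weights = soup_merge(sampled_state_dicts)
```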

This approach has led to improved performance on previously unseen leaderboard tasks, all without any additional task-specific training.

This work is currently in progress.

References

Diffusion-Based Neural Network Weights Generation

Evaluation

Results

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric              | Value |
|---------------------|------:|
| Avg.                | 28.64 |
| IFEval (0-shot)     | 80.33 |
| BBH (3-shot)        | 31.10 |
| MATH Lvl 5 (4-shot) | 11.56 |
| GPQA (0-shot)       |  5.26 |
| MuSR (0-shot)       | 11.52 |
| MMLU-PRO (5-shot)   | 32.07 |
Model size: 8.03B params (Safetensors, BF16)
