Llama-3-Smaug-8B

Built with Meta Llama 3


This model was built by applying the Smaug recipe, which improves performance on real-world multi-turn conversations, to meta-llama/Meta-Llama-3-8B-Instruct.

Model Description

Evaluation

MT-Bench

```
########## First turn ##########
                               score
model                    turn
Llama-3-Smaug-8B         1     8.77500
Meta-Llama-3-8B-Instruct 1     8.31250
########## Second turn ##########
                               score
model                    turn
Meta-Llama-3-8B-Instruct 2     7.8875
Llama-3-Smaug-8B         2     7.8875
########## Average ##########
                               score
model
Llama-3-Smaug-8B         8.331250
Meta-Llama-3-8B-Instruct 8.10
```

| Model | First turn | Second turn | Average |
| --- | --- | --- | --- |
| Llama-3-Smaug-8B | 8.78 | 7.89 | 8.33 |
| Meta-Llama-3-8B-Instruct | 8.31 | 7.89 | 8.10 |

This version of Smaug uses new techniques and new data compared to Smaug-72B; more information will be released later. For now, see the previous Smaug paper: https://arxiv.org/abs/2402.13228.
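Since the model targets multi-turn conversations, prompts should follow the Llama 3 chat format inherited from the base model. The sketch below assembles such a prompt by hand, purely for illustration; the special-token names follow Meta's published Llama 3 template, and in practice `tokenizer.apply_chat_template(messages, add_generation_prompt=True)` from the `transformers` library builds this string for you.

```python
def format_llama3_chat(messages):
    """Render a list of {"role", "content"} messages into the Llama 3
    chat format, ending with an open assistant header so the model
    continues as the assistant."""
    parts = ["<|begin_of_text|>"]
    for m in messages:
        parts.append(
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n"
            f"{m['content']}<|eot_id|>"
        )
    # Cue the model to generate the next assistant turn.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

# Example multi-turn history: prior exchange plus a new user message.
history = [
    {"role": "user", "content": "Name a prime number."},
    {"role": "assistant", "content": "7 is a prime number."},
    {"role": "user", "content": "Name a larger one."},
]
prompt = format_llama3_chat(history)
```

The resulting string can be tokenized and passed to the model directly; using the tokenizer's built-in chat template is preferable in real code, since it stays in sync with the model's own configuration.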

- Model size: 8.03B params
- Tensor type: BF16
- Format: Safetensors
- Downloads last month: 14,333

Model tree for abacusai/Llama-3-Smaug-8B

- Adapters: 1 model
- Finetunes: 7 models
- Merges: 29 models
- Quantizations: 9 models
