# Llama-3-Galen-70B-v1

This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit). The merged checkpoint has 70.6B parameters and ships as BF16 safetensors.
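As a quick orientation, here is a minimal loading sketch with the Hugging Face `transformers` API. The prompt is illustrative, and whether this merge retains the Llama-3 chat template is an assumption worth verifying.

```python
# Minimal loading sketch. A 70.6B-parameter model needs roughly 141 GB
# in BF16, so this assumes a multi-GPU or large-memory host.
# device_map="auto" additionally requires the `accelerate` package.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "abhinand/Llama-3-Galen-70B-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the checkpoint's BF16 tensors
    device_map="auto",           # shard across available GPUs
)

# Assumes the merge kept the Llama-3 instruct chat template.
messages = [{"role": "user", "content": "List common causes of microcytic anemia."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```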

## Merge Details

### Merge Method

This model was merged using the DARE TIES merge method, with aaditya/Llama3-OpenBioLLM-70B as the base.
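The card does not list the models merged into the base, so the following is only a sketch of what a DARE-TIES merge looks like through mergekit's Python API. The donor model, `density`, and `weight` values are hypothetical placeholders, not the actual recipe: DARE randomly drops a fraction of each donor's delta weights (controlled by `density`) and rescales the survivors, then TIES resolves sign conflicts before adding the result to the base.

```python
# Hypothetical DARE-TIES merge sketch using mergekit's Python API.
# Only the base model comes from the card; everything else is a placeholder.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

CONFIG = """
merge_method: dare_ties
base_model: aaditya/Llama3-OpenBioLLM-70B     # base named in the card
models:
  - model: meta-llama/Meta-Llama-3-70B-Instruct  # hypothetical donor model
    parameters:
      density: 0.53   # fraction of delta weights DARE keeps (assumed)
      weight: 1.0     # scaling of the rescaled task vector (assumed)
dtype: bfloat16
"""

merge_config = MergeConfiguration.model_validate(yaml.safe_load(CONFIG))
run_merge(
    merge_config,
    out_path="./Llama-3-Galen-70B-v1",
    options=MergeOptions(copy_tokenizer=True, lazy_unpickle=True),
)
```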

## Evaluation

| Tasks | Version | Filter | n-shot | Metric | Value | Stderr |
|---|---|---|---|---|---|---|
| pubmedqa | 1 | none | 0 | acc | 0.7820 | ±0.0185 |
| professional_medicine | 0 | none | 0 | acc | 0.9375 | ±0.0147 |
| medical_genetics | 0 | none | 0 | acc | 0.9300 | ±0.0256 |
| college_medicine | 0 | none | 0 | acc | 0.8555 | ±0.0268 |
| college_biology | 0 | none | 0 | acc | 0.9375 | ±0.0202 |
| clinical_knowledge | 0 | none | 0 | acc | 0.9283 | ±0.0159 |
| anatomy | 0 | none | 0 | acc | 0.8444 | ±0.0313 |
| medqa_4options | Yaml | none | 0 | acc | 0.7777 | ±0.0117 |
| | | none | 0 | acc_norm | 0.7777 | ±0.0117 |
| medmcqa | Yaml | none | 0 | acc | 0.7423 | ±0.0068 |
| | | none | 0 | acc_norm | 0.7423 | ±0.0068 |

Average accuracy across the nine tasks: 0.8594
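The table follows lm-evaluation-harness output formatting. Below is a hedged reproduction sketch using the harness's v0.4+ Python API; the `mmlu_*`-prefixed task names are assumptions based on current harness naming and may not match the (unstated) harness version behind this table.

```python
# Sketch: re-running the benchmark suite with EleutherAI's lm-evaluation-harness.
# Task names are assumptions based on current (v0.4+) harness naming.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=abhinand/Llama-3-Galen-70B-v1,dtype=bfloat16",
    tasks=[
        "pubmedqa",
        "medqa_4options",
        "medmcqa",
        "mmlu_professional_medicine",
        "mmlu_medical_genetics",
        "mmlu_college_medicine",
        "mmlu_college_biology",
        "mmlu_clinical_knowledge",
        "mmlu_anatomy",
    ],
    num_fewshot=0,
    batch_size=1,
)
print(results["results"])
```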

