Everyone-LLM-7b-Base
EveryoneLLM is a series of models made by the community, for the community.
This is the first version of Everyone-LLM, a model that combines many of the most powerful fine-tuned LLMs made by the community into a single broad and knowledgeable model with a wide range of abilities.
Prompt template: Alpaca

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
```
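As a minimal sketch of how to apply this template with the transformers library (the repo id comes from the leaderboard table below; the generation settings are illustrative assumptions, not values prescribed by this card):

```python
# Minimal inference sketch for Everyone-LLM-7b-Base using the Alpaca template.
# Generation parameters here are illustrative assumptions, not from the model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "rombodawg/Everyone-LLM-7b-Base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

def alpaca_prompt(instruction: str) -> str:
    """Format a request using the Alpaca template shown above."""
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n### Response:\n"
    )

inputs = tokenizer(alpaca_prompt("Write a SQL query that counts rows per day."),
                   return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```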
The models used in this merge were as follows:

- cognitivecomputations/dolphin-2.6-mistral-7b-dpo
- jondurbin/bagel-dpo-7b-v0.4
- Locutusque/Hercules-2.0-Mistral-7B
- Open-Orca/Mistral-7B-OpenOrca
- teknium/OpenHermes-2.5-Mistral-7B
- NousResearch/Nous-Capybara-7B-V1.9
- Intel/neural-chat-7b-v3-3
- mistralai/Mistral-7B-Instruct-v0.2
- senseable/WestLake-7B-v2
- defog/sqlcoder-7b
- meta-math/MetaMath-Mistral-7B
- nextai-team/apollo-v1-7b
- WizardLM/WizardMath-7B-V1.1
- openchat/openchat-3.5-0106
Thank you to the creators of the above AI models; they have full credit for the EveryoneLLM series of models. Without their hard work we wouldn't be able to achieve the great success we have in the open source community. 💗
You can find the write-up on merging models here:
https://docs.google.com/document/d/1_vOftBnrk9NRk5h10UqrfJ5CDih9KBKL61yvrZtVWPE/edit?usp=sharing
Open LLM Leaderboard Scores
| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
|------------------------------------|---------|---------|-----------|---------|------------|------------|---------|
| rombodawg/Everyone-LLM-7b-Base | 70.21 | 66.38 | 86.02 | 64.94 | 57.89 | 80.43 | 65.58 |
The config for the merge can be found below:
```yaml
models:
  - model: cognitivecomputations_dolphin-2.6-mistral-7b-dpo
    parameters:
      weight: 1
  - model: jondurbin_bagel-dpo-7b-v0.4
    parameters:
      weight: 1
  - model: Locutusque_Hercules-2.0-Mistral-7B
    parameters:
      weight: 1
  - model: Open-Orca_Mistral-7B-OpenOrca
    parameters:
      weight: 1
  - model: teknium_OpenHermes-2.5-Mistral-7B
    parameters:
      weight: 1
  - model: NousResearch_Nous-Capybara-7B-V1.9
    parameters:
      weight: 1
  - model: Intel_neural-chat-7b-v3-3
    parameters:
      weight: 1
  - model: mistralai_Mistral-7B-Instruct-v0.2
    parameters:
      weight: 1
  - model: senseable_WestLake-7B-v2
    parameters:
      weight: 1
  - model: defog_sqlcoder-7b
    parameters:
      weight: 1
  - model: meta-math_MetaMath-Mistral-7B
    parameters:
      weight: 1
  - model: nextai-team_apollo-v1-7b
    parameters:
      weight: 1
  - model: WizardLM_WizardMath-7B-V1.1
    parameters:
      weight: 1
  - model: openchat_openchat-3.5-0106
    parameters:
      weight: 1
merge_method: task_arithmetic
base_model: mistralai_Mistral-7B-v0.1
parameters:
  normalize: true
  int8_mask: true
dtype: float16
```
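For context, `task_arithmetic` merging computes each fine-tune's weight delta (its "task vector") from the base model and adds the weighted sum of those deltas back onto the base; with `normalize: true`, the sum is rescaled by the total weight. A config like the one above is typically applied with mergekit's `mergekit-yaml` command (e.g. `mergekit-yaml config.yml ./merged-model`). The snippet below is a minimal illustrative sketch of that arithmetic on toy tensors, not mergekit's actual implementation; all names and values in it are assumptions for demonstration.

```python
# Illustrative sketch of task-arithmetic merging on a single toy tensor.
# NOT mergekit's implementation; it only demonstrates the formula
#   merged = base + sum_i(w_i * (model_i - base)) / sum_i(w_i)   # with normalize: true
import torch

torch.manual_seed(0)
base = torch.randn(4, 4)                       # stand-in for one base-model weight matrix
fine_tunes = [base + 0.1 * torch.randn(4, 4)   # stand-ins for fine-tuned checkpoints
              for _ in range(3)]
weights = [1.0, 1.0, 1.0]                      # per-model weights, as in the config above

# Each fine-tune contributes its delta (task vector) relative to the base.
task_vectors = [ft - base for ft in fine_tunes]
combined = sum(w * tv for w, tv in zip(weights, task_vectors))
merged = base + combined / sum(weights)        # normalize: true rescales by total weight

print(merged)
```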
Open LLM Leaderboard Evaluation Results

Detailed results can be found on the Open LLM Leaderboard.

| Metric                            | Value |
|-----------------------------------|-------|
| Avg.                              | 70.21 |
| AI2 Reasoning Challenge (25-Shot) | 66.38 |
| HellaSwag (10-Shot)               | 86.02 |
| MMLU (5-Shot)                     | 64.94 |
| TruthfulQA (0-shot)               | 57.89 |
| Winogrande (5-shot)               | 80.43 |
| GSM8k (5-shot)                    | 65.58 |