---
license: apache-2.0
base_model:
- BAAI/Infinity-Instruct-7M-Gen-mistral-7B
- Senseable/WestLake-7B-v2
- SanjiWatsuki/Kunoichi-DPO-v2-7B
- Gryphe/Tiamat-7b-1.1-DPO
- uukuguy/speechless-instruct-mistral-7b-v0.2
base_model_relation: merge
model-index:
- name: Proto-Athena-v0.2-4x7B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 37.52
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Jacoby746/Proto-Athena-v0.2-4x7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 30.34
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Jacoby746/Proto-Athena-v0.2-4x7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 5.14
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Jacoby746/Proto-Athena-v0.2-4x7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 6.49
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Jacoby746/Proto-Athena-v0.2-4x7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 10.96
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Jacoby746/Proto-Athena-v0.2-4x7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 24.41
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Jacoby746/Proto-Athena-v0.2-4x7B
      name: Open LLM Leaderboard
---
Test merge of 7B models for learning purposes.
**New in v0.2:**
I wanted to try a different gate type and bfloat16, along with more detailed prompting, to see if there's a noticeable difference.
**Description:**
This model is a merge of BAAI/Infinity-Instruct-7M-Gen-mistral-7B, SanjiWatsuki/Kunoichi-DPO-v2-7B, Gryphe/Tiamat-7b-1.1-DPO, Senseable/WestLake-7B-v2, and uukuguy/speechless-instruct-mistral-7b-v0.2.
This is the first model I've ever uploaded, and I wanted to learn more about the process. It was merged using mergekit-moe.
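The exact merge configuration isn't published in this card, so the snippet below is only a minimal sketch of what a mergekit-moe run like this can look like, assuming hidden-state gating, bfloat16, and Infinity-Instruct as the shared base. The expert order and `positive_prompts` are hypothetical placeholders, not the prompts actually used for this merge.

```python
# Hypothetical reconstruction of a mergekit-moe setup -- the real gating prompts
# and expert assignments for Proto-Athena were not published.
import subprocess

config = """\
base_model: BAAI/Infinity-Instruct-7M-Gen-mistral-7B
gate_mode: hidden          # route tokens by hidden-state similarity to the prompts below
dtype: bfloat16
experts:
  - source_model: Senseable/WestLake-7B-v2
    positive_prompts: ["creative writing", "roleplay"]
  - source_model: SanjiWatsuki/Kunoichi-DPO-v2-7B
    positive_prompts: ["conversation", "general assistance"]
  - source_model: Gryphe/Tiamat-7b-1.1-DPO
    positive_prompts: ["storytelling", "fantasy"]
  - source_model: uukuguy/speechless-instruct-mistral-7b-v0.2
    positive_prompts: ["follow instructions", "write code"]
"""

with open("proto-athena-moe.yaml", "w", encoding="utf-8") as f:
    f.write(config)

# mergekit-moe <config> <output_dir> assembles the 4x7B sparse-MoE checkpoint.
subprocess.run(["mergekit-moe", "proto-athena-moe.yaml", "./Proto-Athena-v0.2-4x7B"], check=True)
```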
Works up to 8k context, or 16k with 2.5x RoPE scaling.
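For the longer context, linear RoPE scaling divides the position index by the scale factor so that long sequences stay within the position range the model was trained on; most backends expose this as a "RoPE scale" setting. A small sketch of the idea (the `rope_angles` helper is made up for illustration and is not tied to any particular inference backend):

```python
import torch

def rope_angles(seq_len: int, head_dim: int, base: float = 10000.0, scale: float = 1.0) -> torch.Tensor:
    """Rotation angles for RoPE with linear ("positional interpolation") scaling."""
    inv_freq = 1.0 / (base ** (torch.arange(0, head_dim, 2).float() / head_dim))
    positions = torch.arange(seq_len).float() / scale  # compress positions by `scale`
    return torch.outer(positions, inv_freq)            # shape: (seq_len, head_dim // 2)

# With scale=2.5, position 16000 maps to 6400, inside the original ~8k training window.
angles = rope_angles(seq_len=16384, head_dim=128, scale=2.5)
```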
**Prompt template:** Custom format, or Alpaca
Alpaca:
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
```
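As an example, the Alpaca template can be filled in with the Transformers library like this (the instruction text and generation settings are placeholders, not recommended values):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Jacoby746/Proto-Athena-v0.2-4x7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Fill the Alpaca template shown above with a concrete instruction.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n"
    "Summarize the rules of chess in three sentences.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```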
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Jacoby746__Proto-Athena-v0.2-4x7B)
| Metric |Value|
|-------------------|----:|
|Avg. |19.14|
|IFEval (0-Shot) |37.52|
|BBH (3-Shot) |30.34|
|MATH Lvl 5 (4-Shot)| 5.14|
|GPQA (0-shot) | 6.49|
|MuSR (0-shot) |10.96|
|MMLU-PRO (5-shot) |24.41|