---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- chat
- qwen
- qwen2
- finetune
- chatml
- OpenHermes-2.5
- HelpSteer2
- Orca
- SlimOrca
base_model: Qwen/Qwen2-7B
datasets:
- nvidia/HelpSteer2
- teknium/OpenHermes-2.5
- microsoft/orca-math-word-problems-200k
- Open-Orca/SlimOrca
model_name: calme-2.7-qwen2-7b
pipeline_tag: text-generation
inference: false
model_creator: MaziyarPanahi
quantized_by: MaziyarPanahi
model-index:
- name: calme-2.7-qwen2-7b
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 35.92
      name: strict accuracy
    source:
      url: >-
        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=MaziyarPanahi/calme-2.7-qwen2-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 28.91
      name: normalized accuracy
    source:
      url: >-
        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=MaziyarPanahi/calme-2.7-qwen2-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 12.08
      name: exact match
    source:
      url: >-
        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=MaziyarPanahi/calme-2.7-qwen2-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 5.48
      name: acc_norm
    source:
      url: >-
        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=MaziyarPanahi/calme-2.7-qwen2-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 19.94
      name: acc_norm
    source:
      url: >-
        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=MaziyarPanahi/calme-2.7-qwen2-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 30.06
      name: accuracy
    source:
      url: >-
        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=MaziyarPanahi/calme-2.7-qwen2-7b
      name: Open LLM Leaderboard
---

# MaziyarPanahi/calme-2.7-qwen2-7b

This model is a fine-tuned version of [Qwen/Qwen2-7B](https://huggingface.co/Qwen/Qwen2-7B). It aims to improve the base model's performance across all benchmarks.
## Quantized GGUF

All GGUF quantizations of this model are available at [MaziyarPanahi/calme-2.7-qwen2-7b-GGUF](https://huggingface.co/MaziyarPanahi/calme-2.7-qwen2-7b-GGUF).
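As a rough sketch, a downloaded GGUF file can be run locally with `llama-cpp-python`. The filename and settings below are assumptions for illustration, not values from this card; use whichever quantization you actually downloaded.

```python
# Sketch: running a downloaded GGUF quantization with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="calme-2.7-qwen2-7b.Q4_K_M.gguf",  # assumed filename; substitute your download
    n_ctx=4096,            # illustrative context window
    chat_format="chatml",  # the model uses the ChatML prompt template
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Who are you?"}]
)
print(response["choices"][0]["message"]["content"])
```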
## Open LLM Leaderboard Evaluation Results

Detailed results can be found on the [Open LLM Leaderboard](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=MaziyarPanahi/calme-2.7-qwen2-7b).
| Metric              | Value |
|---------------------|------:|
| Avg.                | 22.07 |
| IFEval (0-Shot)     | 35.92 |
| BBH (3-Shot)        | 28.91 |
| MATH Lvl 5 (4-Shot) | 12.08 |
| GPQA (0-shot)       |  5.48 |
| MuSR (0-shot)       | 19.94 |
| MMLU-PRO (5-shot)   | 30.06 |
## Prompt Template

This model uses the ChatML prompt template:
```
<|im_start|>system
{System}
<|im_end|>
<|im_start|>user
{User}
<|im_end|>
<|im_start|>assistant
{Assistant}
```
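Assuming the tokenizer ships a chat template matching the ChatML format above (standard for Qwen2-based models), you can let `transformers` render the prompt instead of formatting it by hand; the system message below is just an example:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("MaziyarPanahi/calme-2.7-qwen2-7b")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},  # example system prompt
    {"role": "user", "content": "Who are you?"},
]

# Renders the ChatML prompt shown above, ending with the opening assistant tag.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```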
## How to use

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe = pipeline("text-generation", model="MaziyarPanahi/calme-2.7-qwen2-7b")
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("MaziyarPanahi/calme-2.7-qwen2-7b")
model = AutoModelForCausalLM.from_pretrained("MaziyarPanahi/calme-2.7-qwen2-7b")
```
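For completeness, here is a sketch of end-to-end generation with the directly loaded model. The dtype, device placement, and sampling settings are illustrative choices, not recommendations from this card.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "MaziyarPanahi/calme-2.7-qwen2-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # illustrative; use float16/float32 if bf16 is unsupported
    device_map="auto",           # requires the accelerate package
)

messages = [{"role": "user", "content": "Who are you?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Sampling settings are illustrative defaults, not tuned values.
outputs = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```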