---
license: other
license_name: tongyi-qianwen
license_link: https://huggingface.co/Qwen/Qwen2-7B/blob/main/LICENSE
language:
  - en
pipeline_tag: text-generation
tags:
  - chat
  - qwen
  - qwen2
  - finetune
  - chatml
  - OpenHermes-2.5
  - HelpSteer2
  - Orca
  - SlimOrca
library_name: transformers
inference: false
model_creator: MaziyarPanahi
quantized_by: MaziyarPanahi
base_model: Qwen/Qwen2-7B
model_name: Qwen2-7B-Instruct-v0.3
datasets:
  - nvidia/HelpSteer2
  - teknium/OpenHermes-2.5
  - microsoft/orca-math-word-problems-200k
  - Open-Orca/SlimOrca
---
# Qwen2 fine-tune

## MaziyarPanahi/Qwen2-7B-Instruct-v0.3

This is a fine-tuned version of the [Qwen/Qwen2-7B](https://huggingface.co/Qwen/Qwen2-7B) model. It aims to improve the base model across all benchmarks.

## ⚡ Quantized GGUF

All GGUF models are available here: MaziyarPanahi/Qwen2-7B-Instruct-v0.3
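
The GGUF files run with any llama.cpp-compatible tooling. Below is a minimal sketch using the `llama-cpp-python` bindings; the `-GGUF` repo id and the quantization filename are assumptions for illustration, so check the GGUF repository above for the actual files.

```python
# Minimal sketch (assumptions): the repo id and quantization filename
# below are illustrative -- verify them against the GGUF repository.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="MaziyarPanahi/Qwen2-7B-Instruct-v0.3-GGUF",  # assumed repo id
    filename="*Q4_K_M.gguf",  # assumed quant; glob matches a file in the repo
    n_ctx=4096,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Who are you?"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```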

πŸ† Open LLM Leaderboard Evaluation Results

Coming soon!

## Prompt Template

This model uses the ChatML prompt template:

```
<|im_start|>system
{System}
<|im_end|>
<|im_start|>user
{User}
<|im_end|>
<|im_start|>assistant
{Assistant}
```
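
With `transformers` you do not need to assemble this template by hand: the tokenizer ships a chat template that produces the same ChatML format. A quick way to verify:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("MaziyarPanahi/Qwen2-7B-Instruct-v0.3")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Who are you?"},
]

# add_generation_prompt=True appends the opening <|im_start|>assistant turn
# so the model continues as the assistant.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)  # prints the ChatML-formatted prompt shown above
```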

## How to use


```python
# Use a pipeline as a high-level helper
from transformers import pipeline

messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe = pipeline("text-generation", model="MaziyarPanahi/Qwen2-7B-Instruct-v0.3")
pipe(messages)
```
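
When given a list of chat messages like this, the pipeline applies the model's chat template automatically, so no manual ChatML formatting is needed.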


```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("MaziyarPanahi/Qwen2-7B-Instruct-v0.3")
model = AutoModelForCausalLM.from_pretrained("MaziyarPanahi/Qwen2-7B-Instruct-v0.3")
```
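
Loading the model directly gives you full control over generation. A minimal sketch that continues from the snippet above (the sampling settings are illustrative, not tuned recommendations):

```python
import torch

messages = [
    {"role": "user", "content": "Who are you?"},
]

# Build and tokenize a ChatML prompt with the tokenizer's chat template.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    output = model.generate(
        input_ids,
        max_new_tokens=256,
        do_sample=True,
        temperature=0.7,  # illustrative sampling settings
    )

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```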