Quantization made by Richard Erkhov.

# Dracarys2-72B-Instruct - GGUF

- Model creator: https://huggingface.co/abacusai/
- Original model: https://huggingface.co/abacusai/Dracarys2-72B-Instruct/

## Original model description
```yaml
language:
  - en
license: other
tags:
  - chat
license_name: tongyi-qianwen
license_link: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/blob/main/LICENSE
pipeline_tag: text-generation
```
# Dracarys2-72B-Instruct

## Introduction
We introduce the latest in the Smaug series: the Dracarys family of finetunes, which targets coding-performance improvements across a variety of base models. This variant is a finetune of Qwen2.5-72B-Instruct.

Compared to Qwen2.5-72B-Instruct, Dracarys achieves better LiveCodeBench scores (see the evaluation results below).
## Model Description

- Developed by: Abacus.AI
- License: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/blob/main/LICENSE
- Finetuned from model: Qwen2.5-72B-Instruct
## How to use

The prompt format is unchanged from Qwen2.5-72B-Instruct (see the evaluation results below for the prompt details used for LiveCodeBench).
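For reference, Qwen2.5 models use the ChatML format. A minimal sketch of what `apply_chat_template` renders for a simple two-message conversation (the message text here is illustrative; the structure is what matters):

```python
# ChatML-rendered prompt for Qwen2.5, as produced by
# apply_chat_template(..., tokenize=False, add_generation_prompt=True):
prompt = (
    "<|im_start|>system\n"
    "You are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\n"
    "Hello!<|im_end|>\n"
    "<|im_start|>assistant\n"
)
```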
### Use with transformers
See the snippet below for usage with Transformers:
```python
import transformers
import torch

model_id = "abacusai/Dracarys2-72B-Instruct"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a data science coding assistant that generates Python code using Pandas and Numpy."},
    {"role": "user", "content": "Write code to select rows from the dataframe `df` having the maximum `temp` for each `city`"},
]

# Render the conversation into a single prompt string using the model's
# built-in chat template.
prompt = pipeline.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)

# Stop generation at the tokenizer's EOS token or at Qwen2.5's
# end-of-turn token <|im_end|>.
terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|im_end|>"),
]

outputs = pipeline(
    prompt,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)

# Strip the prompt and print only the newly generated completion.
print(outputs[0]["generated_text"][len(prompt):])
```
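For reference, an idiomatic answer to the example prompt above might look like the following (a sketch of one possible solution, not actual model output):

```python
# For each city, select the row whose `temp` is maximal.
# idxmax() returns the index label of the max `temp` within each group.
df.loc[df.groupby("city")["temp"].idxmax()]
```

Since this repository hosts GGUF quantizations of the model, it can also be run with llama-cpp-python. A minimal sketch, assuming llama-cpp-python is installed; the filename below is a placeholder for whichever quantized file you download from this repo:

```python
from llama_cpp import Llama

# Path to a downloaded GGUF file from this repo (placeholder filename).
llm = Llama(
    model_path="Dracarys2-72B-Instruct.Q4_K_M.gguf",
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU if available
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a data science coding assistant that generates Python code using Pandas and Numpy."},
        {"role": "user", "content": "Write code to select rows from the dataframe `df` having the maximum `temp` for each `city`"},
    ],
    max_tokens=256,
    temperature=0.6,
    top_p=0.9,
)
print(response["choices"][0]["message"]["content"])
```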
## Evaluation Results

### LiveCodeBench

| Model | Code Generation | Code Execution (CoT) | Test Output Prediction |
|---|---|---|---|
| Dracarys2-72B-Instruct | 53.80 | 89.12 | 59.61 |
| Qwen2.5-72B-Instruct | 53.03 | 88.72 | 46.28 |

### Breakdown of LiveCodeBench Code Generation

| Model | Easy | Medium | Hard |
|---|---|---|---|
| Dracarys2-72B-Instruct | 88.79 | 50.28 | 9.47 |
| Qwen2.5-72B-Instruct | 86.99 | 49.59 | 9.99 |

### Breakdown of LiveCodeBench Test Output Prediction

| Model | Easy | Medium | Hard |
|---|---|---|---|
| Dracarys2-72B-Instruct | 79.25 | 53.76 | 37.63 |
| Qwen2.5-72B-Instruct | 68.43 | 39.46 | 22.22 |