AWQ quantization: INT4 GEMM quantization by stelterlab, using AutoAWQ by casper-hansen (https://github.com/casper-hansen/AutoAWQ/).
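For reference, here is a minimal sketch of how an INT4 GEMM export with AutoAWQ typically looks. The calibration data and exact settings used for this checkpoint are not documented here, so the output path and the `quant_config` values below are assumptions based on AutoAWQ's defaults:

```python
# pip install autoawq
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "open-r1/OlympicCoder-32B"   # source weights
quant_path = "OlympicCoder-32B-AWQ"       # output directory (assumed name)

# Typical AutoAWQ INT4 GEMM settings; the actual values used for this
# checkpoint may differ.
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)

# Calibrate, quantize, and save the INT4 checkpoint together with the tokenizer.
model.quantize(tokenizer, quant_config=quant_config)
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```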
Original weights by the open-r1 team. The original model card follows:
Model Card for OlympicCoder-32B
OlympicCoder-32B is a code model that achieves very strong performance on competitive coding benchmarks such as LiveCodeBench and the 2024 International Olympiad in Informatics.
Model description
- Model type: A 32B parameter model fine-tuned on a decontaminated version of the codeforces dataset.
- Language(s) (NLP): Primarily English
- License: apache-2.0
- Finetuned from model: Qwen/Qwen2.5-Coder-32B-Instruct
Evaluation
Usage
Here's how you can run the model using the pipeline() function from 🤗 Transformers:
# pip install transformers
# pip install accelerate
import torch
from transformers import pipeline
pipe = pipeline("text-generation", model="open-r1/OlympicCoder-32B", torch_dtype=torch.bfloat16, device_map="auto")
# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
{"role": "user", "content": "Write a python program to calculate the 10th Fibonacci number"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=8000, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
#<|im_start|>user
#Write a python program to calculate the 10th fibonacci number<|im_end|>
#<|im_start|>assistant
#<think>Okay, I need to write a Python program that calculates the 10th Fibonacci number. Hmm, the Fibonacci sequence starts with 0 and 1. Each subsequent number is the sum of the two preceding ones. So the sequence goes: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, and so on. ...
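The quantized checkpoint can be loaded like any other Transformers model, provided AutoAWQ is installed so the INT4 GEMM kernels are available. A minimal sketch, using the repository id stelterlab/OlympicCoder-32B-AWQ and generation settings chosen for illustration:

```python
# pip install transformers accelerate autoawq
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "stelterlab/OlympicCoder-32B-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# AWQ weights are stored in INT4; activations run in float16.
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

messages = [
    {"role": "user", "content": "Write a python program to calculate the 10th Fibonacci number"},
]
# Format the conversation with the tokenizer's chat template and tokenize it.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=1024, do_sample=True, temperature=0.7, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```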
Training procedure
Training hyperparameters
The following hyperparameters were used during training on 16 nodes of 8× H100 GPUs (128 devices in total; see the sketch after this list):
- dataset: open-r1/codeforces-cots_decontaminated
- learning_rate: 4.0e-5
- train_batch_size: 1
- seed: 42
- packing: false
- distributed_type: fsdp
- num_devices: 128
- gradient_accumulation_steps: 1
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_min_lr
- min_lr_rate: 0.1
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 10.0
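For illustration, here is a minimal sketch of how these values map onto a TRL SFTConfig. The actual run used the open-r1 training scripts with FSDP across 128 devices, so launch flags, dataset handling, and the fields marked as assumptions below may differ:

```python
# pip install trl datasets
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Dataset name taken from the hyperparameter list; split is an assumption.
dataset = load_dataset("open-r1/codeforces-cots_decontaminated", split="train")

args = SFTConfig(
    output_dir="olympiccoder-32b-sft",   # assumed output path
    learning_rate=4.0e-5,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=1,
    num_train_epochs=10.0,
    seed=42,
    packing=False,
    lr_scheduler_type="cosine_with_min_lr",
    lr_scheduler_kwargs={"min_lr_rate": 0.1},
    warmup_ratio=0.03,
    bf16=True,                           # assumed; typical for H100 training
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the default optimizer.
)

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-Coder-32B-Instruct",  # base model being fine-tuned
    args=args,
    train_dataset=dataset,
)
trainer.train()
```

In the real run, the total train batch size of 16 comes from 128 devices × per-device batch size 1 under FSDP sharding, launched via a distributed launcher rather than a single-process script like the one above.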
Model tree for stelterlab/OlympicCoder-32B-AWQ
- Base model: Qwen/Qwen2.5-32B
- Finetuned: Qwen/Qwen2.5-Coder-32B
- Finetuned: Qwen/Qwen2.5-Coder-32B-Instruct
- Finetuned: open-r1/OlympicCoder-32B