---
license: mit
tags:
  - code
pipeline_tag: text-generation
---

# Nxcode-CQ-7B-orpo

## Introduction

Nxcode-CQ-7B-orpo is an ORPO (Odds Ratio Preference Optimization) fine-tune of [Qwen/CodeQwen1.5-7B-Chat](https://huggingface.co/Qwen/CodeQwen1.5-7B-Chat) on 100k samples from our datasets.

- Strong code generation capabilities and competitive performance across a series of benchmarks
- Support for 92 coding languages
- Excellent performance in text-to-SQL, bug fixing, and similar tasks (see the illustrative text-to-SQL sketch after the Quickstart below)

## Evalplus

| EvalPlus   | pass@1 |
|------------|--------|
| HumanEval  | 86.0   |
| HumanEval+ | 81.1   |

We use a simple template to generate the solution for evalplus:

```
"Complete the following Python function:\n{prompt}"
```

### Evalplus Leaderboard

| Models                        | HumanEval | HumanEval+ |
|-------------------------------|-----------|------------|
| GPT-4-Turbo (April 2024)      | 90.2      | 86.6       |
| GPT-4 (May 2023)              | 88.4      | 81.17      |
| GPT-4-Turbo (Nov 2023)        | 85.4      | 79.3       |
| CodeQwen1.5-7B-Chat           | 83.5      | 78.7       |
| claude-3-opus (Mar 2024)      | 82.9      | 76.8       |
| DeepSeek-Coder-33B-instruct   | 81.1      | 75.0       |
| WizardCoder-33B-V1.1          | 79.9      | 73.2       |
| OpenCodeInterpreter-DS-33B    | 79.3      | 73.8       |
| speechless-codellama-34B-v2.0 | 77.4      | 72.0       |
| GPT-3.5-Turbo (Nov 2023)      | 76.8      | 70.7       |
| Llama3-70B-instruct           | 76.2      | 70.7       |

## Quickstart

Here we provide a code snippet using `apply_chat_template` to show you how to load the tokenizer and model and how to generate content. You should use transformers version 4.39 to avoid tokenizer loading errors.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "NTQAI/Nxcode-CQ-7B-orpo",
    torch_dtype="auto",
    device_map="auto"  # place the model on the available device(s) automatically
)
tokenizer = AutoTokenizer.from_pretrained("NTQAI/Nxcode-CQ-7B-orpo")

# Single-quoted triple quotes here keep the docstring's """ inside the prompt
# from terminating the string early.
prompt = '''Complete the following Python function:
from typing import List


def has_close_elements(numbers: List[float], threshold: float) -> bool:
    """ Check if in given list of numbers, are any two numbers closer to each other than
    given threshold.
    >>> has_close_elements([1.0, 2.0, 3.0], 0.5)
    False
    >>> has_close_elements([1.0, 2.8, 3.0, 4.0, 5.0, 2.0], 0.3)
    True
    """
'''
messages = [
    {"role": "user", "content": prompt}
]

inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
# Greedy decoding (do_sample=False); sampling parameters such as top_k/top_p
# would be ignored in this mode, so they are omitted.
outputs = model.generate(inputs, max_new_tokens=512, do_sample=False, eos_token_id=tokenizer.eos_token_id)
# Decode only the newly generated tokens, skipping the prompt.
res = tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True)
print(res)
```
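For the text-to-SQL use case mentioned in the introduction, the same chat template can be reused. This is a hedged sketch: the schema and question below are illustrative assumptions of ours, not an official example from the model card, and it reuses the `model` and `tokenizer` loaded above.

```python
# Hypothetical text-to-SQL prompt; the schema and question are illustrative
# assumptions, not from the model card. Reuses `model` and `tokenizer` above.
sql_prompt = """Given the table:
CREATE TABLE employees (id INT, name TEXT, salary INT, department TEXT);

Write a SQL query that returns the names of employees in the 'Sales'
department earning more than 50000."""

sql_messages = [{"role": "user", "content": sql_prompt}]
sql_inputs = tokenizer.apply_chat_template(sql_messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
sql_outputs = model.generate(sql_inputs, max_new_tokens=256, do_sample=False, eos_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(sql_outputs[0][len(sql_inputs[0]):], skip_special_tokens=True))
```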

## Contact information

For personal communication related to this project, please contact Nha Nguyen Van (nha.nguyen@ntq-solution.com.vn).