---
datasets:
- iamtarun/code_contest_python3_alpaca
language:
- en
metrics:
- code_eval
library_name: transformers
pipeline_tag: text-generation
tags:
- code
---
# Competitive Programming LLM for the Python Language
This model is a fine-tuned version of [codegen350M-mono](https://huggingface.co/Salesforce/codegen-350M-mono) on a cleaned coding-competition [dataset](https://huggingface.co/datasets/iamtarun/code_contest_python3_alpaca) that uses Alpaca-style prompts for training.
## Prompt function
```python
def generate_prompt(description, inputs, outputs):
    '''
    Generates a prompt from the problem description and sample input/output examples.
    @param description: str - problem description text
    @param inputs: list - sample input examples
    @param outputs: list - outputs corresponding to the inputs
    Precondition: len(inputs) == len(outputs)
    '''
    text = ("Below is a problem description that describes the problem. Write code in Python that appropriately solves the problem.\n\n"
            "### Description:\n"
            f"{description}\n\n")
    assert len(inputs) == len(outputs)
    # include at most the first two input/output examples
    c = 1
    for inp, out in zip(inputs, outputs):
        text += ("### Input:\n"
                 f"{inp}\n"
                 "### Output:\n"
                 f"{out}\n\n")
        c += 1
        if c > 2:
            break
    text += "### Code:\n"
    return text
```
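As a standalone illustration, calling `generate_prompt` on a hypothetical toy problem (not from the dataset) shows the resulting prompt layout; note that even when more examples are supplied, only the first two input/output pairs are kept:

```python
# Copy of generate_prompt repeated here so this snippet runs on its own.
def generate_prompt(description, inputs, outputs):
    text = ("Below is a problem description that describes the problem. "
            "Write code in Python that appropriately solves the problem.\n\n"
            "### Description:\n"
            f"{description}\n\n")
    assert len(inputs) == len(outputs)
    # include at most the first two input/output examples
    c = 1
    for inp, out in zip(inputs, outputs):
        text += ("### Input:\n"
                 f"{inp}\n"
                 "### Output:\n"
                 f"{out}\n\n")
        c += 1
        if c > 2:
            break
    text += "### Code:\n"
    return text

# Toy problem: three examples supplied, but only the first two appear in the prompt.
toy = generate_prompt("Print the sum of two integers.",
                      ["1 2\n", "3 4\n", "5 6\n"],
                      ["3\n", "7\n", "11\n"])
print(toy)
```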
## Usage
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# load model and tokenizer (replace model_path with this model's repo id or a local path)
model_path = "path/to/model"
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_path)

# put the model in inference mode
model.eval()

def pipe(prompt):
    '''
    Takes a text prompt built by the generate_prompt function and returns the generated response.
    @param prompt: str - text prompt generated using the generate_prompt function
    '''
    # move inputs to wherever device_map placed the model
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        output = model.generate(**inputs,
                                max_length=512,
                                do_sample=True,
                                temperature=0.5,
                                top_p=0.95,
                                repetition_penalty=1.15)
    return tokenizer.decode(output[0].tolist(),
                            skip_special_tokens=True,
                            clean_up_tokenization_spaces=False)
# generating code for a problem description
description = "Mr. Chanek has an integer represented by a string s. Zero or more digits have been erased and are denoted by the character _. There are also zero or more digits marked by the character X, meaning they're the same digit. Mr. Chanek wants to count the number of possible integer s, where s is divisible by 25. Of course, s must not contain any leading zero. He can replace the character _ with any digit. He can also replace the character X with any digit, but it must be the same for every character X. As a note, a leading zero is any 0 digit that comes before the first nonzero digit in a number string in positional notation. For example, 0025 has two leading zeroes. An exception is the integer zero, (0 has no leading zero, but 0000 has three leading zeroes). Input One line containing the string s (1 ≤ |s| ≤ 8). The string s consists of the characters 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, _, and X. Output Output an integer denoting the number of possible integer s. Examples Input 25 Output 1 Input _00 Output 9 Input _XX Output 9 Input 0 Output 1 Input 0_25 Output 0 Note In the first example, the only possible s is 25. In the second and third example, s ∈ \{100, 200,300,400,500,600,700,800,900\}. In the fifth example, all possible s will have at least one leading zero."
inputs = ["0\n", "_XX\n", "_00\n", "0_25\n"]
outputs = ["1\n", "9\n", "9\n", "0\n"]
prompt = generate_prompt(description, inputs, outputs)
print(pipe(prompt))
print("\n", "="*100, "\n")
```
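Because decoding the full output sequence repeats the prompt before the solution, a small post-processing step can isolate the generated code. This is a sketch under that assumption (`extract_code` and the sample string are illustrative, not part of the original card): it keeps only the text after the final `### Code:` marker.

```python
def extract_code(generated: str) -> str:
    '''Return only the portion of the model output after the last "### Code:" marker.'''
    marker = "### Code:"
    # rsplit handles the (unlikely) case where the marker appears in the description too
    return generated.rsplit(marker, 1)[-1].strip()

# Illustrative model output: prompt echoed back, followed by the solution.
sample = ("### Description:\nPrint the sum of two integers.\n\n"
          "### Input:\n1 2\n### Output:\n3\n\n"
          "### Code:\na, b = map(int, input().split())\nprint(a + b)")
print(extract_code(sample))
```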