---
datasets:
- iamtarun/python_code_instructions_18k_alpaca
language:
- en
metrics:
- code_eval
library_name: transformers
pipeline_tag: text-generation
tags:
- code
widget:
- text: 'def isprime(num):'
  example_title: Code Example 1
- text: 'def factorial(num):'
  example_title: Code Example 2
- text: 'def square(num):'
  example_title: Code Example 3
---

# Competitive Programming LLM for Python Language

This model is a fine-tuned version of [codegen-350M-mono](https://huggingface.co/Salesforce/codegen-350M-mono), trained on a [Python code instructions dataset](https://huggingface.co/datasets/iamtarun/python_code_instructions_18k_alpaca) using Alpaca-style prompts.

## Prompt function

```python
def generate_prompt(instruction, inputs=""):
    '''
    This function generates prompts using the problem description and input.

    @param1 instruction: str - text problem description
    @param2 inputs: str - input to the program
    '''
    text = ("Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n"
            "### Instruction:\n"
            f"{instruction}\n\n"
            "### Input:\n"
            f"{inputs}\n\n"
            "### Output:\n")
    return text
```
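
Calling it with the example task used later in this card produces a prompt like the following (the instruction and input strings here are only illustrative):

```python
prompt = generate_prompt(
    "Write a function to calculate square of a number in python",
    "number = 5",
)
print(prompt)
# Below is an instruction that describes a task. Write a response that appropriately completes the request.
#
# ### Instruction:
# Write a function to calculate square of a number in python
#
# ### Input:
# number = 5
#
# ### Output:
```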

## Usage

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# load model and tokenizer
model = AutoModelForCausalLM.from_pretrained("iamtarun/codegen-350M-mono-4bit-qlora", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("iamtarun/codegen-350M-mono-4bit-qlora")

# put the model in inference mode
model.eval()


def pipe(prompt):
    '''
    This function takes a text prompt generated by generate_prompt and returns the generated response.

    @param1 prompt: str - text prompt generated using generate_prompt function.
    '''
    # move the tokenized prompt to the same device as the model
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        output = model.generate(**inputs,
                                max_length=512,
                                do_sample=True,
                                temperature=0.5,
                                top_p=0.95,
                                repetition_penalty=1.15)
    return tokenizer.decode(output[0],
                            skip_special_tokens=True,
                            clean_up_tokenization_spaces=False)


# generating code for a problem description
instruction = "Write a function to calculate square of a number in python"
inputs = "number = 5"
prompt = generate_prompt(instruction, inputs)
print(pipe(prompt))
print("\n", "=" * 100)
```
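
The decoded string contains the full prompt as well as the completion. If you only want the generated code, you can split on the `### Output:` marker that `generate_prompt` appends; a minimal sketch, assuming the marker appears exactly once in the generated text:

```python
def extract_output(generated: str) -> str:
    # keep only the text that follows the "### Output:" section header
    return generated.split("### Output:", 1)[-1].strip()

print(extract_output(pipe(prompt)))
```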