Update README.md
README.md
CHANGED

---
language:
- en
- code
license: bigcode-openrail-m
tags:
- codellama
- code_synthesis
- competition-level_code_generation
datasets:
- BAAI/TACO
---
# CodeLlama-7b-Python-taco

## Model Description

CodeLlama-7B-Python-TACO is CodeLlama-7b-Python fine-tuned on the TACO dataset. The model is specialized for solving competition-level programming tasks.
## Training data

The model is trained on the [Topics in Algorithmic Code Generation (TACO) dataset](https://github.com/FlagOpen/TACO). The dataset focuses on algorithmic code generation and aims to provide a more challenging training set and evaluation benchmark for code generation models. It includes 25,443 problems in the training set and 1,000 problems in the test set, making it the largest code generation dataset currently available. Each TACO problem is paired with a diverse set of solution answers, with answers reaching sizes of up to 1.55M, to ensure that models trained on this dataset are robust and not prone to overfitting. Furthermore, TACO includes fine-grained labels such as task topics, algorithms, skills, and difficulty levels, offering more precise guidance for both training and evaluating code generation models.

This model is fine-tuned on the train split of TACO.
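
If you want to inspect the data the model was fine-tuned on, the train split can be loaded from the Hub with the `datasets` library. This is a minimal sketch, assuming the default configuration of the `BAAI/TACO` dataset card; depending on the dataset version you may need to pass `trust_remote_code=True`, and the exact field names should be checked on the dataset card:

```py
from datasets import load_dataset

# Load the TACO train split from the Hugging Face Hub (the split used for fine-tuning).
taco_train = load_dataset("BAAI/TACO", split="train")

print(len(taco_train))       # number of training problems
print(taco_train[0].keys())  # inspect the available fields for one problem
```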

## Training procedure

The training script used to train this model can be found [here](https://github.com/FlagOpen/TACO/blob/main/train.py).

Training details can be found in our [paper](https://arxiv.org/abs/2312.14852).

## Intended Use and Limitations

The model is fine-tuned to solve programming problems given a text description and optional starter code.

### How to use

You can use this model directly for code generation with the Transformers library. Because sampling is enabled, this example generates a different sequence each time it is run:

```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"

# Load the fine-tuned model and its tokenizer from the Hub
model = AutoModelForCausalLM.from_pretrained("FlagOpen/CodeLlama-7b-Python-taco").to(device)
tokenizer = AutoTokenizer.from_pretrained("FlagOpen/CodeLlama-7b-Python-taco")

# Problem description followed by optional starter code
prompt = """
A function to greet user. Given a user name it should say hello
def greet(name):
ANSWER:
"""

input_ids = tokenizer(prompt, return_tensors='pt').input_ids.to(device)
start = input_ids.size(1)

# Beam-sample a short completion and decode only the newly generated tokens
out = model.generate(input_ids, do_sample=True, max_length=50, num_beams=2,
                     early_stopping=True, eos_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(out[0][start:]))
```
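
The same checkpoint can also be wrapped in the Transformers `text-generation` pipeline. This is a minimal sketch under the same assumptions as above; the sampling settings are illustrative, not the settings used in our experiments:

```py
from transformers import pipeline

# Text-generation pipeline built on the same checkpoint (pass device=0 to run on GPU)
generator = pipeline("text-generation", model="FlagOpen/CodeLlama-7b-Python-taco")

prompt = "A function to greet user. Given a user name it should say hello\ndef greet(name):\nANSWER:\n"
print(generator(prompt, do_sample=True, max_new_tokens=50)[0]["generated_text"])
```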

### Limitations and Biases

The model is intended to be used only for research purposes and comes with no guarantee of the quality of the generated code.

## Eval results

Coming soon...