Spaces:
Runtime error
IntelligenzaArtificiale
committed on
Commit • 1e7d2f0
1 Parent(s): f8b2586
Delete evaluation
Browse files
- evaluation/demo_humaneval.md +0 -55
- evaluation/eval_table.md +0 -26
- evaluation/intro.md +0 -7
- evaluation/problem.md +0 -9
- evaluation/solution.md +0 -13
evaluation/demo_humaneval.md
DELETED
@@ -1,55 +0,0 @@
We can load the HumanEval dataset and the pass@k metric from 🤗 [`datasets`](https://huggingface.co/docs/datasets/index) and 🤗 [`evaluate`](https://huggingface.co/docs/evaluate/index):

```python
from datasets import load_dataset
from evaluate import load

human_eval = load_dataset("openai_humaneval")
code_eval_metric = load("code_eval")
```
We can easily compute pass@k for a problem that asks for the implementation of a function that sums two integers:

```python
test_cases = ["assert add(2,3)==5"]
candidates = [["def add(a,b): return a*b", "def add(a, b): return a+b"]]
pass_at_k, results = code_eval_metric.compute(references=test_cases, predictions=candidates, k=[1, 2])
print(pass_at_k)
# {'pass@1': 0.5, 'pass@2': 1.0}
```
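To run the metric on the full benchmark rather than a toy problem, the references can be built from each problem's unit tests and entry point. This is a rough sketch assuming the standard `openai_humaneval` fields (`prompt`, `test`, `entry_point`) and model-generated candidates:

```python
# References: the unit tests of each HumanEval problem, followed by a call
# to the `check` function they define.
references = [
    problem["test"] + f"\ncheck({problem['entry_point']})"
    for problem in human_eval["test"]
]

# `predictions` would hold the model completions: one list of generated
# samples per problem, each concatenated with the problem prompt.
# pass_at_k, results = code_eval_metric.compute(
#     references=references, predictions=predictions, k=[1, 10, 100]
# )
```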
To better understand how the pass@k metric works, we will illustrate it with a concrete example from the HumanEval dataset. We select the problem below and see how CodeParrot 🦜 (110M) performs and which code completions pass the unit tests:

**Problem:**

```python
def truncate_number(number: float) -> float:
    """ Given a positive floating point number, it can be decomposed into
    and integer part (largest integer smaller than given number) and decimals
    (leftover part always smaller than 1).

    Return the decimal part of the number.
    >>> truncate_number(3.5)
    0.5
    """
```
Instead of 200 candidate solutions, we will only generate 20 samples for illustration purposes. We use nucleus sampling (top-p) with `p=0.95` and `temperature=0.2`, and sample tokens from the model until we encounter a stop sequence indicating the end of a method: `'\nclass'`, `'\ndef'`, `'\n#'`, `'\nif'`, or `'\nprint'`. For more details about decoding strategies for language generation, we recommend this [blog](https://huggingface.co/blog/how-to-generate).
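As a rough sketch of such a sampling setup (not the exact script behind these results; the checkpoint name and the generation budget are assumptions), the 20 candidates could be generated with 🤗 `transformers` as follows:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

checkpoint = "codeparrot/codeparrot-small"  # assumed checkpoint for CodeParrot (110M)
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

# Assumes HumanEval/2 is the truncate_number problem shown above.
prompt = human_eval["test"][2]["prompt"]

stop_words = ["\nclass", "\ndef", "\n#", "\nif", "\nprint"]

def truncate_at_stop(completion: str) -> str:
    # Keep only the text before the first stop sequence, if one appears.
    for stop in stop_words:
        completion = completion.split(stop)[0]
    return completion

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    do_sample=True,
    top_p=0.95,
    temperature=0.2,
    max_new_tokens=256,        # assumed budget per completion
    num_return_sequences=20,   # 20 candidate solutions
    pad_token_id=tokenizer.eos_token_id,
)
prompt_len = inputs["input_ids"].shape[1]
candidates = [
    truncate_at_stop(tokenizer.decode(seq[prompt_len:], skip_special_tokens=True))
    for seq in outputs
]
```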
**Remark**:

Regarding the temperature parameter, the authors of the [Codex](https://arxiv.org/pdf/2107.03374.pdf) paper observed that the best-performing temperature increases as the number of permitted samples k increases. Similar behavior was also observed in [CodeGen](https://arxiv.org/pdf/2203.13474.pdf). When a model is only allowed a few samples to pass the unit tests, it is beneficial to use a low temperature, so that the learned distribution selects candidates that are likely to pass. But when a model is allowed more chances with a high k, a higher sampling temperature flattens the learned distribution, letting the model explore diverse samples and thus increasing the chance of synthesizing a correct program.
For our experiment, we compute pass@1, pass@10 and pass@20, each corresponding to the unit test pass rate when selecting respectively 1, 10 and 20 samples from the candidate solutions.

```
Results: {'pass@1': 0.1, 'pass@10': 0.7631, 'pass@20': 1.0}
```
If we take a closer look at the unit test results for each candidate solution, we find that 2 passed the unit tests. This means that we have 2 correct solutions among 20, which corresponds to our pass@1 value `2/20 = 0.1`. The scores pass@10 and pass@20 are higher, because the more samples we select from the candidate completions, the more likely we are to include the correct implementation. As for pass@20, it is `1`, since if we select all 20 candidates the problem gets solved, which gives a 100% success rate.
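These values follow from the unbiased pass@k estimator of the Codex paper: with n generated samples per problem, of which c are correct, pass@k = 1 - C(n-c, k)/C(n, k), averaged over problems. A minimal sketch for this single problem (n = 20, c = 2) reproduces the numbers above:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    # Probability that at least one of k samples drawn without replacement
    # from the n generated samples passes the unit tests.
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

n, c = 20, 2  # 20 generated samples, 2 pass the unit tests
for k in [1, 10, 20]:
    print(f"pass@{k} ≈ {pass_at_k(n, c, k):.3f}")
# pass@1 ≈ 0.100
# pass@10 ≈ 0.763
# pass@20 ≈ 1.000
```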
evaluation/eval_table.md
DELETED
@@ -1,26 +0,0 @@
Table 1 below shows the HumanEval scores of CodeParrot, InCoder, PolyCoder, CodeGen and Codex (not open-source).

<div align="center">

| Model | pass@1 | pass@10 | pass@100 |
|-------|--------|---------|----------|
| CodeParrot (110M) | 3.80% | 6.57% | 12.78% |
| CodeParrot (1.5B) | 3.58% | 8.03% | 14.96% |
| CodeParrot (1.5B) | 3.99% | 8.69% | 17.88% |
| | | | |
| InCoder (6.7B) | 15.2% | 27.8% | 47.00% |
| | | | |
| PolyCoder (160M) | 2.13% | 3.35% | 4.88% |
| PolyCoder (400M) | 2.96% | 5.29% | 11.59% |
| PolyCoder (2.7B) | 5.59% | 9.84% | 17.68% |
| | | | |
| CodeGen-Mono (350M) | 12.76% | 23.11% | 35.19% |
| CodeGen-Mono (2.7B) | 23.70% | 36.64% | 57.01% |
| CodeGen-Mono (6.1B) | 26.13% | 42.29% | 65.82% |
| CodeGen-Mono (16.1B) | **29.28%** | **49.86%** | **75.00%** |
| | | | |
| Codex (25M) | 3.21% | 7.1% | 12.89% |
| Codex (300M) | 13.17% | 20.37% | 36.27% |
| Codex (12B) | 28.81% | 46.81% | 72.31% |

</div>
evaluation/intro.md
DELETED
@@ -1,7 +0,0 @@
A natural way to evaluate code programs is to check whether they pass unit tests. This is the idea behind the [pass@k](https://huggingface.co/metrics/code_eval) metric, a popular evaluation framework for code generation models, used on the [HumanEval](https://huggingface.co/datasets/openai_humaneval) dataset introduced in the [Codex paper](https://arxiv.org/pdf/2107.03374v2.pdf). The dataset includes 164 handwritten programming problems. For pass@k, k code samples are generated per problem, a problem is counted as solved if any sample passes the unit tests, and the total fraction of problems solved is reported.
In most papers, 200 candidate program completions are sampled, and pass@1, pass@10, and pass@100 are computed with an unbiased sampling estimator.
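For reference, that estimator (as defined in the Codex paper) is, for a problem with n generated samples of which c pass the unit tests:

$$\text{pass@}k = \mathbb{E}_{\text{problems}}\left[1 - \frac{\binom{n-c}{k}}{\binom{n}{k}}\right]$$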
This plot shows the pass@100 by model size for CodeParrot, InCoder, PolyCoder, CodeGen and Codex (not open-source):
<p align="center">
<img src="https://huggingface.co/datasets/loubnabnl/repo-images/resolve/main/pass@100_figure.png" alt="drawing" width="550"/>
</p>
evaluation/problem.md
DELETED
@@ -1,9 +0,0 @@
```python
def truncate_number(number: float) -> float:
    """ Given a positive floating point number, it can be decomposed into
    and integer part (largest integer smaller than given number) and decimals
    (leftover part always smaller than 1).
    Return the decimal part of the number.
    >>> truncate_number(3.5)
    0.5
    """
```
evaluation/solution.md
DELETED
@@ -1,13 +0,0 @@
```python
def truncate_number(number: float) -> float:
    """ Given a positive floating point number, it can be decomposed into
    and integer part (largest integer smaller than given number) and decimals
    (leftover part always smaller than 1).

    Return the decimal part of the number.
    >>> truncate_number(3.5)
    0.5
    """
    return number % 1
```
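As a quick sanity check (a sketch: the single assert below is a simplified stand-in for the full HumanEval unit tests for this problem), the canonical solution can be run through the same `code_eval` metric:

```python
import os
from evaluate import load

# code_eval executes untrusted code, so it requires an explicit opt-in.
os.environ["HF_ALLOW_CODE_EVAL"] = "1"

code_eval_metric = load("code_eval")

candidate = "def truncate_number(number: float) -> float:\n    return number % 1\n"
test = "assert truncate_number(3.5) == 0.5"

pass_at_k, results = code_eval_metric.compute(
    references=[test], predictions=[[candidate]], k=[1]
)
print(pass_at_k)  # {'pass@1': 1.0}
```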