skgouda committed · Commit d12e2aa · 1 Parent(s): b6b3b85

Update README.md

---
dataset_info:
  features:
  - name: task_id
    dtype: string
  - name: language
    dtype: string
  - name: prompt
    dtype: string
  - name: test
    dtype: string
  - name: entry_point
    dtype: string
  splits:
  - name: humanevalx_python
    num_bytes: 165716
    num_examples: 164
  download_size: 67983
  dataset_size: 165716
license: apache-2.0
task_categories:
- text-generation
tags:
- code-generation
- multi-humaneval
- humaneval
pretty_name: multi-humaneval
language:
- en
---

# Multi-HumanEval

## Table of Contents
- [multi-humaneval](#multi-humaneval)
  - [Table of Contents](#table-of-contents)
  - [Dataset Description](#dataset-description)
    - [Dataset Summary](#dataset-summary)
    - [Related Tasks and Leaderboards](#related-tasks-and-leaderboards)
    - [Languages](#languages)
  - [Dataset Structure](#dataset-structure)
    - [Data Instances](#data-instances)
    - [Data Fields](#data-fields)
    - [Data Splits](#data-splits)
  - [Dataset Creation](#dataset-creation)
    - [Curation Rationale](#curation-rationale)
    - [Personal and Sensitive Information](#personal-and-sensitive-information)
  - [Considerations for Using the Data](#considerations-for-using-the-data)
    - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Additional Information](#additional-information)
    - [Dataset Curators](#dataset-curators)
    - [Licensing Information](#licensing-information)
    - [Citation Information](#citation-information)
    - [Contributions](#contributions)

# multi-humaneval

## Dataset Description

- **Repository:** [GitHub Repository](https://github.com/amazon-science/mbxp-exec-eval)
- **Paper:** [Multi-lingual Evaluation of Code Generation Models](https://openreview.net/forum?id=Bo7eeXm6An8)

### Dataset Summary

This repository contains data and code to perform execution-based multi-lingual evaluation of code generation capabilities, namely the multi-lingual benchmarks MBXP, multi-lingual MathQA, and multi-lingual HumanEval.
Results and findings can be found in the paper ["Multi-lingual Evaluation of Code Generation Models"](https://arxiv.org/abs/2210.14868).
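
The paper reports execution-based pass@k. As a sketch only (the estimator below is the standard unbiased pass@k formula popularized by the original HumanEval benchmark, not code from this repository): given `n` generated samples per problem of which `c` pass the tests, estimate the probability that at least one of `k` samples passes.

```python
# Unbiased pass@k estimator for one problem: 1 - C(n-c, k) / C(n, k).
# Illustrative sketch; function name and layout are our own, not the repo's API.
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """n = samples generated, c = samples that passed, k = budget."""
    if n - c < k:
        # Fewer than k failing samples exist, so any k-subset contains a pass.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# e.g. 10 samples with 3 correct gives pass@1 = 3/10
print(pass_at_k(10, 3, 1))
```

Averaging this quantity over all problems in a split yields the benchmark-level pass@k.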

### Related Tasks and Leaderboards
* [Multi-HumanEval](https://huggingface.co/datasets/mxeval/multi-humaneval)
* [MBXP](https://huggingface.co/datasets/mxeval/mbxp)
* [MathQAX](https://huggingface.co/datasets/mxeval/mathqax)

### Languages
The programming problems are written in multiple programming languages and contain English natural text in comments and docstrings.

## Dataset Structure
To look up the currently supported dataset configurations:
```python
from datasets import get_dataset_config_names

get_dataset_config_names("amazon/multi-humaneval")
# ['python', 'csharp', 'go', 'java', 'javascript', 'kotlin', 'perl', 'php', 'ruby', 'scala', 'swift', 'typescript']
```
To load a specific dataset and language:
```python
from datasets import load_dataset

load_dataset("amazon/multi-humaneval", "python")
# DatasetDict({
#     test: Dataset({
#         features: ['task_id', 'language', 'prompt', 'test', 'entry_point', 'canonical_solution', 'description'],
#         num_rows: 164
#     })
# })
```

### Data Instances

An example of a dataset instance:

```python
{
    "task_id": "HumanEval/0",
    "language": "python",
    "prompt": "from typing import List\n\n\ndef has_close_elements(numbers: List[float], threshold: float) -> bool:\n \"\"\" Check if in given list of numbers, are any two numbers closer to each other than\n given threshold.\n >>> has_close_elements([1.0, 2.0, 3.0], 0.5)\n False\n >>> has_close_elements([1.0, 2.8, 3.0, 4.0, 5.0, 2.0], 0.3)\n True\n \"\"\"\n",
    "test": "\n\nMETADATA = {\n \"author\": \"jt\",\n \"dataset\": \"test\"\n}\n\n\ndef check(candidate):\n assert candidate([1.0, 2.0, 3.9, 4.0, 5.0, 2.2], 0.3) == True\n assert candidate([1.0, 2.0, 3.9, 4.0, 5.0, 2.2], 0.05) == False\n assert candidate([1.0, 2.0, 5.9, 4.0, 5.0], 0.95) == True\n assert candidate([1.0, 2.0, 5.9, 4.0, 5.0], 0.8) == False\n assert candidate([1.0, 2.0, 3.0, 4.0, 5.0, 2.0], 0.1) == True\n assert candidate([1.1, 2.2, 3.1, 4.1, 5.1], 1.0) == True\n assert candidate([1.1, 2.2, 3.1, 4.1, 5.1], 0.5) == False\n\n",
    "entry_point": "has_close_elements",
    "canonical_solution": " for idx, elem in enumerate(numbers):\n for idx2, elem2 in enumerate(numbers):\n if idx != idx2:\n distance = abs(elem - elem2)\n if distance < threshold:\n return True\n\n return False\n",
    "description": "Check if in given list of numbers, are any two numbers closer to each other than\n given threshold.\n >>> has_close_elements([1.0, 2.0, 3.0], 0.5)\n False\n >>> has_close_elements([1.0, 2.8, 3.0, 4.0, 5.0, 2.0], 0.3)\n True"
}
```

### Data Fields

- `task_id`: identifier for the data sample
- `prompt`: input for the model, containing the function header and docstring
- `canonical_solution`: solution for the problem posed in the `prompt`
- `description`: task description
- `test`: contains the function used to test generated code for correctness
- `entry_point`: entry point for the test
- `language`: programming language identifier used to invoke the appropriate subprocess for program execution
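
The fields above fit together as prompt + completion + test: the model completes the `prompt`, and the `test` code calls `check` on the function named by `entry_point`. A minimal sketch for a Python sample, assuming the record layout shown above (the `passes` harness and the toy `sample` are illustrative, not the official mxeval runner, and real use requires a sandbox):

```python
# Hypothetical record in this dataset's field layout, kept tiny for illustration.
sample = {
    "prompt": "def add(a, b):\n    \"\"\"Return the sum of a and b.\"\"\"\n",
    "canonical_solution": "    return a + b\n",
    "test": "def check(candidate):\n    assert candidate(1, 2) == 3\n",
    "entry_point": "add",
}

def passes(sample: dict, completion: str) -> bool:
    """Concatenate prompt + completion + test, exec it, and run check(entry_point)."""
    program = sample["prompt"] + completion + "\n" + sample["test"]
    env = {}
    try:
        exec(program, env)  # never do this on untrusted code outside a sandbox
        env["check"](env[sample["entry_point"]])
        return True
    except Exception:
        return False

print(passes(sample, sample["canonical_solution"]))  # the reference solution passes
```

Counting how many completions pass per task is what feeds the pass@k metric.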

### Data Splits

- Multi-HumanEval
  - Python
  - CSharp
  - Go
  - Java
  - JavaScript
  - Kotlin
  - Perl
  - PHP
  - Ruby
  - Scala
  - Swift
  - TypeScript

## Dataset Creation

### Curation Rationale

Since code generation models are often trained on dumps of GitHub, a dataset not included in those dumps was necessary to evaluate the models properly. However, since this dataset has now been published on GitHub, it is likely to be included in future dumps.

### Personal and Sensitive Information

None.

## Considerations for Using the Data
Make sure to sandbox the execution environment.
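
One common first isolation step, sketched under the assumption that candidate programs are Python (the helper below is our own illustration, not part of this repository, and it is *not* a full sandbox: it bounds runtime but does not restrict filesystem or network access — use containers or OS-level sandboxing for untrusted code):

```python
# Run a candidate program in a separate subprocess with a hard timeout.
import os
import subprocess
import sys
import tempfile

def run_isolated(program: str, timeout_s: float = 5.0) -> bool:
    """Return True if the program exits cleanly within the timeout."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(program)
        path = f.name
    try:
        result = subprocess.run(
            [sys.executable, path],
            capture_output=True,
            timeout=timeout_s,
        )
        return result.returncode == 0
    except subprocess.TimeoutExpired:
        # Infinite loops and runaway code are killed, not left running.
        return False
    finally:
        os.remove(path)

print(run_isolated("print('ok')"))  # True
```

A crashed or timed-out candidate simply counts as a failed sample.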

### Social Impact of Dataset
With this dataset, code-generating models can be evaluated more reliably, which leads to fewer issues being introduced when such models are used.

## Additional Information

### Dataset Curators
AWS AI Labs

### Licensing Information

[LICENSE](https://huggingface.co/datasets/mxeval/multi-humaneval/blob/main/multi-humaneval-LICENSE) <br>
[THIRD PARTY LICENSES](https://huggingface.co/datasets/mxeval/multi-humaneval/blob/main/THIRD_PARTY_LICENSES)

### Citation Information
```
@inproceedings{
  athiwaratkun2023multilingual,
  title={Multi-lingual Evaluation of Code Generation Models},
  author={Ben Athiwaratkun and Sanjay Krishna Gouda and Zijian Wang and Xiaopeng Li and Yuchen Tian and Ming Tan and Wasi Uddin Ahmad and Shiqi Wang and Qing Sun and Mingyue Shang and Sujan Kumar Gonugondla and Hantian Ding and Varun Kumar and Nathan Fulton and Arash Farahani and Siddhartha Jain and Robert Giaquinto and Haifeng Qian and Murali Krishna Ramanathan and Ramesh Nallapati and Baishakhi Ray and Parminder Bhatia and Sudipta Sengupta and Dan Roth and Bing Xiang},
  booktitle={The Eleventh International Conference on Learning Representations},
  year={2023},
  url={https://openreview.net/forum?id=Bo7eeXm6An8}
}
```

### Contributions

[skgouda@](https://github.com/sk-g) [benathi@](https://github.com/benathi)