---
annotations_creators: []
language_creators:
- crowdsourced
- expert-generated
language:
- code
license:
- other
multilinguality:
- multilingual
pretty_name: xlcost-text-to-code
size_categories:
- unknown
source_datasets: []
task_categories:
- sequence-modeling
task_ids:
- language-modeling
---

# XLCost for text-to-code synthesis

## Dataset Description
This is a subset of the [XLCoST benchmark](https://github.com/reddy-lab-code-research/XLCoST) for text-to-code generation at snippet level and program level in **7** programming languages: `Python, C, C#, C++, Java, Javascript and PHP`.

## Languages

The dataset contains English text paired with its corresponding code. Each program is divided into several code snippets: the snippet-level subsets contain these snippets with their corresponding comments, while in the program-level subsets the comments are concatenated into one long description. Moreover, programs in all the languages are aligned at the snippet level, and the comment for a given snippet is the same across all the languages.

## Dataset Structure
To load the dataset you need to specify one of the **14 existing subsets**: `LANGUAGE-snippet-level`/`LANGUAGE-program-level` for `LANGUAGE` in `[Python, C, Csharp, C++, Java, Javascript, PHP]`. By default `Python-snippet-level` is loaded.

```python
from datasets import load_dataset
data = load_dataset("codeparrot/xlcost-text-to-code", "Python-program-level")

DatasetDict({
    train: Dataset({
        features: ['text', 'code'],
        num_rows: 9263
    })
    test: Dataset({
        features: ['text', 'code'],
        num_rows: 887
    })
    validation: Dataset({
        features: ['text', 'code'],
        num_rows: 472
    })
})
```

```python
next(iter(data["train"]))
{'text': 'Maximum Prefix Sum possible by merging two given arrays | Python3 implementation of the above approach ; Stores the maximum prefix sum of the array A [ ] ; Traverse the array A [ ] ; Stores the maximum prefix sum of the array B [ ] ; Traverse the array B [ ] ; Driver code',
 'code': 'def maxPresum ( a , b ) : NEW_LINE INDENT X = max ( a [ 0 ] , 0 ) NEW_LINE for i in range ( 1 , len ( a ) ) : NEW_LINE INDENT a [ i ] += a [ i - 1 ] NEW_LINE X = max ( X , a [ i ] ) NEW_LINE DEDENT Y = max ( b [ 0 ] , 0 ) NEW_LINE for i in range ( 1 , len ( b ) ) : NEW_LINE INDENT b [ i ] += b [ i - 1 ] NEW_LINE Y = max ( Y , b [ i ] ) NEW_LINE DEDENT return X + Y NEW_LINE DEDENT A = [ 2 , - 1 , 4 , - 5 ] NEW_LINE B = [ 4 , - 3 , 12 , 4 , - 3 ] NEW_LINE print ( maxPresum ( A , B ) ) NEW_LINE'}
```
Note that the data has been tokenized, hence the extra whitespace. `NEW_LINE` replaces `\n`, `INDENT` marks an increase in indentation (instead of `\t`), and `DEDENT` marks a return to the previous indentation level.
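Assuming the convention above, a minimal sketch of how plain source code could be restored from this tokenized form (the helper name `detokenize` is hypothetical, not part of the dataset or the `datasets` library):

```python
def detokenize(code: str) -> str:
    """Undo the NEW_LINE / INDENT / DEDENT tokenization (sketch for Python subsets)."""
    lines = []
    indent = 0
    for segment in code.split("NEW_LINE"):
        tokens = segment.split()
        # Leading INDENT/DEDENT tokens adjust the current indentation level.
        while tokens and tokens[0] in ("INDENT", "DEDENT"):
            indent += 1 if tokens.pop(0) == "INDENT" else -1
        if tokens:
            lines.append("    " * indent + " ".join(tokens))
    return "\n".join(lines)

print(detokenize("def f ( x ) : NEW_LINE INDENT return x + 1 NEW_LINE DEDENT"))
```

The extra spaces between tokens (e.g. `a [ 0 ]`) are left in place, since they are still valid Python.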

## Data Fields

* text: natural language description/comment
* code: code at snippet level or program level

## Data Splits

Each subset has three splits: train, test and validation.
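For reference, the 14 subset names described in the Dataset Structure section can be generated programmatically (a small sketch, using only the naming scheme stated above):

```python
# Build the 14 subset names: 7 languages x 2 granularity levels.
languages = ["Python", "C", "Csharp", "C++", "Java", "Javascript", "PHP"]
subsets = [f"{lang}-{level}-level"
           for lang in languages
           for level in ("snippet", "program")]
print(len(subsets))  # 14
# Each name can then be passed as the second argument of load_dataset,
# e.g. load_dataset("codeparrot/xlcost-text-to-code", "Java-program-level")
```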

## Citation Information

```
@misc{zhu2022xlcost,
    title = {XLCoST: A Benchmark Dataset for Cross-lingual Code Intelligence},
    url = {https://arxiv.org/abs/2206.08474},
    author = {Zhu, Ming and Jain, Aneesh and Suresh, Karthik and Ravindran, Roshan and Tipirneni, Sindhu and Reddy, Chandan K.},
    year = {2022},
    eprint = {2206.08474},
    archivePrefix = {arXiv}
}
```