system HF staff committed on
Commit 717eb0c
0 Parent(s):

Update files from the datasets library (from 1.2.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0

.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,195 @@
+ ---
+ annotations_creators:
+ - no-annotation
+ language_creators:
+ - machine-generated
+ languages:
+ - code
+ licenses:
+ - other-several-licenses
+ multilinguality:
+ - multilingual
+ size_categories:
+ - n>1M
+ source_datasets:
+ - original
+ task_categories:
+ - sequence-modeling
+ task_ids:
+ - language-modeling
+ ---
+
+ # Dataset Card for CodeSearchNet corpus
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Homepage:** https://wandb.ai/github/CodeSearchNet/benchmark
+ - **Repository:** https://github.com/github/CodeSearchNet
+ - **Paper:** https://arxiv.org/abs/1909.09436
+ - **Leaderboard:** https://wandb.ai/github/CodeSearchNet/benchmark/leaderboard
+
+ ### Dataset Summary
+
+ The CodeSearchNet corpus is a dataset of 2 million (comment, code) pairs drawn from open-source libraries hosted on GitHub. It contains code and documentation for several programming languages.
+
+ The corpus was gathered to support the [CodeSearchNet challenge](https://wandb.ai/github/CodeSearchNet/benchmark), which explores the problem of retrieving code using natural-language queries.
+
+ ### Supported Tasks and Leaderboards
+
+ - `language-modeling`: the dataset can be used to train language models over source code; a minimal data-preparation sketch is shown below.
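+
+ For illustration, a minimal preparation sketch for this task, assuming the `datasets` and `transformers` libraries and an arbitrary tokenizer checkpoint (`gpt2` is used here purely as an example):
+
+ ```python
+ from datasets import load_dataset
+ from transformers import AutoTokenizer
+
+ # Load only the Python portion of the corpus (any config name works).
+ corpus = load_dataset("code_search_net", "python", split="train")
+
+ # Tokenize the full function bodies for causal language modelling.
+ tokenizer = AutoTokenizer.from_pretrained("gpt2")  # illustrative checkpoint
+ tokenized = corpus.map(
+     lambda batch: tokenizer(batch["whole_func_string"], truncation=True),
+     batched=True,
+     remove_columns=corpus.column_names,
+ )
+ ```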
+
+ ### Languages
+
+ - Go **programming** language
+ - Java **programming** language
+ - Javascript **programming** language
+ - PHP **programming** language
+ - Python **programming** language
+ - Ruby **programming** language
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ A data point consists of a function's code along with its documentation. Each data point also contains metadata about the function, such as the repository it was extracted from.
+
+ ```
+ {
+   'id': '0',
+   'repository_name': 'organisation/repository',
+   'func_path_in_repository': 'src/path/to/file.py',
+   'func_name': 'func',
+   'whole_func_string': 'def func(args):\n"""Docstring"""\n [...]',
+   'language': 'python',
+   'func_code_string': '[...]',
+   'func_code_tokens': ['def', 'func', '(', 'args', ')', ...],
+   'func_documentation_string': 'Docstring',
+   'func_documentation_string_tokens': ['Docstring'],
+   'split_name': 'train',
+   'func_code_url': 'https://github.com/<org>/<repo>/blob/<hash>/src/path/to/file.py#L111-L150'
+ }
+ ```
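+
+ A minimal example of loading the corpus and inspecting such an instance with the `datasets` library (the `all` configuration shown here downloads every language, several GB in total):
+
+ ```python
+ from datasets import load_dataset
+
+ # Returns a DatasetDict with "train", "test" and "validation" splits.
+ dataset = load_dataset("code_search_net", "all")
+
+ example = dataset["train"][0]
+ print(example["repository_name"], example["func_name"])
+ print(example["func_documentation_string"])
+ ```
+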
+ ### Data Fields
+
+ - `id`: arbitrary number
+ - `repository_name`: name of the GitHub repository
+ - `func_path_in_repository`: path, within the repository, of the file that holds the function
+ - `func_name`: name of the function in the file
+ - `whole_func_string`: code and documentation of the function
+ - `language`: programming language in which the function is written
+ - `func_code_string`: function code
+ - `func_code_tokens`: tokens yielded by Tree-sitter
+ - `func_documentation_string`: function documentation
+ - `func_documentation_string_tokens`: tokens yielded by Tree-sitter
+ - `split_name`: name of the split to which the example belongs (one of `train`, `test` or `valid`)
+ - `func_code_url`: URL to the function code on GitHub
+
+ ### Data Splits
+
+ Three splits are available:
+ - train
+ - test
+ - valid
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ All of this information can be found in the [original technical report](https://arxiv.org/pdf/1909.09436.pdf).
+
+ **Corpus collection**:
+
+ The corpus was collected from publicly available, open-source, non-fork GitHub repositories, using libraries.io to identify all projects that are used by at least one other project, sorted by "popularity" as indicated by their number of stars and forks.
+
+ Any project that does not have a license, or whose license does not explicitly permit redistribution of parts of the project, was then removed. Tree-sitter, GitHub's universal parser, was then used to tokenize all Go, Java, JavaScript, Python, PHP and Ruby functions (or methods) and, where available, their respective documentation text, using a heuristic regular expression.
+
+ **Corpus filtering**:
+
+ Functions without documentation are removed from the corpus. This yields a set of pairs ($c_i$, $d_i$) where $c_i$ is some function documented by $d_i$. The pairs ($c_i$, $d_i$) are then passed through the following preprocessing steps (a rough sketch of these filters is given after the list):
+
+ - Documentation $d_i$ is truncated to the first full paragraph, to remove in-depth discussion of function arguments and return values
+ - Pairs in which $d_i$ is shorter than three tokens are removed
+ - Functions $c_i$ whose implementation is shorter than three lines are removed
+ - Functions whose name contains the substring "test" are removed
+ - Constructors and standard extension methods (e.g. `__str__` in Python or `toString` in Java) are removed
+ - Duplicate and near-duplicate functions are removed, keeping only one version of each function
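+
+ For illustration only, the documentation- and name-level filters above can be approximated as in the following sketch (paragraph truncation is shown separately; deduplication and the language-specific constructor checks are omitted). This is an assumption-laden rewrite, not the original pipeline:
+
+ ```python
+ def first_paragraph(documentation: str) -> str:
+     """Keep only the first full paragraph of a docstring (illustrative heuristic)."""
+     return documentation.split("\n\n", 1)[0]
+
+
+ def keep_pair(code: str, documentation: str, func_name: str) -> bool:
+     """Rough approximation of the CodeSearchNet filtering heuristics."""
+     doc_tokens = documentation.split()
+     code_lines = [line for line in code.splitlines() if line.strip()]
+     if len(doc_tokens) < 3:          # documentation shorter than three tokens
+         return False
+     if len(code_lines) < 3:          # implementation shorter than three lines
+         return False
+     if "test" in func_name.lower():  # functions whose name contains "test"
+         return False
+     return True
+ ```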
+
+ #### Who are the source language producers?
+
+ Open-source contributors produced the code and its documentation.
+
+ The dataset was gathered and preprocessed automatically.
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ Each example in the dataset is extracted from a GitHub repository, and each repository has its own license. Example-wise license information is not (yet) included in this dataset: you will need to check, for each example, which license its source repository uses.
+
+ ### Citation Information
+
+ @article{husain2019codesearchnet,
+   title={{CodeSearchNet} challenge: Evaluating the state of semantic code search},
+   author={Husain, Hamel and Wu, Ho-Hsiang and Gazit, Tiferet and Allamanis, Miltiadis and Brockschmidt, Marc},
+   journal={arXiv preprint arXiv:1909.09436},
+   year={2019}
+ }
code_search_net.py ADDED
@@ -0,0 +1,219 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ # Lint as: python3
+ """CodeSearchNet corpus: proxy dataset for semantic code search."""
+
+ # TODO: add licensing info in the examples
+ # TODO: log richer information (especially while extracting the jsonl.gz files)
+ # TODO: enable custom configs, such as "java+python"
+ # TODO: enable fetching examples with a given license, e.g. "java_MIT"
+
+ from __future__ import absolute_import, division, print_function
+
+ import json
+ import os
+
+ import datasets
+
+
+ _CITATION = """\
+ @article{husain2019codesearchnet,
+   title={{CodeSearchNet} challenge: Evaluating the state of semantic code search},
+   author={Husain, Hamel and Wu, Ho-Hsiang and Gazit, Tiferet and Allamanis, Miltiadis and Brockschmidt, Marc},
+   journal={arXiv preprint arXiv:1909.09436},
+   year={2019}
+ }
+ """
+
+ _DESCRIPTION = """\
+ CodeSearchNet corpus contains about 6 million functions from open-source code \
+ spanning six programming languages (Go, Java, JavaScript, PHP, Python, and Ruby). \
+ The CodeSearchNet Corpus also contains automatically generated query-like \
+ natural language for 2 million functions, obtained from mechanically scraping \
+ and preprocessing associated function documentation.
+ """
+
+ _HOMEPAGE = "https://github.com/github/CodeSearchNet"
+
+ _LICENSE = "Various"
+
+ _S3_BUCKET_URL = "https://s3.amazonaws.com/code-search-net/CodeSearchNet/v2/"
+ _AVAILABLE_LANGUAGES = ["python", "java", "javascript", "go", "ruby", "php"]
+ _URLs = {language: _S3_BUCKET_URL + f"{language}.zip" for language in _AVAILABLE_LANGUAGES}
+ # The URLs for "all" are simply the URLs of every individual language.
+ _URLs["all"] = _URLs.copy()
+
+
+ class CodeSearchNet(datasets.GeneratorBasedBuilder):
+     """CodeSearchNet corpus: proxy dataset for semantic code search."""
+
+     VERSION = datasets.Version("1.0.0", "Add CodeSearchNet corpus dataset")
+     BUILDER_CONFIGS = [
+         datasets.BuilderConfig(
+             name="all",
+             version=VERSION,
+             description="All available languages: Java, Go, Javascript, Python, PHP, Ruby",
+         ),
+         datasets.BuilderConfig(
+             name="java",
+             version=VERSION,
+             description="Java language",
+         ),
+         datasets.BuilderConfig(
+             name="go",
+             version=VERSION,
+             description="Go language",
+         ),
+         datasets.BuilderConfig(
+             name="python",
+             version=VERSION,
+             description="Python language",
+         ),
+         datasets.BuilderConfig(
+             name="javascript",
+             version=VERSION,
+             description="Javascript language",
+         ),
+         datasets.BuilderConfig(
+             name="ruby",
+             version=VERSION,
+             description="Ruby language",
+         ),
+         datasets.BuilderConfig(
+             name="php",
+             version=VERSION,
+             description="PHP language",
+         ),
+     ]
+
+     DEFAULT_CONFIG_NAME = "all"
+
+     def _info(self):
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=datasets.Features(
+                 {
+                     "repository_name": datasets.Value("string"),
+                     "func_path_in_repository": datasets.Value("string"),
+                     "func_name": datasets.Value("string"),
+                     "whole_func_string": datasets.Value("string"),
+                     "language": datasets.Value("string"),
+                     "func_code_string": datasets.Value("string"),
+                     "func_code_tokens": datasets.Sequence(datasets.Value("string")),
+                     "func_documentation_string": datasets.Value("string"),
+                     "func_documentation_tokens": datasets.Sequence(datasets.Value("string")),
+                     "split_name": datasets.Value("string"),
+                     "func_code_url": datasets.Value("string"),
+                     # TODO: add licensing info in the examples
+                 }
+             ),
+             # No default supervised keys
+             supervised_keys=None,
+             homepage=_HOMEPAGE,
+             license=_LICENSE,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators.
+
+         Note: the original data is stored on S3 and follows this unusual directory structure:
+         ```
+         .
+         ├── <language_name>  # e.g. python
+         │   └── final
+         │       └── jsonl
+         │           ├── test
+         │           │   └── <language_name>_test_0.jsonl.gz
+         │           ├── train
+         │           │   ├── <language_name>_train_0.jsonl.gz
+         │           │   ├── <language_name>_train_1.jsonl.gz
+         │           │   ├── ...
+         │           │   └── <language_name>_train_n.jsonl.gz
+         │           └── valid
+         │               └── <language_name>_valid_0.jsonl.gz
+         ├── <language_name>_dedupe_definitions_v2.pkl
+         └── <language_name>_licenses.pkl
+         ```
+         """
+         data_urls = _URLs[self.config.name]
+         if isinstance(data_urls, str):
+             data_urls = {self.config.name: data_urls}
+         # Download & extract the language archives
+         data_dirs = [
+             os.path.join(directory, lang, "final", "jsonl")
+             for lang, directory in dl_manager.download_and_extract(data_urls).items()
+         ]
+
+         split2dirs = {
+             split_name: [os.path.join(directory, split_name) for directory in data_dirs]
+             for split_name in ["train", "test", "valid"]
+         }
+
+         split2paths = dl_manager.extract(
+             {
+                 split_name: [
+                     os.path.join(directory, entry_name)
+                     for directory in split_dirs
+                     for entry_name in os.listdir(directory)
+                 ]
+                 for split_name, split_dirs in split2dirs.items()
+             }
+         )
+
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={
+                     "filepaths": split2paths["train"],
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 gen_kwargs={
+                     "filepaths": split2paths["test"],
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.VALIDATION,
+                 gen_kwargs={
+                     "filepaths": split2paths["valid"],
+                 },
+             ),
+         ]
+
+     def _generate_examples(self, filepaths):
+         """Yields the examples by iterating through the available jsonl files."""
+         for file_id_, filepath in enumerate(filepaths):
+             with open(filepath, encoding="utf-8") as f:
+                 for row_id_, row in enumerate(f):
+                     # Key of the example = <file id>_<row id>,
+                     # to ensure all examples have a distinct key.
+                     id_ = f"{file_id_}_{row_id_}"
+                     data = json.loads(row)
+                     yield id_, {
+                         "repository_name": data["repo"],
+                         "func_path_in_repository": data["path"],
+                         "func_name": data["func_name"],
+                         "whole_func_string": data["original_string"],
+                         "language": data["language"],
+                         "func_code_string": data["code"],
+                         "func_code_tokens": data["code_tokens"],
+                         "func_documentation_string": data["docstring"],
+                         "func_documentation_tokens": data["docstring_tokens"],
+                         "split_name": data["partition"],
+                         "func_code_url": data["url"],
+                     }
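+
+ # Usage sketch (reference only, not part of the original script): once this builder
+ # is available under the `code_search_net` identifier, a single language configuration
+ # can be loaded with the standard `datasets` API, e.g.
+ #
+ #   from datasets import load_dataset
+ #
+ #   ruby_train = load_dataset("code_search_net", "ruby", split="train")
+ #   print(ruby_train[0]["func_code_url"])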
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"all": {"description": "CodeSearchNet corpus contains about 6 million functions from open-source code spanning six programming languages (Go, Java, JavaScript, PHP, Python, and Ruby). The CodeSearchNet Corpus also contains automatically generated query-like natural language for 2 million functions, obtained from mechanically scraping and preprocessing associated function documentation.\n", "citation": "@article{husain2019codesearchnet,\n title={{CodeSearchNet} challenge: Evaluating the state of semantic code search},\n author={Husain, Hamel and Wu, Ho-Hsiang and Gazit, Tiferet and Allamanis, Miltiadis and Brockschmidt, Marc},\n journal={arXiv preprint arXiv:1909.09436},\n year={2019}\n}\n", "homepage": "https://github.com/github/CodeSearchNet", "license": "Various", "features": {"repository_name": {"dtype": "string", "id": null, "_type": "Value"}, "func_path_in_repository": {"dtype": "string", "id": null, "_type": "Value"}, "func_name": {"dtype": "string", "id": null, "_type": "Value"}, "whole_func_string": {"dtype": "string", "id": null, "_type": "Value"}, "language": {"dtype": "string", "id": null, "_type": "Value"}, "func_code_string": {"dtype": "string", "id": null, "_type": "Value"}, "func_code_tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "func_documentation_string": {"dtype": "string", "id": null, "_type": "Value"}, "func_documentation_tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "split_name": {"dtype": "string", "id": null, "_type": "Value"}, "func_code_url": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "code_search_net", "config_name": "all", "version": {"version_str": "1.0.0", "description": "Add CodeSearchNet corpus dataset", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 5850604083, "num_examples": 1880853, "dataset_name": "code_search_net"}, "test": {"name": "test", "num_bytes": 308626333, "num_examples": 100529, "dataset_name": "code_search_net"}, "validation": {"name": "validation", "num_bytes": 274564382, "num_examples": 89154, "dataset_name": "code_search_net"}}, "download_checksums": {"https://s3.amazonaws.com/code-search-net/CodeSearchNet/v2/python.zip": {"num_bytes": 940909997, "checksum": "7223c6460bebfa85697b586da91e47bc5d64790a4d60bba5917106458ab6b40e"}, "https://s3.amazonaws.com/code-search-net/CodeSearchNet/v2/java.zip": {"num_bytes": 1060569153, "checksum": "05f9204b1808413fab30f0e69229e298f6de4ad468279d53a2aa5797e3a78c17"}, "https://s3.amazonaws.com/code-search-net/CodeSearchNet/v2/javascript.zip": {"num_bytes": 1664713350, "checksum": "fdc743f5af27f90c77584a2d29e2b7f8cecdd00c37b433c385b888ee062936dd"}, "https://s3.amazonaws.com/code-search-net/CodeSearchNet/v2/go.zip": {"num_bytes": 487525935, "checksum": "15d23f01dc2796447e1736263e6830079289d5ef41f09988011afdcf8da6b6e5"}, "https://s3.amazonaws.com/code-search-net/CodeSearchNet/v2/ruby.zip": {"num_bytes": 111758028, "checksum": "67aee5812d0f994df745c771c7791483f2b060561495747d424e307af4b342e6"}, "https://s3.amazonaws.com/code-search-net/CodeSearchNet/v2/php.zip": {"num_bytes": 851894048, "checksum": "c3bbf0d1b10010f88b058faea876f1f5471758399e30d58c11f78ff53660ce00"}}, "download_size": 5117370511, "post_processing_size": null, "dataset_size": 6433794798, "size_in_bytes": 11551165309}, "java": {"description": "CodeSearchNet corpus contains about 6 
million functions from open-source code spanning six programming languages (Go, Java, JavaScript, PHP, Python, and Ruby). The CodeSearchNet Corpus also contains automatically generated query-like natural language for 2 million functions, obtained from mechanically scraping and preprocessing associated function documentation.\n", "citation": "@article{husain2019codesearchnet,\n title={{CodeSearchNet} challenge: Evaluating the state of semantic code search},\n author={Husain, Hamel and Wu, Ho-Hsiang and Gazit, Tiferet and Allamanis, Miltiadis and Brockschmidt, Marc},\n journal={arXiv preprint arXiv:1909.09436},\n year={2019}\n}\n", "homepage": "https://github.com/github/CodeSearchNet", "license": "Various", "features": {"repository_name": {"dtype": "string", "id": null, "_type": "Value"}, "func_path_in_repository": {"dtype": "string", "id": null, "_type": "Value"}, "func_name": {"dtype": "string", "id": null, "_type": "Value"}, "whole_func_string": {"dtype": "string", "id": null, "_type": "Value"}, "language": {"dtype": "string", "id": null, "_type": "Value"}, "func_code_string": {"dtype": "string", "id": null, "_type": "Value"}, "func_code_tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "func_documentation_string": {"dtype": "string", "id": null, "_type": "Value"}, "func_documentation_tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "split_name": {"dtype": "string", "id": null, "_type": "Value"}, "func_code_url": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "code_search_net", "config_name": "java", "version": {"version_str": "1.0.0", "description": "Add CodeSearchNet corpus dataset", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 1429272535, "num_examples": 454451, "dataset_name": "code_search_net"}, "test": {"name": "test", "num_bytes": 82377246, "num_examples": 26909, "dataset_name": "code_search_net"}, "validation": {"name": "validation", "num_bytes": 42358315, "num_examples": 15328, "dataset_name": "code_search_net"}}, "download_checksums": {"https://s3.amazonaws.com/code-search-net/CodeSearchNet/v2/java.zip": {"num_bytes": 1060569153, "checksum": "05f9204b1808413fab30f0e69229e298f6de4ad468279d53a2aa5797e3a78c17"}}, "download_size": 1060569153, "post_processing_size": null, "dataset_size": 1554008096, "size_in_bytes": 2614577249}, "go": {"description": "CodeSearchNet corpus contains about 6 million functions from open-source code spanning six programming languages (Go, Java, JavaScript, PHP, Python, and Ruby). 
The CodeSearchNet Corpus also contains automatically generated query-like natural language for 2 million functions, obtained from mechanically scraping and preprocessing associated function documentation.\n", "citation": "@article{husain2019codesearchnet,\n title={{CodeSearchNet} challenge: Evaluating the state of semantic code search},\n author={Husain, Hamel and Wu, Ho-Hsiang and Gazit, Tiferet and Allamanis, Miltiadis and Brockschmidt, Marc},\n journal={arXiv preprint arXiv:1909.09436},\n year={2019}\n}\n", "homepage": "https://github.com/github/CodeSearchNet", "license": "Various", "features": {"repository_name": {"dtype": "string", "id": null, "_type": "Value"}, "func_path_in_repository": {"dtype": "string", "id": null, "_type": "Value"}, "func_name": {"dtype": "string", "id": null, "_type": "Value"}, "whole_func_string": {"dtype": "string", "id": null, "_type": "Value"}, "language": {"dtype": "string", "id": null, "_type": "Value"}, "func_code_string": {"dtype": "string", "id": null, "_type": "Value"}, "func_code_tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "func_documentation_string": {"dtype": "string", "id": null, "_type": "Value"}, "func_documentation_tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "split_name": {"dtype": "string", "id": null, "_type": "Value"}, "func_code_url": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "code_search_net", "config_name": "go", "version": {"version_str": "1.0.0", "description": "Add CodeSearchNet corpus dataset", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 738153234, "num_examples": 317832, "dataset_name": "code_search_net"}, "test": {"name": "test", "num_bytes": 32286998, "num_examples": 14291, "dataset_name": "code_search_net"}, "validation": {"name": "validation", "num_bytes": 26888527, "num_examples": 14242, "dataset_name": "code_search_net"}}, "download_checksums": {"https://s3.amazonaws.com/code-search-net/CodeSearchNet/v2/go.zip": {"num_bytes": 487525935, "checksum": "15d23f01dc2796447e1736263e6830079289d5ef41f09988011afdcf8da6b6e5"}}, "download_size": 487525935, "post_processing_size": null, "dataset_size": 797328759, "size_in_bytes": 1284854694}, "python": {"description": "CodeSearchNet corpus contains about 6 million functions from open-source code spanning six programming languages (Go, Java, JavaScript, PHP, Python, and Ruby). 
The CodeSearchNet Corpus also contains automatically generated query-like natural language for 2 million functions, obtained from mechanically scraping and preprocessing associated function documentation.\n", "citation": "@article{husain2019codesearchnet,\n title={{CodeSearchNet} challenge: Evaluating the state of semantic code search},\n author={Husain, Hamel and Wu, Ho-Hsiang and Gazit, Tiferet and Allamanis, Miltiadis and Brockschmidt, Marc},\n journal={arXiv preprint arXiv:1909.09436},\n year={2019}\n}\n", "homepage": "https://github.com/github/CodeSearchNet", "license": "Various", "features": {"repository_name": {"dtype": "string", "id": null, "_type": "Value"}, "func_path_in_repository": {"dtype": "string", "id": null, "_type": "Value"}, "func_name": {"dtype": "string", "id": null, "_type": "Value"}, "whole_func_string": {"dtype": "string", "id": null, "_type": "Value"}, "language": {"dtype": "string", "id": null, "_type": "Value"}, "func_code_string": {"dtype": "string", "id": null, "_type": "Value"}, "func_code_tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "func_documentation_string": {"dtype": "string", "id": null, "_type": "Value"}, "func_documentation_tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "split_name": {"dtype": "string", "id": null, "_type": "Value"}, "func_code_url": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "code_search_net", "config_name": "python", "version": {"version_str": "1.0.0", "description": "Add CodeSearchNet corpus dataset", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 1559645310, "num_examples": 412178, "dataset_name": "code_search_net"}, "test": {"name": "test", "num_bytes": 84342064, "num_examples": 22176, "dataset_name": "code_search_net"}, "validation": {"name": "validation", "num_bytes": 92154786, "num_examples": 23107, "dataset_name": "code_search_net"}}, "download_checksums": {"https://s3.amazonaws.com/code-search-net/CodeSearchNet/v2/python.zip": {"num_bytes": 940909997, "checksum": "7223c6460bebfa85697b586da91e47bc5d64790a4d60bba5917106458ab6b40e"}}, "download_size": 940909997, "post_processing_size": null, "dataset_size": 1736142160, "size_in_bytes": 2677052157}, "javascript": {"description": "CodeSearchNet corpus contains about 6 million functions from open-source code spanning six programming languages (Go, Java, JavaScript, PHP, Python, and Ruby). 
The CodeSearchNet Corpus also contains automatically generated query-like natural language for 2 million functions, obtained from mechanically scraping and preprocessing associated function documentation.\n", "citation": "@article{husain2019codesearchnet,\n title={{CodeSearchNet} challenge: Evaluating the state of semantic code search},\n author={Husain, Hamel and Wu, Ho-Hsiang and Gazit, Tiferet and Allamanis, Miltiadis and Brockschmidt, Marc},\n journal={arXiv preprint arXiv:1909.09436},\n year={2019}\n}\n", "homepage": "https://github.com/github/CodeSearchNet", "license": "Various", "features": {"repository_name": {"dtype": "string", "id": null, "_type": "Value"}, "func_path_in_repository": {"dtype": "string", "id": null, "_type": "Value"}, "func_name": {"dtype": "string", "id": null, "_type": "Value"}, "whole_func_string": {"dtype": "string", "id": null, "_type": "Value"}, "language": {"dtype": "string", "id": null, "_type": "Value"}, "func_code_string": {"dtype": "string", "id": null, "_type": "Value"}, "func_code_tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "func_documentation_string": {"dtype": "string", "id": null, "_type": "Value"}, "func_documentation_tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "split_name": {"dtype": "string", "id": null, "_type": "Value"}, "func_code_url": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "code_search_net", "config_name": "javascript", "version": {"version_str": "1.0.0", "description": "Add CodeSearchNet corpus dataset", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 480286523, "num_examples": 123889, "dataset_name": "code_search_net"}, "test": {"name": "test", "num_bytes": 24056972, "num_examples": 6483, "dataset_name": "code_search_net"}, "validation": {"name": "validation", "num_bytes": 30168242, "num_examples": 8253, "dataset_name": "code_search_net"}}, "download_checksums": {"https://s3.amazonaws.com/code-search-net/CodeSearchNet/v2/javascript.zip": {"num_bytes": 1664713350, "checksum": "fdc743f5af27f90c77584a2d29e2b7f8cecdd00c37b433c385b888ee062936dd"}}, "download_size": 1664713350, "post_processing_size": null, "dataset_size": 534511737, "size_in_bytes": 2199225087}, "ruby": {"description": "CodeSearchNet corpus contains about 6 million functions from open-source code spanning six programming languages (Go, Java, JavaScript, PHP, Python, and Ruby). 
The CodeSearchNet Corpus also contains automatically generated query-like natural language for 2 million functions, obtained from mechanically scraping and preprocessing associated function documentation.\n", "citation": "@article{husain2019codesearchnet,\n title={{CodeSearchNet} challenge: Evaluating the state of semantic code search},\n author={Husain, Hamel and Wu, Ho-Hsiang and Gazit, Tiferet and Allamanis, Miltiadis and Brockschmidt, Marc},\n journal={arXiv preprint arXiv:1909.09436},\n year={2019}\n}\n", "homepage": "https://github.com/github/CodeSearchNet", "license": "Various", "features": {"repository_name": {"dtype": "string", "id": null, "_type": "Value"}, "func_path_in_repository": {"dtype": "string", "id": null, "_type": "Value"}, "func_name": {"dtype": "string", "id": null, "_type": "Value"}, "whole_func_string": {"dtype": "string", "id": null, "_type": "Value"}, "language": {"dtype": "string", "id": null, "_type": "Value"}, "func_code_string": {"dtype": "string", "id": null, "_type": "Value"}, "func_code_tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "func_documentation_string": {"dtype": "string", "id": null, "_type": "Value"}, "func_documentation_tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "split_name": {"dtype": "string", "id": null, "_type": "Value"}, "func_code_url": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "code_search_net", "config_name": "ruby", "version": {"version_str": "1.0.0", "description": "Add CodeSearchNet corpus dataset", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 110681715, "num_examples": 48791, "dataset_name": "code_search_net"}, "test": {"name": "test", "num_bytes": 5359280, "num_examples": 2279, "dataset_name": "code_search_net"}, "validation": {"name": "validation", "num_bytes": 4830744, "num_examples": 2209, "dataset_name": "code_search_net"}}, "download_checksums": {"https://s3.amazonaws.com/code-search-net/CodeSearchNet/v2/ruby.zip": {"num_bytes": 111758028, "checksum": "67aee5812d0f994df745c771c7791483f2b060561495747d424e307af4b342e6"}}, "download_size": 111758028, "post_processing_size": null, "dataset_size": 120871739, "size_in_bytes": 232629767}, "php": {"description": "CodeSearchNet corpus contains about 6 million functions from open-source code spanning six programming languages (Go, Java, JavaScript, PHP, Python, and Ruby). 
The CodeSearchNet Corpus also contains automatically generated query-like natural language for 2 million functions, obtained from mechanically scraping and preprocessing associated function documentation.\n", "citation": "@article{husain2019codesearchnet,\n title={{CodeSearchNet} challenge: Evaluating the state of semantic code search},\n author={Husain, Hamel and Wu, Ho-Hsiang and Gazit, Tiferet and Allamanis, Miltiadis and Brockschmidt, Marc},\n journal={arXiv preprint arXiv:1909.09436},\n year={2019}\n}\n", "homepage": "https://github.com/github/CodeSearchNet", "license": "Various", "features": {"repository_name": {"dtype": "string", "id": null, "_type": "Value"}, "func_path_in_repository": {"dtype": "string", "id": null, "_type": "Value"}, "func_name": {"dtype": "string", "id": null, "_type": "Value"}, "whole_func_string": {"dtype": "string", "id": null, "_type": "Value"}, "language": {"dtype": "string", "id": null, "_type": "Value"}, "func_code_string": {"dtype": "string", "id": null, "_type": "Value"}, "func_code_tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "func_documentation_string": {"dtype": "string", "id": null, "_type": "Value"}, "func_documentation_tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "split_name": {"dtype": "string", "id": null, "_type": "Value"}, "func_code_url": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "code_search_net", "config_name": "php", "version": {"version_str": "1.0.0", "description": "Add CodeSearchNet corpus dataset", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 1532564870, "num_examples": 523712, "dataset_name": "code_search_net"}, "test": {"name": "test", "num_bytes": 80203877, "num_examples": 28391, "dataset_name": "code_search_net"}, "validation": {"name": "validation", "num_bytes": 78163924, "num_examples": 26015, "dataset_name": "code_search_net"}}, "download_checksums": {"https://s3.amazonaws.com/code-search-net/CodeSearchNet/v2/php.zip": {"num_bytes": 851894048, "checksum": "c3bbf0d1b10010f88b058faea876f1f5471758399e30d58c11f78ff53660ce00"}}, "download_size": 851894048, "post_processing_size": null, "dataset_size": 1690932671, "size_in_bytes": 2542826719}}
dummy/all/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:96d9ecc5afe9233404d42136c64ae22bdc8f60592ad3a976b8611464e8f90ef9
+ size 18608
dummy/go/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fb9db4fb6fecbee76af9416ea102c7b21e342832585da2eb03e5592ddd48f11a
+ size 3770
dummy/java/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bb0faa2e86b22ff59bb3359014b31b5eb7a80e557e0f0d1da43bbf516838bc27
+ size 3870
dummy/javascript/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f3304c62fb31489c430d1a0eb91ae28befce821238bc788a4c757a3974388b81
+ size 4170
dummy/php/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0f1521a2392c0ad0bf6e57b0a01e3b7107a7f7b7588a655180a2dea1155a34ae
+ size 3820
dummy/python/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ae0538935b38ab1b2ba8fc7050ec6e961b38e9b1e5ccebe73533b7be81b10be0
+ size 3970
dummy/ruby/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4412850928663e578853c4236a176a3704738255b2d0fe2be99142d9b60ca94f
+ size 3870