parquet-converter committed on
Commit
a1e40dd
1 Parent(s): 46c2a16

Update parquet files

README.md DELETED
@@ -1,212 +0,0 @@
- ---
- annotations_creators:
- - found
- language_creators:
- - found
- language:
- - code
- license:
- - c-uda
- multilinguality:
- - monolingual
- size_categories:
- - 1M<n<10M
- source_datasets:
- - original
- task_categories:
- - text-classification
- task_ids:
- - semantic-similarity-classification
- pretty_name: CodeXGlueCcCloneDetectionBigCloneBench
- dataset_info:
-   features:
-   - name: id
-     dtype: int32
-   - name: id1
-     dtype: int32
-   - name: id2
-     dtype: int32
-   - name: func1
-     dtype: string
-   - name: func2
-     dtype: string
-   - name: label
-     dtype: bool
-   splits:
-   - name: train
-     num_bytes: 2888035757
-     num_examples: 901028
-   - name: validation
-     num_bytes: 1371399694
-     num_examples: 415416
-   - name: test
-     num_bytes: 1220662901
-     num_examples: 415416
-   download_size: 47955874
-   dataset_size: 5480098352
- ---
- # Dataset Card for "code_x_glue_cc_clone_detection_big_clone_bench"
-
- ## Table of Contents
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks and Leaderboards](#supported-tasks)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-fields)
-   - [Data Splits](#data-splits-sample-size)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-   - [Contributions](#contributions)
-
- ## Dataset Description
-
- - **Homepage:** https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/Clone-detection-BigCloneBench
-
- ### Dataset Summary
-
- CodeXGLUE Clone-detection-BigCloneBench dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/Clone-detection-BigCloneBench
-
- Given two code snippets as input, the task is binary classification (0/1), where 1 stands for semantic equivalence and 0 for anything else. Models are evaluated by F1 score.
- The dataset is BigCloneBench, filtered following the paper "Detecting Code Clones with Graph Neural Network and Flow-Augmented Abstract Syntax Tree".
-
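Since models on this benchmark are ranked by F1, a minimal sketch of binary F1 computation may help clarify the metric. This is an illustrative pure-Python version with made-up labels and predictions, not the official evaluation script:

```python
def f1_score(y_true, y_pred):
    """Binary F1: harmonic mean of precision and recall for the positive (clone) class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical gold labels and model predictions for six pairs.
labels      = [1, 0, 1, 1, 0, 1]
predictions = [1, 0, 0, 1, 1, 1]
print(f1_score(labels, predictions))  # 0.75
```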
- ### Supported Tasks and Leaderboards
-
- - `semantic-similarity-classification`: The dataset can be used to train a model for classifying whether two given Java methods are clones of each other.
-
- ### Languages
-
- - Java **programming** language
-
- ## Dataset Structure
-
- ### Data Instances
-
- An example of 'test' looks as follows.
- ```
- {
-     "func1": " @Test(expected = GadgetException.class)\n public void malformedGadgetSpecIsCachedAndThrows() throws Exception {\n HttpRequest request = createCacheableRequest();\n expect(pipeline.execute(request)).andReturn(new HttpResponse(\"malformed junk\")).once();\n replay(pipeline);\n try {\n specFactory.getGadgetSpec(createContext(SPEC_URL, false));\n fail(\"No exception thrown on bad parse\");\n } catch (GadgetException e) {\n }\n specFactory.getGadgetSpec(createContext(SPEC_URL, false));\n }\n",
-     "func2": " public InputStream getInputStream() throws TGBrowserException {\n try {\n if (!this.isFolder()) {\n URL url = new URL(this.url);\n InputStream stream = url.openStream();\n return stream;\n }\n } catch (Throwable throwable) {\n throw new TGBrowserException(throwable);\n }\n return null;\n }\n",
-     "id": 0,
-     "id1": 2381663,
-     "id2": 4458076,
-     "label": false
- }
- ```
-
- ### Data Fields
-
- Each data field is explained below for each config. The data fields are the same among all splits.
-
- #### default
-
- |field name| type | description |
- |----------|------|----------------------------------------------------------------------|
- |id        |int32 | Index of the sample                                                   |
- |id1       |int32 | The first function id                                                 |
- |id2       |int32 | The second function id                                                |
- |func1     |string| The full text of the first function                                   |
- |func2     |string| The full text of the second function                                  |
- |label     |bool  | True if the two functions are semantically equivalent, False otherwise|
-
- ### Data Splits
-
- | name  |train |validation| test |
- |-------|-----:|---------:|-----:|
- |default|901028|    415416|415416|
-
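As a sanity check, the per-split byte and example counts declared in the YAML header add up to the declared totals. A throwaway verification, with the numbers copied from the header, is:

```python
# Per-split figures from the dataset_info header: (num_bytes, num_examples).
splits = {
    "train": (2888035757, 901028),
    "validation": (1371399694, 415416),
    "test": (1220662901, 415416),
}

total_bytes = sum(b for b, _ in splits.values())
total_examples = sum(n for _, n in splits.values())

print(total_bytes)  # 5480098352, matching the declared dataset_size
print(total_examples)
```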
- ## Dataset Creation
-
- ### Curation Rationale
-
- [More Information Needed]
-
- ### Source Data
-
- #### Initial Data Collection and Normalization
-
- Data was mined from the IJaDataset 2.0 dataset.
- [More Information Needed]
-
- #### Who are the source language producers?
-
- [More Information Needed]
-
- ### Annotations
-
- #### Annotation process
-
- Potential clones were identified automatically using search heuristics and then manually labeled by three judges.
- [More Information Needed]
-
- #### Who are the annotators?
-
- [More Information Needed]
-
- ### Personal and Sensitive Information
-
- [More Information Needed]
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- [More Information Needed]
-
- ### Discussion of Biases
-
- Most of the clones are of type 1 and 2, with type 3 and especially type 4 clones being rare.
-
- [More Information Needed]
-
- ### Other Known Limitations
-
- [More Information Needed]
-
- ## Additional Information
-
- ### Dataset Curators
-
- https://github.com/microsoft, https://github.com/madlag
-
- ### Licensing Information
-
- Computational Use of Data Agreement (C-UDA) License.
-
- ### Citation Information
-
- ```
- @inproceedings{svajlenko2014towards,
-   title={Towards a big data curated benchmark of inter-project code clones},
-   author={Svajlenko, Jeffrey and Islam, Judith F and Keivanloo, Iman and Roy, Chanchal K and Mia, Mohammad Mamun},
-   booktitle={2014 IEEE International Conference on Software Maintenance and Evolution},
-   pages={476--480},
-   year={2014},
-   organization={IEEE}
- }
-
- @inproceedings{wang2020detecting,
-   title={Detecting Code Clones with Graph Neural Network and Flow-Augmented Abstract Syntax Tree},
-   author={Wang, Wenhan and Li, Ge and Ma, Bo and Xia, Xin and Jin, Zhi},
-   booktitle={2020 IEEE 27th International Conference on Software Analysis, Evolution and Reengineering (SANER)},
-   pages={261--271},
-   year={2020},
-   organization={IEEE}
- }
- ```
-
- ### Contributions
-
- Thanks to @madlag (and partly also @ncoop57) for adding this dataset.
code_x_glue_cc_clone_detection_big_clone_bench.py DELETED
@@ -1,95 +0,0 @@
- from typing import List
-
- import datasets
-
- from .common import TrainValidTestChild
- from .generated_definitions import DEFINITIONS
-
-
- _DESCRIPTION = """Given two codes as the input, the task is to do binary classification (0/1), where 1 stands for semantic equivalence and 0 for others. Models are evaluated by F1 score.
- The dataset we use is BigCloneBench and filtered following the paper Detecting Code Clones with Graph Neural Network and Flow-Augmented Abstract Syntax Tree."""
-
- _CITATION = """@inproceedings{svajlenko2014towards,
- title={Towards a big data curated benchmark of inter-project code clones},
- author={Svajlenko, Jeffrey and Islam, Judith F and Keivanloo, Iman and Roy, Chanchal K and Mia, Mohammad Mamun},
- booktitle={2014 IEEE International Conference on Software Maintenance and Evolution},
- pages={476--480},
- year={2014},
- organization={IEEE}
- }
-
- @inproceedings{wang2020detecting,
- title={Detecting Code Clones with Graph Neural Network and Flow-Augmented Abstract Syntax Tree},
- author={Wang, Wenhan and Li, Ge and Ma, Bo and Xia, Xin and Jin, Zhi},
- booktitle={2020 IEEE 27th International Conference on Software Analysis, Evolution and Reengineering (SANER)},
- pages={261--271},
- year={2020},
- organization={IEEE}
- }"""
-
-
- class CodeXGlueCcCloneDetectionBigCloneBenchImpl(TrainValidTestChild):
-     _DESCRIPTION = _DESCRIPTION
-     _CITATION = _CITATION
-
-     _FEATURES = {
-         "id": datasets.Value("int32"),  # Index of the sample
-         "id1": datasets.Value("int32"),  # The first function id
-         "id2": datasets.Value("int32"),  # The second function id
-         "func1": datasets.Value("string"),  # The full text of the first function
-         "func2": datasets.Value("string"),  # The full text of the second function
-         "label": datasets.Value("bool"),  # True if the two functions are semantically equivalent, False otherwise
-     }
-
-     _SUPERVISED_KEYS = ["label"]
-
-     def generate_urls(self, split_name):
-         yield "index", f"{split_name}.txt"
-         yield "data", "data.jsonl"
-
-     def _generate_examples(self, split_name, file_paths):
-         import json
-
-         js_all = {}
-
-         # Load every function into memory, keyed by its id.
-         with open(file_paths["data"], encoding="utf-8") as f:
-             for idx, line in enumerate(f):
-                 entry = json.loads(line)
-                 js_all[int(entry["idx"])] = entry["func"]
-
-         # The index file lists one pair per line: "id1<TAB>id2<TAB>label".
-         with open(file_paths["index"], encoding="utf-8") as f:
-             for idx, line in enumerate(f):
-                 line = line.strip()
-                 idx1, idx2, label = [int(i) for i in line.split("\t")]
-                 func1 = js_all[idx1]
-                 func2 = js_all[idx2]
-
-                 yield idx, dict(id=idx, id1=idx1, id2=idx2, func1=func1, func2=func2, label=(label == 1))
-
-
- CLASS_MAPPING = {
-     "CodeXGlueCcCloneDetectionBigCloneBench": CodeXGlueCcCloneDetectionBigCloneBenchImpl,
- }
-
-
- class CodeXGlueCcCloneDetectionBigCloneBench(datasets.GeneratorBasedBuilder):
-     BUILDER_CONFIG_CLASS = datasets.BuilderConfig
-     BUILDER_CONFIGS = [
-         datasets.BuilderConfig(name=name, description=info["description"]) for name, info in DEFINITIONS.items()
-     ]
-
-     def _info(self):
-         name = self.config.name
-         info = DEFINITIONS[name]
-         if info["class_name"] in CLASS_MAPPING:
-             self.child = CLASS_MAPPING[info["class_name"]](info)
-         else:
-             raise RuntimeError(f"Unknown python class for dataset configuration {name}")
-         ret = self.child._info()
-         return ret
-
-     def _split_generators(self, dl_manager: datasets.DownloadManager) -> List[datasets.SplitGenerator]:
-         return self.child._split_generators(dl_manager=dl_manager)
-
-     def _generate_examples(self, split_name, file_paths):
-         return self.child._generate_examples(split_name, file_paths)
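The pairing logic in the deleted script's `_generate_examples` (a `data.jsonl` of functions plus a tab-separated index of pairs) can be exercised standalone. This sketch uses made-up in-memory data in place of the real files:

```python
import json

# Two fake data.jsonl lines: each maps a function id ("idx") to its source text.
jsonl_lines = [
    json.dumps({"idx": "10", "func": "int add(int a, int b) { return a + b; }"}),
    json.dumps({"idx": "11", "func": "int sum(int x, int y) { return x + y; }"}),
]

# One fake index line in the "id1<TAB>id2<TAB>label" format (1 = semantically equivalent).
index_lines = ["10\t11\t1"]

# Mirror of the loading script: first build the id -> function map ...
js_all = {}
for line in jsonl_lines:
    entry = json.loads(line)
    js_all[int(entry["idx"])] = entry["func"]

# ... then resolve each index line into a full example record.
examples = []
for idx, line in enumerate(index_lines):
    idx1, idx2, label = [int(i) for i in line.strip().split("\t")]
    examples.append(
        dict(id=idx, id1=idx1, id2=idx2, func1=js_all[idx1], func2=js_all[idx2], label=(label == 1))
    )

print(examples[0]["label"])  # True
```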
common.py DELETED
@@ -1,75 +0,0 @@
- from typing import List
-
- import datasets
-
-
- # Citation, taken from https://github.com/microsoft/CodeXGLUE
- _DEFAULT_CITATION = """@article{CodeXGLUE,
- title={CodeXGLUE: A Benchmark Dataset and Open Challenge for Code Intelligence},
- year={2020},}"""
-
-
- class Child:
-     _DESCRIPTION = None
-     _FEATURES = None
-     _CITATION = None
-     SPLITS = {"train": datasets.Split.TRAIN}
-     _SUPERVISED_KEYS = None
-
-     def __init__(self, info):
-         self.info = info
-
-     def homepage(self):
-         return self.info["project_url"]
-
-     def _info(self):
-         # This is the description that will appear on the datasets page.
-         return datasets.DatasetInfo(
-             description=self.info["description"] + "\n\n" + self._DESCRIPTION,
-             features=datasets.Features(self._FEATURES),
-             homepage=self.homepage(),
-             citation=self._CITATION or _DEFAULT_CITATION,
-             supervised_keys=self._SUPERVISED_KEYS,
-         )
-
-     def _split_generators(self, dl_manager: datasets.DownloadManager) -> List[datasets.SplitGenerator]:
-         SPLITS = self.SPLITS
-         _URL = self.info["raw_url"]
-         urls_to_download = {}
-         for split in SPLITS:
-             if split not in urls_to_download:
-                 urls_to_download[split] = {}
-
-             # Relative URLs are resolved against the dataset's raw_url base.
-             for key, url in self.generate_urls(split):
-                 if not url.startswith("http"):
-                     url = _URL + "/" + url
-                 urls_to_download[split][key] = url
-
-         downloaded_files = {}
-         for k, v in urls_to_download.items():
-             downloaded_files[k] = dl_manager.download_and_extract(v)
-
-         return [
-             datasets.SplitGenerator(
-                 name=SPLITS[k],
-                 gen_kwargs={"split_name": k, "file_paths": downloaded_files[k]},
-             )
-             for k in SPLITS
-         ]
-
-     def check_empty(self, entries):
-         all_empty = all(v == "" for v in entries.values())
-         all_non_empty = all(v != "" for v in entries.values())
-
-         if not all_non_empty and not all_empty:
-             raise RuntimeError("Parallel data files should have the same number of lines.")
-
-         return all_empty
-
-
- class TrainValidTestChild(Child):
-     SPLITS = {
-         "train": datasets.Split.TRAIN,
-         "valid": datasets.Split.VALIDATION,
-         "test": datasets.Split.TEST,
-     }
dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"default": {"description": "CodeXGLUE Clone-detection-BigCloneBench dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/Clone-detection-BigCloneBench\n\nGiven two codes as the input, the task is to do binary classification (0/1), where 1 stands for semantic equivalence and 0 for others. Models are evaluated by F1 score.\nThe dataset we use is BigCloneBench and filtered following the paper Detecting Code Clones with Graph Neural Network and Flow-Augmented Abstract Syntax Tree.", "citation": "@inproceedings{svajlenko2014towards,\ntitle={Towards a big data curated benchmark of inter-project code clones},\nauthor={Svajlenko, Jeffrey and Islam, Judith F and Keivanloo, Iman and Roy, Chanchal K and Mia, Mohammad Mamun},\nbooktitle={2014 IEEE International Conference on Software Maintenance and Evolution},\npages={476--480},\nyear={2014},\norganization={IEEE}\n}\n\n@inproceedings{wang2020detecting,\ntitle={Detecting Code Clones with Graph Neural Network and Flow-Augmented Abstract Syntax Tree},\nauthor={Wang, Wenhan and Li, Ge and Ma, Bo and Xia, Xin and Jin, Zhi},\nbooktitle={2020 IEEE 27th International Conference on Software Analysis, Evolution and Reengineering (SANER)},\npages={261--271},\nyear={2020},\norganization={IEEE}\n}", "homepage": "https://github.com/madlag/CodeXGLUE/tree/main/Code-Code/Clone-detection-BigCloneBench", "license": "", "features": {"id": {"dtype": "int32", "id": null, "_type": "Value"}, "id1": {"dtype": "int32", "id": null, "_type": "Value"}, "id2": {"dtype": "int32", "id": null, "_type": "Value"}, "func1": {"dtype": "string", "id": null, "_type": "Value"}, "func2": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"dtype": "bool", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": {"input": "label", "output": ""}, "task_templates": null, "builder_name": "code_x_glue_cc_clone_detection_big_clone_bench", "config_name": "default", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 2888035757, "num_examples": 901028, "dataset_name": "code_x_glue_cc_clone_detection_big_clone_bench"}, "validation": {"name": "validation", "num_bytes": 1371399694, "num_examples": 415416, "dataset_name": "code_x_glue_cc_clone_detection_big_clone_bench"}, "test": {"name": "test", "num_bytes": 1220662901, "num_examples": 415416, "dataset_name": "code_x_glue_cc_clone_detection_big_clone_bench"}}, "download_checksums": {"https://raw.githubusercontent.com/madlag/CodeXGLUE/main/Code-Code/Clone-detection-BigCloneBench/dataset/train.txt": {"num_bytes": 17043552, "checksum": "29119bfa94673374249c3424809fbe6baaa1f0e87a13e3c727bbd6cdf1224b77"}, "https://raw.githubusercontent.com/madlag/CodeXGLUE/main/Code-Code/Clone-detection-BigCloneBench/dataset/data.jsonl": {"num_bytes": 15174797, "checksum": "d8bc51e62deddcc45bd26c5b57f5add2a2cf377f13b9f6c2fb656fbc8fca4dd2"}, "https://raw.githubusercontent.com/madlag/CodeXGLUE/main/Code-Code/Clone-detection-BigCloneBench/dataset/valid.txt": {"num_bytes": 7861019, "checksum": "e59e8c1321df59b6ab0143165cb603030c55800c00e2d782e06810517b8de1e4"}, "https://raw.githubusercontent.com/madlag/CodeXGLUE/main/Code-Code/Clone-detection-BigCloneBench/dataset/test.txt": {"num_bytes": 7876506, "checksum": "a6c0cf79be34e582fdc64007aa894ed094e4f9ff2e5395a8d2b5c39eeef2737a"}}, "download_size": 47955874, "post_processing_size": null, "dataset_size": 5480098352, "size_in_bytes": 5528054226}}
 
 
default/test/0000.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:429704a5babd43b13cc7c47db72394e2fc767c51f85b90d6ad13f28be40922d3
+ size 90393518
default/test/0001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7c06b8f9cf78a3febcfd9bf047c8d0ef2cdb2c1d2887a71e4cd9cc0747477672
+ size 90716888
default/test/0002.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c6c38c1665f707b92e27a2012362b2bd6b8643470a118260f416b79c4b9c48b5
+ size 39017737
default/train/0000.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c55ad1e126b039791664cf9f6d06b17b0af1362bd0c7a8785515e455b9c7513b
+ size 141933790
default/train/0001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8fa2c7887a5a94990f1500b27e9c37305a7c9bcc91867dcb2c65cf472d66335c
+ size 141301304
default/train/0002.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:57611eafb1a3a633f22821eaa6535ba4a0a5611e905a1ead8c145ad659543522
+ size 141007583
default/train/0003.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e765a2d8bccc6dd61c2a10db6b3d7160a5b34e744e73f7816b1f8895b45c495f
+ size 141867123
default/train/0004.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5f25c97a71526ff52fb3a456e65e8e2988c4ca23c1bcd96faf4b2a4c224ed4a1
+ size 141090697
default/train/0005.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9411aef4253aab6b8740c16c40a0fcef32e43e355152fc8bf7c5e5a80b7ca5cc
+ size 107749179
default/validation/0000.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8c178210653980742c8a3f44bf62a8481c5b6550890cb32da7a4a42baa3e8c01
+ size 86882494
default/validation/0001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:efda128b62f2b026d02bd6023dfed89b5bb770daf96a11990444a2590e7b8f2b
+ size 87046124
default/validation/0002.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8f2386541bdd1e23e5a9b52f870c61a3e34fb3b964d1db750e8b1c7c48a34b3e
+ size 63948605
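Each parquet file added above is stored as a Git LFS pointer: the three `version` / `oid` / `size` lines in the diff, not the parquet bytes themselves. A minimal sketch of parsing such a pointer into its fields, using the first pointer above as sample input, is:

```python
def parse_lfs_pointer(text):
    """Parse a Git LFS pointer file into a dict of its key/value lines."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# Pointer copied from default/test/0000.parquet above.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:429704a5babd43b13cc7c47db72394e2fc767c51f85b90d6ad13f28be40922d3
size 90393518
"""

info = parse_lfs_pointer(pointer)
print(info["size"])  # size in bytes of the real parquet object, as a string
print(info["oid"])   # "sha256:<hex digest>" identifying it in LFS storage
```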
generated_definitions.py DELETED
@@ -1,12 +0,0 @@
- DEFINITIONS = {
-     "default": {
-         "class_name": "CodeXGlueCcCloneDetectionBigCloneBench",
-         "dataset_type": "Code-Code",
-         "description": "CodeXGLUE Clone-detection-BigCloneBench dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/Clone-detection-BigCloneBench",
-         "dir_name": "Clone-detection-BigCloneBench",
-         "name": "default",
-         "project_url": "https://github.com/madlag/CodeXGLUE/tree/main/Code-Code/Clone-detection-BigCloneBench",
-         "raw_url": "https://raw.githubusercontent.com/madlag/CodeXGLUE/main/Code-Code/Clone-detection-BigCloneBench/dataset",
-         "sizes": {"test": 415416, "train": 901028, "validation": 415416},
-     }
- }