---
annotations_creators:
- expert-generated
language:
- code
language_creators:
- found
license:
- mit
multilinguality:
- monolingual
pretty_name: codequeries
size_categories:
- 100K<n<1M
source_datasets:
- original
tags:
- code
- code question answering
- code semantic parsing
- codeqa
task_categories:
- question-answering
task_ids:
- extractive-qa
---
# Dataset Card for Codequeries
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Structure
- Dataset Creation
- Considerations for Using the Data
- Additional Information
## Dataset Description

- Homepage: Codequeries
- Repository: Code repo
- Leaderboard: Code repo
- Paper:
### Dataset Summary

CodeQueries enables exploring an extractive question-answering methodology over code: semantic natural language queries serve as questions, and code spans serve as answers or supporting facts. Given a query, finding the answer/supporting-fact spans in a code context involves analyzing complex concepts and long chains of reasoning. The dataset is provided in five separate settings; details on the settings can be found in the paper.
### Supported Tasks and Leaderboards

Query comprehension for code and extractive question answering for code. Refer to the paper for details.
### Languages

The dataset contains code contexts from Python files.
## Dataset Structure

### How to use

The dataset can be used directly with the Hugging Face `datasets` library. You can load and iterate through the dataset for the proposed five settings with the following few lines of code:
import datasets

# Besides `twostep`, the other available settings are `ideal`, `file_ideal`, and `prefix`.
ds = datasets.load_dataset("thepurpleowl/codequeries", "twostep", split=datasets.Split.TEST)
print(next(iter(ds)))
# OUTPUT:
{'query_name': 'Unused import',
 'code_file_path': 'rcbops/glance-buildpackage/glance/tests/unit/test_db.py',
 'context_block': {'content': '# vim: tabstop=4 shiftwidth=4 softtabstop=4\n\n# Copyright 2010-2011 OpenStack, LLC\ ...',
  'metadata': 'root',
  'header': "['module', '___EOS___']",
  'index': 0},
 'answer_spans': [{'span': 'from glance.common import context',
   'start_line': 19,
   'start_column': 0,
   'end_line': 19,
   'end_column': 33}
  ],
 'supporting_fact_spans': [],
 'example_type': 1,
 'single_hop': False,
 'subtokenized_input_sequence': ['[CLS]_', 'Un', 'used_', 'import_', '[SEP]_', 'module_', '\\u\\u\\uEOS\\u\\u\\u_', '#', ' ', 'vim', ':', ...],
 'label_sequence': [4, 4, 4, 4, 4, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, ...],
 'relevance_label': 1}
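Each example is a plain Python dictionary, so a loaded split can be filtered and aggregated with ordinary code. Below is a minimal sketch (assuming the field names shown in the output above) that counts positive versus negative examples in the `twostep` test split and lists the most frequent queries among the positives:

```python
from collections import Counter

import datasets

ds = datasets.load_dataset("thepurpleowl/codequeries", "twostep", split=datasets.Split.TEST)

# Tally positive (1) vs. negative (0) examples.
type_counts = Counter(ex["example_type"] for ex in ds)

# Most frequent query names among positive examples.
query_counts = Counter(ex["query_name"] for ex in ds if ex["example_type"] == 1)

print(type_counts)
print(query_counts.most_common(5))
```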
### Data Splits and Data Fields

Detailed information on the data splits for the proposed settings can be found in the paper.

In general, the data splits in all proposed settings have examples with the following fields (a short usage sketch follows the list):
- query_name (query name to uniquely identify the query)
- code_file_path (relative source file path w.r.t. the ETH Py150 corpus)
- context_blocks (code blocks as context with metadata) [the `prefix` setting doesn't have this field, and the `twostep` setting has `context_block` instead]
- answer_spans (answer spans with metadata)
- supporting_fact_spans (supporting-fact spans with metadata)
- example_type (example type: 1 (positive) or 0 (negative))
- single_hop (True or False, depending on the query type)
- subtokenized_input_sequence (example subtokens) [the `prefix` setting has the corresponding token ids]
- label_sequence (example subtoken labels)
- relevance_label (relevance label of a block: 0 (not relevant) or 1 (relevant)) [only the `twostep` setting has this field]
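To make the span metadata concrete, here is a small sketch (assuming the field names listed above and the span keys shown in the `twostep` example output) that picks a positive example from the `ideal` test split and prints where each answer span sits in the source file:

```python
import datasets

ds = datasets.load_dataset("thepurpleowl/codequeries", "ideal", split=datasets.Split.TEST)

# Pick the first positive example that has at least one answer span.
example = next(ex for ex in ds if ex["example_type"] == 1 and ex["answer_spans"])

print("Query:", example["query_name"])
print("File: ", example["code_file_path"])
for span in example["answer_spans"]:
    # Each span carries its text plus its line/column range in the source file.
    print(f"  lines {span['start_line']}-{span['end_line']}: {span['span']}")
```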
### Data Splits

|                | train | validation | test |
|----------------|-------|------------|------|
| ideal          | 9427  | 3270       | 3245 |
| prefix         | -     | -          | 3245 |
| sliding_window | -     | -          | 3245 |
| file_ideal     | -     | -          | 3245 |
| twostep        | -     | -          | 3245 |
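As a quick sanity check of the table above, the published configurations and their split sizes can be inspected programmatically; this is a sketch and assumes every configuration ships at least a test split:

```python
import datasets

# Discover the available settings instead of hard-coding them.
for setting in datasets.get_dataset_config_names("thepurpleowl/codequeries"):
    test = datasets.load_dataset("thepurpleowl/codequeries", setting, split=datasets.Split.TEST)
    print(f"{setting:>15}: {len(test)} test examples")
```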
## Dataset Creation

The dataset was created using the ETH Py150 Open corpus as the source of code contexts. CodeQL was used to obtain the natural language queries and the corresponding answer/supporting-fact spans in the ETH Py150 Open corpus files.
## Additional Information

### Licensing Information

The Codequeries dataset is licensed under the Apache-2.0 license.
### Citation Information
[More Information Needed]
### Contributions

Thanks to @github-username for adding this dataset.