annotations_creators:
- no-annotation
languages:
- py
language_creators:
- found
licenses:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- code-generation
- conditional-text-generation
task_ids:
- language-modeling
- code-generation
Dataset Card for notional-python
Table of Contents
- Dataset Description
- Dataset Structure
- Dataset Creation
- Considerations for Using the Data
- Additional Information
Dataset Description
- Homepage: https://notional.ai/
- Repository: [Needs More Information]
- Paper: [Needs More Information]
- Leaderboard: [Needs More Information]
- Point of Contact: [Needs More Information]
Dataset Summary
The Notional-python dataset contains Python code files from 100 well-known repositories, gathered from the Google BigQuery GitHub dataset. The dataset was created to test the code-generation ability of programming language models. Follow our repository to evaluate models on the notional-python dataset.
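As a minimal usage sketch, the dataset can be loaded with the Hugging Face `datasets` library. The Hub identifier `notional/notional-python` is an assumption and may differ from the actual repository name:

```python
from datasets import load_dataset

# Assumed Hub identifier; replace it with the actual name of the
# notional-python dataset repository if it differs.
dataset = load_dataset("notional/notional-python")

# Print the available splits and inspect one record from the first split.
print(dataset)
first_split = next(iter(dataset.values()))
print(first_split[0])
```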
Supported Tasks and Leaderboards
[Needs More Information]
Languages
Python
Dataset Structure
Data Instances
[Needs More Information]
Data Fields
[Needs More Information]
Data Splits
[Needs More Information]
Dataset Creation
Curation Rationale
Notional-python was built to provide a dataset for testing a machine's ability to generate Python code.
Source Data
Initial Data Collection and Normalization
The data was obtained by filtering code from the Google BigQuery GitHub dataset. In order to improve the quality of the dataset, only Python code files that meet the conditions below were added (a simplified sketch of such a filter follows the list):
- Code with more than 60% of executable lines
- Code with logic, not config files or comment-only files
- Code with more than 30% of lines being attribute declarations (e.g., some files contain only class names and their class attributes, usually used for project configuration; these files were not selected)
- Code without TODO and FIXME markers
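The following Python sketch illustrates, under stated assumptions, how such a filter could be approximated; it is not the curators' actual pipeline. The executable-line heuristic, the threshold interpretation, and the function names (`is_executable_line`, `passes_quality_filter`) are illustrative assumptions, and the attribute-declaration criterion is omitted because it would require deeper parsing of the source:

```python
import re


def is_executable_line(line: str) -> bool:
    """Heuristic: count a line as executable if it is neither blank nor a pure comment."""
    stripped = line.strip()
    return bool(stripped) and not stripped.startswith("#")


def passes_quality_filter(source: str, min_executable_ratio: float = 0.6) -> bool:
    """Approximate the selection criteria described above for a single Python file."""
    lines = source.splitlines()
    if not lines:
        return False

    # More than 60% of lines should be executable (not blank, not comments).
    executable = sum(is_executable_line(line) for line in lines)
    if executable / len(lines) <= min_executable_ratio:
        return False

    # Reject files containing TODO or FIXME markers.
    if re.search(r"\b(TODO|FIXME)\b", source):
        return False

    return True


# Example: a comment-only file is rejected, a small script with logic is kept.
print(passes_quality_filter("# just a comment\n# another comment\n"))  # False
print(passes_quality_filter("def add(a, b):\n    return a + b\n"))     # True
```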
Who are the source language producers?
The producers are GitHub users.
Annotations
Annotation process
[Needs More Information]
Who are the annotators?
[Needs More Information]
Personal and Sensitive Information
[Needs More Information]
Considerations for Using the Data
Social Impact of Dataset
[Needs More Information]
Discussion of Biases
[Needs More Information]
Other Known Limitations
[Needs More Information]
Additional Information
Dataset Curators
[Needs More Information]
Licensing Information
[Needs More Information]
Citation Information
[Needs More Information]