---
language:
- en
- py
tags:
- code
- documentation
- python
- docstring
- dataset
license: mit
---
# DocuMint Dataset

The DocuMint Dataset is a collection of 100,000 Python functions and their corresponding docstrings, extracted from popular open-source repositories in the free and open-source software (FLOSS) ecosystem. This dataset was created to train the [DocuMint model](link-to-model-card), a fine-tuned variant of Google's CodeGemma-2B that generates high-quality docstrings for Python functions.

## Dataset Description

The dataset consists of JSON-formatted entries, each containing a Python function definition (as the `instruction`) and its associated docstring (as the `response`). The functions were sourced from well-established and actively maintained projects, filtered based on metrics such as the number of contributors (> 50), commits (> 5k), stars (> 35k), and forks (> 10k).
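
The sampling code is not part of this card, but as a rough illustration, candidate repositories meeting the star and fork thresholds could be shortlisted with the GitHub search API; contributor and commit counts are not search qualifiers, so they would need to be checked per repository. The snippet below is a minimal sketch under those assumptions, not the authors' actual pipeline.

```python
# Illustrative sketch only: shortlist popular Python repositories via the
# GitHub search API. Star/fork thresholds mirror the filters described above;
# contributor and commit counts must be verified separately per repository.
import requests

def shortlist_repositories(min_stars: int = 35_000, min_forks: int = 10_000) -> list[str]:
    query = f"language:python stars:>{min_stars} forks:>{min_forks}"
    resp = requests.get(
        "https://api.github.com/search/repositories",
        params={"q": query, "sort": "stars", "order": "desc", "per_page": 50},
        headers={"Accept": "application/vnd.github+json"},
        timeout=30,
    )
    resp.raise_for_status()
    return [item["full_name"] for item in resp.json()["items"]]

if __name__ == "__main__":
    for full_name in shortlist_repositories():
        print(full_name)
```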

An abstract syntax tree (AST)-based parser was used to extract the functions and docstrings. Challenges in the data sampling process included syntactic errors, multi-language repositories, computational expense, repository size discrepancies, and ensuring diversity while avoiding repetition.
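
The extraction code itself is likewise not included here; the following is a minimal sketch of how an AST-based extractor for (function, docstring) pairs might look using Python's standard `ast` module. Function and field names are illustrative and may differ from the actual DocuMint tooling.

```python
# Minimal sketch of AST-based (function, docstring) extraction; illustrative
# only, the real DocuMint extraction pipeline may differ.
import ast

def extract_pairs(source: str):
    """Yield {'instruction': function source, 'response': docstring} dicts."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            docstring = ast.get_docstring(node)
            if docstring:  # keep only documented functions
                yield {
                    "instruction": ast.get_source_segment(source, node),
                    "response": docstring,
                }

if __name__ == "__main__":
    sample = 'def add(a, b):\n    """Return the sum of a and b."""\n    return a + b\n'
    for pair in extract_pairs(sample):
        print(pair)
```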

## Dataset Structure

Each entry in the dataset follows this structure:

```json
{
  "instruction": "def get_dataloaders(accelerator: Accelerator, batch_size: int = 16):\n \"\"\"\n Creates a set of `DataLoader`s for the `glue` dataset,\n using \"bert-base-cased\" as the tokenizer.\n\n Args:\n accelerator (`Accelerator`):\n An `Accelerator` object\n batch_size (`int`, *optional*):\n The batch size for the train and validation DataLoaders.\n \"\"\"\n tokenizer = AutoTokenizer.from_pretrained(\"bert-base-cased\")\n datasets = load_dataset(\"glue\", \"mrpc\")\n\n def tokenize_function(examples):\n # max_length=None => use the model max length (it's actually the default)\n outputs = tokenizer(examples[\"sentence1\"], examples[\"sentence2\"], truncation=True, max_length=None)\n return outputs\n\n # Apply the method we just defined to all the examples in all the splits of the dataset\n # starting with the main process first:\n with accelerator.main_process_first():\n tokenized_datasets = datasets.map(\n tokenize_function,\n batched=True,\n remove_columns=[\"idx\", \"sentence1\", \"sentence2\"],\n )\n\n # We also rename the 'label' column to 'labels' which is the expected name for labels by the models of the\n # transformers library\n tokenized_datasets = tokenized_datasets.rename_column(\"label\", \"labels\")\n\n def collate_fn(examples):\n # For Torchxla, it's best to pad everything to the same length or training will be very slow.\n max_length = 128 if accelerator.distributed_type == DistributedType.XLA else None\n # When using mixed precision we want round multiples of 8/16\n if accelerator.mixed_precision == \"fp8\":\n pad_to_multiple_of = 16\n\t elif accelerator.mixed_precision != \"no\":\n pad_to_multiple_of = 8\n\t\t else:\n pad_to_multiple_of = None\n\n return tokenizer.pad(\n examples,\n padding=\"longest\",\n max_length=max_length,\n pad_to_multiple_of=pad_to_multiple_of,\n return_tensors=\"pt\",\n )\n\n # Instantiate dataloaders.\n train_dataloader = DataLoader(\n tokenized_datasets[\"train\"], shuffle=True, collate_fn=collate_fn, batch_size=batch_size, drop_last=True\n )\n eval_dataloader = DataLoader(\n tokenized_datasets[\"validation\"],\n shuffle=False,\n collate_fn=collate_fn,\n batch_size=EVAL_BATCH_SIZE,\n drop_last=(accelerator.mixed_precision == \"fp8\"),\n )\n\n return train_dataloader, eval_dataloader",
  "response": "Creates a set of `DataLoader`s for the `glue` dataset,\nusing \"bert-base-cased\" as the tokenizer.\n\nArgs:\n accelerator (`Accelerator`):\n An `Accelerator` object\n batch_size (`int`, *optional*):\n The batch size for the train and validation DataLoaders."
}
```
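
Entries can be inspected like any other JSON/JSON Lines data. The snippet below is a small loading sketch using the Hugging Face `datasets` library, where `documint.jsonl` is a placeholder file name rather than an official path.

```python
# Sketch: load instruction/response pairs with the `datasets` library.
# "documint.jsonl" is a placeholder file name, not an official dataset path.
from datasets import load_dataset

ds = load_dataset("json", data_files="documint.jsonl", split="train")
example = ds[0]
print(example["instruction"][:200])  # raw Python function source
print(example["response"])           # target docstring
```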

## Usage

This dataset was used to train the DocuMint model, a fine-tuned variant of Google's CodeGemma-2B. For more information on the model and its training procedure, please refer to the model card.

### Dataset Use Cases

The DocuMint dataset can be used for various code documentation and natural language processing tasks. Some potential use cases include:

- Training and evaluating models for automatic docstring generation
- Studying the characteristics and patterns of high-quality docstrings
- Analyzing the relationship between code structure and its corresponding documentation
- Developing tools for assisting developers in writing effective docstrings
- Conducting research on the challenges and best practices in code documentation

Researchers, developers, and organizations interested in improving code documentation quality and automating the process of docstring generation can benefit from this dataset.
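
For the first use case, training a docstring generator, each pair can be rendered into a prompt/target pair for supervised fine-tuning. The template below is only an illustrative choice and is not necessarily the formatting used to train DocuMint.

```python
# Sketch: convert an instruction/response entry into prompt/target text for
# supervised fine-tuning. The prompt template is illustrative only.
def to_training_text(entry: dict) -> dict:
    prompt = (
        "Write a high-quality docstring for the following Python function.\n\n"
        f"{entry['instruction']}\n\nDocstring:\n"
    )
    return {"prompt": prompt, "target": entry["response"]}

example = {
    "instruction": "def add(a, b):\n    return a + b",
    "response": "Return the sum of a and b.",
}
print(to_training_text(example)["prompt"])
print(to_training_text(example)["target"])
```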

## Citation

**BibTeX:**

(TODO)
```bibtex
@misc{poudel2024documint,
      title={DocuMint: Docstring Generation for Python using Small Language Models},
      author={Bibek Poudel* and Adam Cook* and Sekou Traore* and Shelah Ameli*},
      year={2024},
}
```

## Dataset Card Contact

- For questions or more information, please contact: {bpoudel3,acook46,staore1,oameli}@vols.utk.edu