---
license: other
language:
- code
- en
task_categories:
- text-generation
- summarization
tags:
- code
- commit_message_generation
pretty_name: CommitChronicle
size_categories:
- 1M<n<10M
---

# CommitChronicle

🔍 For further details, please refer to:

> * **Paper**: TODO
> * **Repository**: [https://github.com/JetBrains-Research/commit_message_generation](https://github.com/JetBrains-Research/commit_message_generation)

## Dataset Creation

We used the [GitHub Search](https://seart-ghs.si.usi.ch/) tool and the official GitHub API to select relevant repositories with permissive licenses (Apache, BSD 3-clause, MIT). On February 9th, 2023, we collected all commits made since 2017 from these repositories via [PyDriller](https://github.com/ishepard/pydriller). We then extensively cleaned the data: we filtered outliers, dropped commits from bot authors, and dropped duplicates.

Note: to avoid disclosing personal information, we replaced the commit authors' names and emails with unique identifiers.

## Dataset Structure

### Data Instances

Each data instance in the dataset is a commit. [A commit example](https://github.com/saridormi/commit_chronicle/commit/a7fb3b64184f0af5b08285cce14b9139baa94049) would look like the following:

```
{
  'repo': 'saridormi/commit_chronicle',
  'hash': 'a7fb3b64184f0af5b08285cce14b9139baa94049',
  'author': 123,
  'date': '05.07.2021 15:10:07',
  'timezone': 0,
  'license': 'MIT License',
  'language': 'Jupyter Notebook',
  'message': 'Add license badge to readme',
  'original_message': 'Add license badge to readme',
  'mods': [{'change_type': 'MODIFY',
            'new_path': 'README.md',
            'old_path': 'README.md',
            'diff': '@@ -1,6 +1,6 @@\n'
                    ' # Commits dataset\n'
                    ' \n'
                    '-> :heavy_exclamation_mark: **TODO:** license\n'
                    '+![GitHub](https://img.shields.io/github/license/saridormi/commits_dataset?style=for-the-badge)\n'}],
}
```

### Data Fields

Each example has the following fields:

| **Field**          | **Description**                              |
|:------------------:|:--------------------------------------------:|
| `repo`             | Commit repository.                           |
| `hash`             | Commit hash.                                 |
| `author`           | Unique identifier of the commit author.      |
| `date`             | Commit date (from the author).               |
| `timezone`         | Commit timezone (from the author).           |
| `license`          | Commit repository's license.                 |
| `language`         | Commit repository's main language.           |
| `message`          | Commit message (after processing).           |
| `original_message` | Commit message (without any processing).     |
| `mods`             | List of file modifications from the commit.  |

Each file modification has the following fields:

| **Field**     | **Description**                                                                                        |
|:-------------:|:------------------------------------------------------------------------------------------------------:|
| `change_type` | Type of change to the current file. One of: `ADD`, `COPY`, `RENAME`, `DELETE`, `MODIFY` or `UNKNOWN`.   |
| `old_path`    | Path to the file before the change (might be empty).                                                    |
| `new_path`    | Path to the file after the change (might be empty).                                                     |
| `diff`        | `git diff` for the current file.                                                                        |

### Data Splits

We provide the following configurations:

* `default`
  * `train`: full training split (7.66M commits)
  * `validation`: full validation split (1.55M commits)
  * `test`: full test split (1.49M commits)
* `subset_cmg`
  * `test`: test subset used for experiments with CMG approaches (204k commits)
* `subset_llm`
  * `test`: test subset used for experiments with an LLM (4k commits)

A minimal loading example is shown below.
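The configurations and splits above can be loaded with the 🤗 [`datasets`](https://huggingface.co/docs/datasets) library. The snippet below is a minimal sketch; the dataset ID `JetBrains-Research/commit-chronicle` is an assumption and should be replaced with the actual repository ID if it differs. Streaming mode avoids downloading a full split at once.

```
from datasets import load_dataset

# Stream the validation split of the `default` configuration.
# NOTE: the dataset ID below is an assumption; replace it with the
# actual Hugging Face repository ID if it differs.
dataset = load_dataset(
    "JetBrains-Research/commit-chronicle",
    "default",
    split="validation",
    streaming=True,
)

# Each example is a single commit; `mods` holds the per-file changes.
commit = next(iter(dataset))
print(commit["repo"], commit["hash"], commit["message"])
for mod in commit["mods"]:
    print(mod["change_type"], mod["old_path"], "->", mod["new_path"])
    print(mod["diff"])
```

The `subset_cmg` and `subset_llm` configurations can be loaded the same way with `split="test"`.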
## Considerations for Using the Data

> Adopted from [the Stack](https://huggingface.co/datasets/bigcode/the-stack).

The released dataset may contain sensitive information such as emails, IP addresses, and API/SSH keys that have previously been published to public repositories on GitHub. In the event that the dataset contains personal information, researchers should only use public, non-personal information in support of conducting and publishing their open-access research. Personal information should not be used for spamming purposes, including sending unsolicited emails or selling personal information.

The dataset is a collection of commits from repositories with various licenses. Any use of all or part of the code gathered in this dataset must abide by the terms of the original licenses, including attribution clauses when relevant. We facilitate this by providing provenance information for each data point.

## Citation

```
TODO
```