saridormi committed on
Commit edba662
1 Parent(s): b86584f

Update dataset card

Files changed (1):
  1. README.md +110 -2

README.md CHANGED
@@ -1,4 +1,17 @@
 ---
+license: other
+language:
+- code
+- en
+task_categories:
+- text-generation
+- summarization
+tags:
+- code
+- commit_message_generation
+pretty_name: CommitChronicle
+size_categories:
+- 1M<n<10M
 dataset_info:
 - config_name: default
   features:
@@ -132,6 +145,101 @@ configs:
   - split: test
     path: subset_llm/test-*
 ---
-# Dataset Card for "commit-chronicle1"
+# 📜 CommitChronicle 🔮
 
-[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+This is the dataset for commit message generation (and/or completion), introduced in the paper "From Commit Message Generation to History-Aware Commit Message Completion", ASE 2023.
+
+Its key features:
+* *large-scale and multilingual*: contains 10.7M commits from 11.9k GitHub repositories in 20 programming languages;
+* *diverse*: avoids restrictive filtering on commit message or commit diff structure;
+* *suitable for experiments with commit history*: provides metadata about commit authors and dates and uses a split-by-project strategy.
+
+## Dataset Creation
+
+> 🔍 For further details, please refer to:
+> * **Paper**: TODO
+> * **Repository**: [https://github.com/JetBrains-Research/commit_message_generation](https://github.com/JetBrains-Research/commit_message_generation)
+
+We used the [GitHub Search](https://seart-ghs.si.usi.ch/) tool and the official GitHub API to select relevant repositories with permissive licenses (Apache, BSD 3-clause, MIT).
+On February 9th, 2023, we collected all commits made since 2017 from these repositories via [PyDriller](https://github.com/ishepard/pydriller).
+Next, we extensively cleaned the data: we filtered outliers, dropped commits from bot authors, and dropped duplicates. Note: to avoid disclosing personal information, we replaced the commit authors' names and emails with unique identifiers.
+
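The cleaning steps just described can be sketched roughly as follows. This is illustrative only, not the authors' actual pipeline: the bot heuristic, the deduplication key, and the `author_name` field are all hypothetical.

```python
import hashlib

def is_bot(author_name: str) -> bool:
    """Heuristic bot check (illustrative): many bot accounts end in '[bot]' or 'bot'."""
    name = author_name.lower()
    return name.endswith("[bot]") or name.endswith("bot")

def dedup_key(commit: dict) -> str:
    """Duplicate key (illustrative): hash of the message plus the concatenated diffs."""
    diffs = "".join(mod["diff"] for mod in commit["mods"])
    return hashlib.sha256((commit["message"] + diffs).encode("utf-8")).hexdigest()

def clean(commits: list[dict]) -> list[dict]:
    """Drop bot-authored commits and duplicates, keeping the first occurrence."""
    seen, kept = set(), []
    for commit in commits:
        if is_bot(commit["author_name"]):
            continue
        key = dedup_key(commit)
        if key in seen:
            continue
        seen.add(key)
        kept.append(commit)
    return kept
```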
+## Dataset Structure
+
+### Data Instances
+
+Each data instance in the dataset is a commit. [A commit example](https://github.com/saridormi/commit_chronicle/commit/a7fb3b64184f0af5b08285cce14b9139baa94049) would look like the following:
+```
+{
+  'repo': 'saridormi/commit_chronicle',
+  'hash': 'a7fb3b64184f0af5b08285cce14b9139baa94049',
+  'author': 123,
+  'date': '05.07.2021 15:10:07',
+  'timezone': 0,
+  'license': 'MIT License',
+  'language': 'Jupyter Notebook',
+  'message': 'Add license badge to readme',
+  'original_message': 'Add license badge to readme',
+  'mods': [{'change_type': 'MODIFY',
+            'new_path': 'README.md',
+            'old_path': 'README.md',
+            'diff': '@@ -1,6 +1,6 @@\n'
+                    ' # Commits dataset\n'
+                    ' \n'
+                    '-> :heavy_exclamation_mark: **TODO:** license\n'
+                    '+![GitHub](https://img.shields.io/github/license/saridormi/commits_dataset?style=for-the-badge)\n'}],
+}
+```
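For commit message generation, the per-file `mods` typically need to be flattened into a single model input. A minimal sketch (the helper and its formatting are illustrative, not part of the dataset or the paper's pipeline):

```python
def mods_to_input(mods: list[dict]) -> str:
    """Flatten per-file modifications into one diff string, prefixing
    each file with its change type and old/new paths (illustrative format)."""
    parts = []
    for mod in mods:
        header = f"{mod['change_type']} {mod['old_path']} -> {mod['new_path']}"
        parts.append(header + "\n" + mod["diff"])
    return "\n".join(parts)
```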
+### Data Fields
+
+Each example has the following fields:
+
+| **Field**          | **Description**                             |
+|:------------------:|:-------------------------------------------:|
+| `repo`             | Commit repository.                          |
+| `hash`             | Commit hash.                                |
+| `author`           | Unique identifier of the commit author.     |
+| `date`             | Commit date (from author).                  |
+| `timezone`         | Commit timezone (from author).              |
+| `license`          | Commit repository's license.                |
+| `language`         | Commit repository's main language.          |
+| `message`          | Commit message (after processing).          |
+| `original_message` | Commit message (without any processing).    |
+| `mods`             | List of file modifications from the commit. |
+
+Each file modification has the following fields:
+
+| **Field**     | **Description**                                                                                        |
+|:-------------:|:------------------------------------------------------------------------------------------------------:|
+| `change_type` | Type of change to the current file. One of: `ADD`, `COPY`, `RENAME`, `DELETE`, `MODIFY` or `UNKNOWN`.  |
+| `old_path`    | Path to the file before the change (might be empty).                                                   |
+| `new_path`    | Path to the file after the change (might be empty).                                                    |
+| `diff`        | `git diff` for the current file.                                                                       |
+
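Note that the `date` field uses a day-first `DD.MM.YYYY HH:MM:SS` format rather than ISO 8601. A minimal parsing sketch (the `timezone` field is a separate offset and is deliberately not applied here):

```python
from datetime import datetime

# `date` field from the example instance above; day-first format.
raw = "05.07.2021 15:10:07"
parsed = datetime.strptime(raw, "%d.%m.%Y %H:%M:%S")
print(parsed.isoformat())  # 2021-07-05T15:10:07
```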
+### Data Splits
+
+We provide the following configurations:
+* `default`
+  * `train`: full training split (7.66M commits)
+  * `validation`: full validation split (1.55M commits)
+  * `test`: full test split (1.49M commits)
+* `subset_cmg`
+  * `test`: test subset used for experiments with CMG approaches (204k commits)
+* `subset_llm`
+  * `test`: test subset used for experiments with an LLM (4k commits)
+
+## Considerations for Using the Data
+
+> Adopted from [the Stack](https://huggingface.co/datasets/bigcode/the-stack).
+
+The released dataset may contain sensitive information such as emails, IP addresses, and API/SSH keys that have previously been published to public repositories on GitHub. In the event that the dataset contains personal information, researchers should only use public, non-personal information in support of conducting and publishing their open-access research.
+Personal information should not be used for spamming purposes, including sending unsolicited emails or selling personal information.
+
+The dataset is a collection of commits from repositories with various licenses. Any use of all or part of the code gathered in this dataset must abide by the terms of the original licenses, including attribution clauses when relevant. We facilitate this by providing provenance information for each data point.
+
+## Citation
+
+```
+TODO
+```