dataset_info:
  - config_name: default
    features:
      - name: hash
        dtype: string
      - name: repo
        dtype: string
      - name: date
        dtype: string
      - name: license
        dtype: string
      - name: message
        dtype: string
      - name: mods
        list:
          - name: change_type
            dtype: string
          - name: old_path
            dtype: string
          - name: new_path
            dtype: string
          - name: diff
            dtype: string
    splits:
      - name: test
        num_examples: 163
  - config_name: labels
    features:
      - name: hash
        dtype: string
      - name: repo
        dtype: string
      - name: date
        dtype: string
      - name: license
        dtype: string
      - name: message
        dtype: string
      - name: label
        dtype: int8
      - name: comment
        dtype: string
    splits:
      - name: test
        num_bytes: 272359
        num_examples: 858
configs:
  - config_name: default
    data_files:
      - split: test
        path: commitchronicle-py-long/test-*
  - config_name: labels
    data_files:
      - split: test
        path: commitchronicle-py-long-labels/test-*
license: apache-2.0

🏟️ Long Code Arena (Commit message generation)

This is the benchmark for the Commit Message Generation task from the 🏟️ Long Code Arena suite.

The dataset is a manually curated subset of the Python test set from the 🤗 CommitChronicle dataset, tailored for larger commits.

All the repositories are published under permissive licenses (MIT, Apache-2.0, and BSD-3-Clause). The datapoints can be removed upon request.

How-to

from datasets import load_dataset

dataset = load_dataset("JetBrains-Research/lca-cmg", split="test")

Note that all the data is contained in the test split.

Note: working with the Git repositories under the repos directory is not supported via 🤗 Datasets. See the Git Repositories section for more details.

About

Overview

In total, there are 163 commits from 34 repositories. For length statistics, refer to the notebook in our repository.

Dataset Structure

The dataset contains two kinds of data: data about each commit (under the commitchronicle-py-long folder) and compressed Git repositories (under the repos folder).

Commits

Each example has the following fields:

Field    Description
repo     Commit repository.
hash     Commit hash.
date     Commit date.
license  Commit repository's license.
message  Commit message.
mods     List of file modifications from the commit.

Each file modification has the following fields:

Field        Description
change_type  Type of change to the current file. One of: ADD, COPY, RENAME, DELETE, MODIFY, UNKNOWN.
old_path     Path to the file before the change (might be empty).
new_path     Path to the file after the change (might be empty).
diff         Git diff for the current file.
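For CMG-style prompting, the per-file modifications are often flattened into a single diff string. Below is a minimal sketch of such preprocessing; the header format is an illustrative choice of ours, not a convention defined by this benchmark:

```python
def mods_to_diff(mods: list[dict]) -> str:
    """Concatenate per-file diffs from `mods` into one git-diff-like string."""
    parts = []
    for mod in mods:
        change = mod["change_type"]
        if change == "ADD":
            header = f"new file {mod['new_path']}"
        elif change == "DELETE":
            header = f"deleted file {mod['old_path']}"
        elif change == "RENAME":
            header = f"rename from {mod['old_path']} to {mod['new_path']}"
        else:  # MODIFY, COPY, UNKNOWN
            header = mod["new_path"] or mod["old_path"]
        parts.append(header + "\n" + mod["diff"])
    return "\n".join(parts)
```

For example, `mods_to_diff(dataset[0]["mods"])` would yield the full diff of the first commit in the loaded test split.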

Data point example:

{'hash': 'b76ed0db81b3123ede5dc5e5f1bddf36336f3722',
 'repo': 'apache/libcloud',
 'date': '05.03.2022 17:52:34',
 'license': 'Apache License 2.0',
 'message': 'Add tests which verify that all OpenStack driver can be instantiated\nwith all the supported auth versions.\nNOTE: Those tests will fail right now due to the regressions being\nintroduced recently which breaks auth for some versions.',
 'mods': [{'change_type': 'MODIFY',
    'new_path': 'libcloud/test/compute/test_openstack.py',
    'old_path': 'libcloud/test/compute/test_openstack.py',
    'diff': '@@ -39,6 +39,7 @@ from libcloud.utils.py3 import u\n<...>'}],
}    

Git Repositories

The compressed Git repositories for all the commits in this benchmark are stored under the repos directory.

Working with the Git repositories under the repos directory is not supported directly via 🤗 Datasets. You can use the huggingface_hub package to download the repositories. Sample code is provided below:

import os
import tarfile

from huggingface_hub import hf_hub_download, list_repo_tree


data_dir = "..."  # replace with a path to where you want to store repositories locally

for repo_file in list_repo_tree("JetBrains-Research/lca-commit-message-generation", "repos", repo_type="dataset"):
    file_path = hf_hub_download(
        repo_id="JetBrains-Research/lca-commit-message-generation",
        filename=repo_file.path,
        repo_type="dataset",
        local_dir=data_dir,
    )

    # Each repository is stored as a .tar.gz archive; extract it next to the downloads.
    with tarfile.open(file_path, "r:gz") as tar:
        tar.extractall(path=os.path.join(data_dir, "extracted_repos"))

For convenience, we also provide a full list of files in paths.json.

After you download and extract the repositories, you can work with each repository either via Git or via Python libraries like GitPython or PyDriller.
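For instance, once a repository is extracted, a commit can be inspected with plain subprocess calls to Git. This is a minimal sketch that assumes the git CLI is installed; GitPython and PyDriller offer richer APIs for the same operations:

```python
import subprocess


def git_output(repo_dir: str, *args: str) -> str:
    """Run a git command inside repo_dir and return its stdout."""
    result = subprocess.run(
        ["git", "-C", repo_dir, *args],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()


def commit_message(repo_dir: str, commit_hash: str) -> str:
    """Return the full message of a commit, e.g. to compare with the `message` field."""
    return git_output(repo_dir, "log", "-1", "--format=%B", commit_hash)


def checkout_before(repo_dir: str, commit_hash: str) -> None:
    """Check out the state before the commit, i.e. the context a CMG model would see."""
    git_output(repo_dir, "checkout", "--detach", f"{commit_hash}^")
```

Here `checkout_before` uses the `<hash>^` revision syntax to reference the commit's first parent; it will fail for a repository's root commit, which has no parent.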

🏷️ Extra: commit labels

To facilitate further research, we additionally provide manual labels for all 858 commits that passed the initial filtering. The final version of the dataset described above consists of the commits labeled either 4 or 5.

How-to

from datasets import load_dataset

dataset = load_dataset("JetBrains-Research/lca-cmg", "labels", split="test")

Note that all the data is contained in the test split.

About

Dataset Structure

Each example has the following fields:

Field    Description
repo     Commit repository.
hash     Commit hash.
date     Commit date.
license  Commit repository's license.
message  Commit message.
label    Label of the current commit as a target for the CMG task.
comment  Comment explaining the label (optional, might be empty).

Labels are on a 1–5 scale, where:

  • 1 – strong no
  • 2 – weak no
  • 3 – unsure
  • 4 – weak yes
  • 5 – strong yes
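The relationship between the two configs can thus be reproduced by keeping only the commits labeled 4 or 5. Below is a sketch over plain dicts; with 🤗 Datasets you would apply the same predicate via `dataset.filter` on the loaded labels config:

```python
def keep_good_targets(rows: list[dict], threshold: int = 4) -> list[dict]:
    """Keep commits whose label is `threshold` or higher (4 = weak yes, 5 = strong yes)."""
    return [row for row in rows if row["label"] >= threshold]


# Toy rows mirroring the labels schema (hashes are placeholders, not real datapoints).
labeled = [
    {"hash": "aaa", "label": 1, "comment": "no way to know the version"},
    {"hash": "bbb", "label": 4, "comment": ""},
    {"hash": "ccc", "label": 5, "comment": ""},
]
print([row["hash"] for row in keep_good_targets(labeled)])  # → ['bbb', 'ccc']
```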

Data point example:

{'hash': '1559a4c686ddc2947fc3606e1c4279062cc9480f',
 'repo': 'appscale/gts',
 'date': '15.07.2018 21:00:39',
 'license': 'Apache License 2.0',
 'message': 'Add auto_id_policy and logs_path flags\n\nThese changes were introduced in the 1.7.5 SDK.',
 'label': 1,
 'comment': 'no way to know the version'}

Citing

@article{bogomolov2024long,
  title={Long Code Arena: a Set of Benchmarks for Long-Context Code Models},
  author={Bogomolov, Egor and Eliseeva, Aleksandra and Galimzyanov, Timur and Glukhov, Evgeniy and Shapkin, Anton and Tigina, Maria and Golubev, Yaroslav and Kovrigin, Alexander and van Deursen, Arie and Izadi, Maliheh and Bryksin, Timofey},
  journal={arXiv preprint arXiv:2406.11612},
  year={2024}
}

You can find the paper on arXiv: https://arxiv.org/abs/2406.11612