---
dataset_info:
- config_name: c
features:
- name: after
dtype: string
- name: before
dtype: string
- name: diff
dtype: string
- name: instruction
dtype: string
splits:
- name: train
num_bytes: 3072688.515
num_examples: 1590
download_size: 1704183
dataset_size: 3072688.515
- config_name: c++
features:
- name: after
dtype: string
- name: before
dtype: string
- name: diff
dtype: string
- name: instruction
dtype: string
splits:
- name: train
num_bytes: 3667905.02
num_examples: 1690
download_size: 1911542
dataset_size: 3667905.02
- config_name: go
features:
- name: after
dtype: string
- name: before
dtype: string
- name: diff
dtype: string
- name: instruction
dtype: string
- name: license
dtype: string
- name: repos
dtype: string
splits:
- name: train
num_bytes: 4305982.38
num_examples: 1752
download_size: 2177230
dataset_size: 4305982.38
- config_name: java
features:
- name: after
dtype: string
- name: before
dtype: string
- name: diff
dtype: string
- name: instruction
dtype: string
- name: license
dtype: string
- name: repos
dtype: string
splits:
- name: train
num_bytes: 4696621.306
num_examples: 1756
download_size: 2125081
dataset_size: 4696621.306
- config_name: javascript
features:
- name: after
dtype: string
- name: before
dtype: string
- name: diff
dtype: string
- name: instruction
dtype: string
- name: license
dtype: string
- name: repos
dtype: string
splits:
- name: train
num_bytes: 3971779.3755
num_examples: 1711
download_size: 2056197
dataset_size: 3971779.3755
- config_name: php
features:
- name: after
dtype: string
- name: before
dtype: string
- name: diff
dtype: string
- name: instruction
dtype: string
splits:
- name: train
num_bytes: 3720549.4
num_examples: 1745
download_size: 1849519
dataset_size: 3720549.4
- config_name: python
features:
- name: after
dtype: string
- name: before
dtype: string
- name: diff
dtype: string
- name: instruction
dtype: string
- name: license
dtype: string
- name: repos
dtype: string
splits:
- name: train
num_bytes: 4067844.83
num_examples: 1645
download_size: 2069623
dataset_size: 4067844.83
- config_name: ruby
features:
- name: after
dtype: string
- name: before
dtype: string
- name: diff
dtype: string
- name: instruction
dtype: string
- name: license
dtype: string
- name: repos
dtype: string
splits:
- name: train
num_bytes: 4480084.455
num_examples: 1617
download_size: 2117992
dataset_size: 4480084.455
- config_name: rust
features:
- name: after
dtype: string
- name: before
dtype: string
- name: diff
dtype: string
- name: instruction
dtype: string
splits:
- name: train
num_bytes: 3526661.9175
num_examples: 1695
download_size: 1852962
dataset_size: 3526661.9175
- config_name: scala
features:
- name: after
dtype: string
- name: before
dtype: string
- name: diff
dtype: string
- name: instruction
dtype: string
splits:
- name: train
num_bytes: 2888898.6925
num_examples: 1465
download_size: 1457525
dataset_size: 2888898.6925
- config_name: shell
features:
- name: after
dtype: string
- name: before
dtype: string
- name: diff
dtype: string
- name: instruction
dtype: string
splits:
- name: train
num_bytes: 2387398.504
num_examples: 1402
download_size: 1464965
dataset_size: 2387398.504
- config_name: swift
features:
- name: after
dtype: string
- name: before
dtype: string
- name: diff
dtype: string
- name: instruction
dtype: string
splits:
- name: train
num_bytes: 3625333.488
num_examples: 1722
download_size: 1754850
dataset_size: 3625333.488
- config_name: typescript
features:
- name: after
dtype: string
- name: before
dtype: string
- name: diff
dtype: string
- name: instruction
dtype: string
splits:
- name: train
num_bytes: 3630079.695
num_examples: 1705
download_size: 1893196
dataset_size: 3630079.695
configs:
- config_name: c
data_files:
- split: train
path: c/train-*
- config_name: c++
data_files:
- split: train
path: c++/train-*
- config_name: go
data_files:
- split: train
path: go/train-*
- config_name: java
data_files:
- split: train
path: java/train-*
- config_name: javascript
data_files:
- split: train
path: javascript/train-*
- config_name: php
data_files:
- split: train
path: php/train-*
- config_name: python
data_files:
- split: train
path: python/train-*
- config_name: ruby
data_files:
- split: train
path: ruby/train-*
- config_name: rust
data_files:
- split: train
path: rust/train-*
- config_name: scala
data_files:
- split: train
path: scala/train-*
- config_name: shell
data_files:
- split: train
path: shell/train-*
- config_name: swift
data_files:
- split: train
path: swift/train-*
- config_name: typescript
data_files:
- split: train
path: typescript/train-*
---
This is a dataset built from [CommitPackFT](https://huggingface.co/datasets/bigcode/commitpackft), providing ~1500 commits with their diffs for each of the following programming languages:

- Python
- JavaScript
- TypeScript
- Go
- Ruby
- Java
- PHP
- C
- C++
- Rust
- Swift
- Scala
- Bash (the `shell` config)

The goal of this dataset is to evaluate the ability of models to retrieve the correct diff given its instruction (the commit message).
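Each configuration contains `before`, `after`, `diff`, and `instruction` columns (some configs also carry `license` and `repos`). A minimal sketch of loading a single language split with the `datasets` library, using the `python` config purely for illustration:

```py
import datasets

# Each language is its own configuration; "python" is used here as an example.
ds = datasets.load_dataset("cassanof/CodeEditSearch", "python", split="train")

print(ds.column_names)         # after, before, diff, instruction (+ license/repos for some configs)
print(ds[0]["instruction"])    # the commit message describing the edit
print(ds[0]["diff"][:300])     # unified diff between `before` and `after`
```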
### Code To Produce Dataset
Below is the code to reproduce this dataset:
```py
import datasets
from tqdm import tqdm
import difflib

outrepo = "cassanof/CodeEditSearch"

LANGS = ["python", "javascript", "go", "ruby", "java", "php", "c", "c++", "rust", "swift",
         "typescript", "scala", "kotlin", "r", "perl", "haskell", "lua", "shell", "dart", "julia"]

processed = []


def get_udiff(a, b):
    # Unified diff between the old and new file contents.
    a = a.splitlines()
    b = b.splitlines()
    diff = difflib.unified_diff(a, b, lineterm="")
    return "\n".join(diff)


for lang in tqdm(LANGS):
    print(f"Processing {lang}")
    ds = datasets.load_dataset("bigcode/commitpackft", lang, split="train")
    ds = ds.shuffle(seed=42)
    print(f"{lang}: {len(ds)}")
    # Keep only small files and drop commits with empty before/after contents.
    ds = ds.filter(lambda x: len(
        x["new_contents"] + x["old_contents"]) < 2500, num_proc=8)
    ds = ds.filter(lambda x: len(x["new_contents"].strip()) > 0 and len(
        x["old_contents"].strip()) > 0, num_proc=8)
    if len(ds) < 2000:
        print(f"Skipping {lang} due to insufficient data")
        continue
    print(f"{lang} after: {len(ds)}")
    ds = ds.select(range(2000))
    diffs = [get_udiff(a, b)
             for a, b in zip(ds["old_contents"], ds["new_contents"])]
    ds = {
        "after": ds["new_contents"],
        "before": ds["old_contents"],
        "diff": diffs,
        "instruction": ds["message"],
    }
    ds = datasets.Dataset.from_dict(ds)
    # Require non-trivial diffs (more than 10 lines, including diff headers).
    ds = ds.filter(lambda x: len(x["diff"].splitlines()) > 10, num_proc=8)
    print(f" ******* Final {lang}: {len(ds)} *******")
    ds.push_to_hub(outrepo, lang)
    processed.append(lang)

print(processed)
```
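As a hedged illustration of the stated retrieval goal (not an official evaluation protocol), one could treat each `instruction` as a query and each `diff` as a document, score them with a simple TF-IDF baseline, and report recall@k:

```py
import numpy as np
import datasets
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Illustrative TF-IDF retrieval baseline; any embedding model could be swapped in.
ds = datasets.load_dataset("cassanof/CodeEditSearch", "python", split="train")
queries, docs = ds["instruction"], ds["diff"]

vectorizer = TfidfVectorizer()
doc_vecs = vectorizer.fit_transform(docs)        # index the diffs
query_vecs = vectorizer.transform(queries)       # vectorize the instructions

sims = cosine_similarity(query_vecs, doc_vecs)   # shape: (n_queries, n_docs)
ranks = np.argsort(-sims, axis=1)                # best-matching diffs first

# Recall@10: fraction of instructions whose own diff (same row index) ranks in the top 10.
recall_at_10 = np.mean([i in ranks[i, :10] for i in range(len(queries))])
print(f"TF-IDF recall@10: {recall_at_10:.3f}")
```

A dense retriever can replace the TF-IDF vectors without changing the recall computation.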