Are the contents downloaded from the S3 softwareheritage bucket exactly the same as those used in training?

#6
by yury-zyphra - opened

E.g., the paper mentions that for source code, with 50% probability the template the model sees is `<repo_name>reponame<file_sep>filepath1\ncode1<file_sep>filepath2\ncode2 ... <|endoftext|>` (while the remaining samples do not include the repo name).

When I look at the contents at random, it seems they are not prepended with anything. It appears to me that these are just the raw files, and that to actually construct training samples, I'll need to combine contents from them myself.

BigCode org

@yury-zyphra the training samples can be generated like so:

```python
import random

def concat_files(rec):
    files = rec["files"]
    random.shuffle(files)  # shuffles in place, so rec["files"] is shuffled too
    if random.random() < 0.5:
        # repo-aware variant: prepend the repo name and each file's path
        text = f"<repo_name>{rec['repo_name']}"
        for f in files:
            text += f"<file_sep>{f['path']}\n{f['content']}"
    else:
        # plain variant: concatenate file contents without repo name or paths
        text = ""
        for f in files:
            text += f"<file_sep>{f['content']}"
    # note: the template's <|endoftext|> terminator is not added here
    rec["content"] = text
    return rec
```
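
For example, a usage sketch with the `datasets` library (dataset name assumed from this thread; note the `-ids` datasets only carry blob IDs, so each file's `content` field has to be filled in from S3 first, as discussed below):

```python
from datasets import load_dataset

# streaming avoids downloading the full dataset up front
ds = load_dataset("bigcode/the-stack-v2-train-smol-ids", split="train", streaming=True)
ds = ds.map(concat_files)
```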

Note that to train StarCoder 2 we additionally removed PII (names, emails, API keys, etc.) from the files, while the original Software Heritage files still contain that information.
A version of the-stack-v2-train-full with the PII-redacted samples included will be available soon as a separate dataset.

Hey @anton-l ! Congrats on this great work. Would you be able to provide a rough timeline for the availability of the PII-redacted data?

I am a beginner. How do I view/download the data that was used for training? As of now, it seems this dataset contains a mapping of content IDs, not the content itself. How does one view/download/filter the training data?
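
In case it helps, here is a minimal sketch of fetching one file's content by its blob ID. It assumes the public softwareheritage bucket mentioned in this thread stores gzip-compressed blobs under `content/<blob_id>` and allows anonymous access (the exact layout is described in the dataset card):

```python
import gzip

import boto3
from botocore import UNSIGNED
from botocore.config import Config

# anonymous (unsigned) S3 client for the public bucket
s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

def download_content(blob_id: str, src_encoding: str = "utf-8") -> str:
    # assumption: blobs are stored gzip-compressed under content/<blob_id>
    obj = s3.get_object(Bucket="softwareheritage", Key=f"content/{blob_id}")
    return gzip.decompress(obj["Body"].read()).decode(src_encoding)
```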

Great work! Just want to confirm: the released bigcode/the-stack-v2-train-full/smol-ids sets, despite not being PII-filtered, have gone through decontamination, malware removal, opt-out filtering, etc., right? TIA!
