Llama 3 self-align experiments
Replicating the StarCoder2-Instruct self-align pipeline on Llama-3-8B, with some tweaks: https://huggingface.co/blog/sc2-instruct
Note: the Docker container used to generate both datasets, data-generation-results & functions-filtered.
* Pull it down with `docker pull registry.hf.space/muellerzr-llama-3-self-align-data-gen-docker:latest`
* Run it via `docker run --gpus all -it registry.hf.space/muellerzr-llama-3-self-align-data-gen-docker:latest`
* From there, follow the steps laid out at https://github.com/muellerzr/llama-3-8b-self-align/tree/main?tab=readme-ov-file#data-generation-pipeline (including `seed_gathering`)
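Inside the container, each generation step drives Llama-3-8B through vLLM's offline API; the actual prompts and scripts live in the repo linked above. A minimal sketch of what one such generation call looks like, assuming the `meta-llama/Meta-Llama-3-8B` checkpoint and a placeholder prompt:

```python
# Sketch of a single vLLM generation step, as run conceptually inside the
# data-generation container. Model choice, prompt, and sampling settings are
# assumptions; the real values live in the pipeline scripts in the repo above.
from vllm import LLM, SamplingParams

prompts = [
    "### Snippet:\ndef add(a, b):\n    return a + b\n\n### Concept:",  # placeholder prompt
]

llm = LLM(model="meta-llama/Meta-Llama-3-8B")             # assumed checkpoint
params = SamplingParams(temperature=0.7, max_tokens=512)  # assumed sampling settings

for output in llm.generate(prompts, params):
    print(output.outputs[0].text)
```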
muellerzr/llama-3-8b-self-align-data-generation-results
Note: contains all steps of the data-generation pipeline after `seed_gathering`. Each branch pertains to one of the steps, using Llama-3-8B to do the data generation via vLLM:
- Snippet -> Concept (`snippet-to-concept`)
- Concept -> Instruction (`concept-to-instruction`)
- Instruction -> Response (`instruction-to-response`)
- Execution filter (`execution-filter`)
- Sanitization & Final Selection (`sanitization-and-selection` & the `main` branch)
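Each intermediate step can be pulled down by loading its branch as a dataset revision; a sketch with the `datasets` library, where the branch names come from the list above and the column layout should be checked against the dataset viewer:

```python
# Sketch: load one intermediate step of the pipeline by pointing `revision`
# at its branch. Branch names are taken from the note above.
from datasets import load_dataset

ds = load_dataset(
    "muellerzr/llama-3-8b-self-align-data-generation-results",
    revision="instruction-to-response",  # any of the branches listed above
)
print(ds)
```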
muellerzr/python-stack-v1-functions-filtered-llama-3-8B
Note: contains the results of the `seed_gathering` portion of the data-generation pipeline, but using Llama-3-8B as the judge.
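A quick way to inspect a few of the judged seed functions is to stream the dataset rather than download it in full; a sketch, with the `train` split name assumed:

```python
# Sketch: stream a handful of seed functions from the judge-filtered dataset.
# The "train" split name is an assumption.
from itertools import islice
from datasets import load_dataset

seeds = load_dataset(
    "muellerzr/python-stack-v1-functions-filtered-llama-3-8B",
    split="train",      # assumed split name
    streaming=True,
)
for example in islice(seeds, 3):
    print(example)
```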
muellerzr/llama-3-8B-self-align
Note: contains the `config.yaml` for `accelerate launch` and the bash script used to start training.
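To reuse the training setup, the config can be fetched from the repo and handed to `accelerate launch` alongside the command in the bash script; a sketch with `huggingface_hub`, where the `config.yaml` filename comes from the note above:

```python
# Sketch: download the accelerate config from the repo. Pass the returned path
# to `accelerate launch --config_file ...` together with the training command
# from the bash script stored in the same repo.
from huggingface_hub import hf_hub_download

config_path = hf_hub_download(
    repo_id="muellerzr/llama-3-8B-self-align",
    filename="config.yaml",  # filename taken from the note above
)
print(config_path)
```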