## Installation Instructions

As a prerequisite, make sure you have [ducttape](https://github.com/CoderPat/ducttape) and [(mini)conda](https://docs.conda.io/en/latest/miniconda.html) installed.

First, clone this repository. Then, to create a new conda environment with all the necessary dependencies, run:

```bash
export CONDA_HOME="/path/to/(mini)conda3"
bash setup/conda.sh
```

# Training

## Data format

Before training, you must preprocess the training data. Before preprocessing, the data should be a `json` file with one JSON object per line, in the following format:

```json
{"text": ""}
{"text": ""}
```

Note that the preprocessing script packs multiple instances together into sequences of a specified length, separating each instance (json line) with the tokenizer's EOS token. A sketch for producing such a file from raw text is given at the end of this section.

Then, run the bash scripts in this order:

```bash
./preprocess_data.sh [OPTIONS]
./convert2megatron.sh [OPTIONS]
./model_sharding.sh [OPTIONS]
./continue_pretraining.sh [OPTIONS]
```

> NOTE: each of these commands may be run with the `--help` flag, which explains how to use each argument.

For example, for a continued pretraining run with Llama 2 7B on datasets `d1` and `d2` and 8 GPUs, run the following:

```bash
> ./preprocess_data.sh --dataset_json= --dataset_bin= --vocab_file=/tokenizer.model --repo=
> ./preprocess_data.sh --dataset_json= --dataset_bin= --vocab_file=/tokenizer.model --repo=
> ./convert2megatron.sh --megatron_model= --model_path= --size=7 --repo=
> ./model_sharding.sh --megatron_model= --sharded_model= --tp=8 --pp=1 --vocab_size=32000 --repo=
> ./continue_pretraining.sh --data_path="1 d1 1 d2" --megatron_model= --model_dir= --tokenizer_path=/tokenizer.model --tp=8 --pp=1 [TRAINING_ARGS]
```
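If your corpus is a collection of raw text files, a minimal sketch for producing a JSON-lines file in the expected format might look like the following. This is an illustrative assumption, not part of the repository: the `raw_docs/` directory and the `d1.json` output name are placeholders, and it assumes `jq` is installed.

```bash
# Hypothetical helper (placeholder names): wrap each raw .txt document into a
# {"text": ...} JSON object, one per line, ready for preprocess_data.sh.
# -R reads raw text, -s slurps the whole file into one string, -c emits one line.
for f in raw_docs/*.txt; do
  jq -Rsc '{text: .}' "$f"
done > d1.json
```

Each resulting line corresponds to one training instance; the preprocessing step then handles tokenization, packing into fixed-length sequences, and EOS separation.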