---
license: llama2
language:
- en
datasets:
- teknium/GPT4-LLM-Cleaned
---

# Model Card for traclm-v2-7b-instruct

An instruction-tuned version of [TRAC-MTRY/traclm-v2-7b-base](https://huggingface.co/TRAC-MTRY/traclm-v2-7b-base), created by further finetuning on the popular "Alpaca" distillation of GPT-4 prompts/responses.

## Model Details

### Model Description

This model is a research project aimed at exploring whether a pretrained LLM can acquire tangible domain-specific knowledge about the Army domain.

- **Developed by:** The Research and Analysis Center - Monterey, Army Futures Command
- **License:** Llama-2 Community License
- **Model Type:** LlamaForCausalLM
- **Finetuned from model:** [TRAC-MTRY/traclm-v2-7b-base](https://huggingface.co/TRAC-MTRY/traclm-v2-7b-base)

### Model Sources

- **Paper:** TBP
- **Demo:** TBP

### Downstream Use

This model is instruction-tuned and is therefore more capable of following user instructions than its '-base' counterpart. However, it is still capable of extreme hallucination and is suitable for research purposes only.

### Out-of-Scope Use

The creation of this model constitutes academic research in partnership with the Naval Postgraduate School. The purpose of this research is to inform future DoD experimentation regarding the development and application of domain-specific language models. Direct application to downstream military tasks is out of scope.

## Prompt Format

This model was fine-tuned with the Alpaca prompt format. It is *highly* recommended that you use the same format for any interactions with the model; failure to do so will degrade performance significantly. A minimal usage sketch applying this format with the `transformers` library is included at the end of this card.

Standard Alpaca Format:

```
### System:\nBelow is an instruction that describes a task. Write a response that appropriately completes the request.\n\n\n\n### Instruction:\n{prompt}\n\n### Response:\n
```

Input Field Variant:

```
### System:\nBelow is an instruction that describes a task. Write a response that appropriately completes the request.\n\n\n\n### Instruction:\n{prompt}\n\n### Input:\n{input}\n\n### Response:\n
```

## Training Details

### Training Data

[teknium/GPT4-LLM-Cleaned](https://huggingface.co/datasets/teknium/GPT4-LLM-Cleaned)

### Training Procedure

The model was trained using Open Access AI Collective's [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) framework and Microsoft's [DeepSpeed](https://github.com/microsoft/DeepSpeed) framework for model/data parallelism.

### Training Hardware

Training was conducted on a single compute node at NPS's Hamming HPC Center. The compute node contained 8x NVIDIA A40 GPUs.
### Training Hyperparameters

- base_model: TRAC-MTRY/traclm-v2-7b-base
- base_model_config: TRAC-MTRY/traclm-v2-7b-base
- model_type: LlamaForCausalLM
- tokenizer_type: LlamaTokenizer
- sequence_len: 4096
- pad_to_sequence_len: true
- gradient_accumulation_steps: 1
- micro_batch_size: 4
- eval_batch_size: 4
- num_epochs: 5
- lr_scheduler: cosine
- learning_rate: 0.00003
- bf16: true
- gradient_checkpointing: true
- flash_attention: true
- warmup_steps: 50
- lr_quadratic_warmup: true
- special_tokens: {bos_token: "<s>", eos_token: "</s>", unk_token: "<unk>"}

### DeepSpeed Configuration

```
{
  "zero_optimization": {
    "stage": 2,
    "offload_optimizer": {
      "device": "cpu"
    },
    "contiguous_gradients": true,
    "overlap_comm": true
  },
  "bf16": {
    "enabled": "auto"
  },
  "fp16": {
    "enabled": "auto",
    "auto_cast": false,
    "loss_scale": 0,
    "initial_scale_power": 32,
    "loss_scale_window": 1000,
    "hysteresis": 2,
    "min_loss_scale": 1
  },
  "optimizer": {
    "type": "AdamW",
    "params": {
      "lr": "auto",
      "betas": [0.9, 0.999],
      "eps": 1e-8,
      "weight_decay": "auto"
    }
  },
  "scheduler": {
    "type": "WarmupDecayLR",
    "params": {
      "warmup_min_lr": "auto",
      "warmup_max_lr": "auto",
      "warmup_num_steps": "auto",
      "total_num_steps": "auto"
    }
  },
  "train_batch_size": "auto",
  "train_micro_batch_size_per_gpu": "auto",
  "wall_clock_breakdown": false
}
```

## Model Card Contact

MAJ Daniel C. Ruiz (daniel.ruiz@nps.edu)
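
## Example Usage

The snippet below is a minimal sketch of querying the model with the Alpaca prompt format described above. The repository ID `TRAC-MTRY/traclm-v2-7b-instruct`, the example instruction, and the generation settings are illustrative assumptions, not values published with this card; adjust them for your environment.

```
# Minimal sketch (assumed repo ID and generation settings) of applying the
# Alpaca prompt format from the "Prompt Format" section with transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "TRAC-MTRY/traclm-v2-7b-instruct"  # assumed repo ID for this card

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,  # matches the bf16 training precision
    device_map="auto",           # requires the accelerate package
)

def build_prompt(instruction: str, input_text: str | None = None) -> str:
    """Render the standard Alpaca format, or the input-field variant if input_text is given."""
    system = (
        "### System:\nBelow is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n\n\n"
    )
    if input_text:
        return f"{system}### Instruction:\n{instruction}\n\n### Input:\n{input_text}\n\n### Response:\n"
    return f"{system}### Instruction:\n{instruction}\n\n### Response:\n"

prompt = build_prompt("Summarize the purpose of an operations order.")  # example instruction
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)

# Print only the newly generated tokens (i.e., the model's response).
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```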