---
license: apache-2.0
---

# Model Card for Deita Llama2 13B V1.0 SFT

Deita is an open-source project for automatic data selection for instruction tuning of Large Language Models (LLMs). Deita Llama2 13B V1.0 SFT is a fine-tuned version of Llama 2 trained on 10k automatically selected, lightweight, high-quality alignment SFT examples: Deita 10K V0.

## Model description

- **Model type:** A model fine-tuned on automatically selected, lightweight, high-quality alignment SFT data.
- **Language(s) (NLP):** Primarily English
- **Finetuned from model:** meta-llama/Llama-2-13b-hf

## Model Sources

- **Repository:** https://github.com/hkust-nlp/deita

## Performance

## Input Format

The model is trained with the `vicuna_v1.1` conversation template:

```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: Hello! ASSISTANT: Hi!</s>USER: How are you? ASSISTANT:
```
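As a concrete illustration, the snippet below builds a prompt in this template and runs generation with 🤗 Transformers. This is a minimal sketch, not an official usage example: the repository id `hkust-nlp/deita-llama2-13b-v1.0-sft`, the `device_map` setting, and the generation parameters are assumptions, not stated in this card.

```python
# Sketch: prompting the model with the vicuna_v1.1 template.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hkust-nlp/deita-llama2-13b-v1.0-sft"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# System preamble and turn markers exactly as in the template above.
system = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions."
)
prompt = f"{system} USER: How are you? ASSISTANT:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

Note that multi-turn prompts concatenate turns with `</s>` after each assistant reply, as shown in the template.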

## Training hyperparameters

The following hyperparameters were used during fine-tuning:

- learning_rate: 2e-05
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
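For reference, these settings map onto a 🤗 Transformers `TrainingArguments` configuration roughly as sketched below. This is not the authors' training script: the output path and the per-device batch size / gradient-accumulation split (whose product, together with the assumed 8 GPUs, must equal the stated total batch size of 128) are assumptions.

```python
# Sketch: the listed hyperparameters expressed as TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="deita-llama2-13b-sft",  # hypothetical output path
    learning_rate=2e-5,
    per_device_train_batch_size=4,      # assumed: 4 x 8 GPUs x 4 accum steps = 128
    gradient_accumulation_steps=4,      # assumed split of the 128 total batch size
    adam_beta1=0.9,                     # Adam betas and epsilon as listed above
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=3.0,
)
```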