etri-xainlp/llama2-13b-sft-dpo
Model Details
Model Developers: ETRI xainlp team
Input: text only.
Output: text only.
Model Architecture
Base Model: meta-llama/Llama-2-13b-hf
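Below is a minimal inference sketch using the Hugging Face transformers library. The prompt, precision, and generation settings are illustrative assumptions, not values recommended in this card.

```python
# Minimal inference sketch; generation parameters are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "etri-xainlp/llama2-13b-sft-dpo"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to fit the 13B model in GPU memory
    device_map="auto",
)

prompt = "Explain direct preference optimization (DPO) in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```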
Training Dataset
Full SFT: 650k instruction-following examples
DPO + LoRA: 90k user-preference pairs
Training was run on 8 × A100 80GB GPUs. A hedged sketch of this two-stage pipeline is shown below.
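The sketch assumes recent versions of the trl, peft, and datasets libraries; the dataset files, hyperparameters, and LoRA settings are hypothetical placeholders rather than the configuration ETRI used.

```python
# Hedged sketch of the two-stage pipeline described above: full SFT, then DPO with LoRA.
# Dataset paths and hyperparameters below are placeholders, not ETRI's actual settings.
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer, DPOConfig, DPOTrainer

base_model = "meta-llama/Llama-2-13b-hf"

# Stage 1: full-parameter supervised fine-tuning on instruction-following data.
sft_data = load_dataset("json", data_files="instructions_650k.jsonl", split="train")  # hypothetical file
sft_trainer = SFTTrainer(
    model=base_model,
    train_dataset=sft_data,
    args=SFTConfig(output_dir="llama2-13b-sft", num_train_epochs=1),
)
sft_trainer.train()
sft_trainer.save_model("llama2-13b-sft")

# Stage 2: DPO with LoRA adapters on user-preference pairs
# (each example holds "prompt", "chosen", and "rejected" fields).
pref_data = load_dataset("json", data_files="preferences_90k.jsonl", split="train")  # hypothetical file
dpo_trainer = DPOTrainer(
    model="llama2-13b-sft",
    args=DPOConfig(output_dir="llama2-13b-sft-dpo", beta=0.1, num_train_epochs=1),
    train_dataset=pref_data,
    peft_config=LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"]),
)
dpo_trainer.train()
```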