---
license: bigscience-openrail-m
datasets:
  - lvwerra/stack-exchange-paired
language:
  - en
tags:
  - trl
  - transformers
  - rlhf
---

# Stack-Llama-2

A Llama-2 7B model fine-tuned with Direct Preference Optimization (DPO). The model is designed to generate human-like answers to questions from Stack Exchange domains such as programming, mathematics, and physics. For more info, check out the blog post and GitHub example.
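
The model can be queried with the standard `transformers` generation API. The snippet below is a minimal sketch: the Hub repo id `kashif/stack-llama-2` is an assumption based on this card's location, and the `Question:`/`Answer:` prompt template follows the Stack Exchange fine-tuning scripts.

```python
# Minimal inference sketch; "kashif/stack-llama-2" is an assumed repo id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kashif/stack-llama-2"  # assumed Hub repo id for this card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Prompt template used by the Stack Exchange fine-tuning pipeline
prompt = "Question: How do I merge two dictionaries in Python?\n\nAnswer: "
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs, max_new_tokens=256, do_sample=True, top_p=0.9, temperature=0.7
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```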

## Training Details

### Training Data

Original datasets are described in the LLaMA Model Card. Fine-tuning datasets for this model are based on Stack Exchange Paired, which consists of questions and answers from various domains in Stack Exchange, such as programming, mathematics, physics, and more. Specifically:

- Traditional fine-tuning: https://huggingface.co/datasets/lvwerra/stack-exchange-paired/tree/main/data/finetune
- DPO training: https://huggingface.co/datasets/lvwerra/stack-exchange-paired/tree/main/data/rl (see the sketch after this list)
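
For reference, here is a hedged sketch of the DPO stage with TRL's `DPOTrainer`. Argument names follow TRL circa 0.7 and vary across versions, the hyperparameters are illustrative, and in the full pipeline the starting checkpoint would be the supervised fine-tuned model rather than the raw base model.

```python
# Sketch of DPO training on the paired data with TRL's DPOTrainer.
# Assumptions: TRL ~0.7 API; illustrative hyperparameters.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model_name = "meta-llama/Llama-2-7b-hf"  # stand-in; the real run starts from the SFT checkpoint
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token

# The rl split pairs each question with a preferred (response_j) and a
# dispreferred (response_k) answer; DPO expects prompt/chosen/rejected.
dataset = load_dataset(
    "lvwerra/stack-exchange-paired", data_dir="data/rl", split="train"
)

def to_dpo_format(row):
    return {
        "prompt": "Question: " + row["question"] + "\n\nAnswer: ",
        "chosen": row["response_j"],
        "rejected": row["response_k"],
    }

dataset = dataset.map(to_dpo_format, remove_columns=dataset.column_names)

training_args = TrainingArguments(
    output_dir="stack-llama-2-dpo",
    per_device_train_batch_size=4,
    learning_rate=5e-7,
    remove_unused_columns=False,  # DPOTrainer needs the raw text columns
)

trainer = DPOTrainer(
    model,
    ref_model=None,        # TRL builds a frozen reference copy of the model
    beta=0.1,              # strength of the implicit KL penalty
    args=training_args,
    train_dataset=dataset,
    tokenizer=tokenizer,
    max_length=512,
    max_prompt_length=256,
)
trainer.train()
```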