---
license: apache-2.0
---

# Model Card for Deita Llama2 13B V1.0 SFT

Deita is an open-source project designed to facilitate Automatic Data Selection for instruction tuning of Large Language Models (LLMs). Deita Llama2 13B V1.0 SFT is a fine-tuned version of Llama 2 13B trained on 10K automatically selected, lightweight, high-quality alignment SFT examples: [Deita 10K V0](https://huggingface.co/datasets/hkust-nlp/deita-10k-v0).

## Model description

- **Model type:** A model fine-tuned on automatically selected, lightweight, high-quality alignment SFT data.
- **Language(s) (NLP):** Primarily English
- **Finetuned from model:** [meta-llama/Llama-2-13b-hf](https://huggingface.co/meta-llama/Llama-2-13b-hf)

### Model Sources

- **Repository:** https://github.com/hkust-nlp/deita
- **Model Family:** Other models and the dataset can be found in the [Deita collection](https://huggingface.co/collections/hkust-nlp/deita-6569c198c174808d94cf5bd4).

## Performance

## Input Format

The model was trained with the [vicuna_v1.1 template](https://github.com/lm-sys/FastChat/blob/main/fastchat/conversation.py):

```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: Hello! ASSISTANT: Hi!</s>USER: How are you? ASSISTANT:
```
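
For programmatic use, FastChat registers this template under the name `vicuna_v1.1`, so prompts can be built with `fastchat.conversation.get_conv_template` rather than by hand. The following is a minimal inference sketch, not an official usage snippet: the repository ID `hkust-nlp/deita-llama2-13b-v1.0-sft` is an assumption and should be adjusted to wherever these weights are actually hosted.

```python
# Minimal inference sketch. The model ID below is an assumption;
# point it at the actual repository hosting these weights.
import torch
from fastchat.conversation import get_conv_template
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hkust-nlp/deita-llama2-13b-v1.0-sft"  # assumed repo ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Build the prompt with FastChat's vicuna_v1.1 conversation template.
conv = get_conv_template("vicuna_v1.1")
conv.append_message(conv.roles[0], "How are you?")
conv.append_message(conv.roles[1], None)  # leave the assistant turn open
prompt = conv.get_prompt()

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```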

### Training hyperparameters

The following hyperparameters were used during fine-tuning (see the sketch after this list):
- learning_rate: 2e-05
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
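
As a rough illustration only, these settings map onto Hugging Face `transformers.TrainingArguments` as sketched below. The per-device batch size and gradient-accumulation split are assumptions (only their product, 128, is reported above), as are the output path and the bf16 precision flag.

```python
# Hedged sketch: the reported hyperparameters expressed as
# transformers.TrainingArguments. Batch-size split, output path,
# and precision are assumptions, not reported values.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="deita-llama2-13b-sft",  # hypothetical output path
    learning_rate=2e-5,
    per_device_train_batch_size=4,      # assumed: 4 x 4 GPUs x 8 accumulation = 128
    gradient_accumulation_steps=8,
    num_train_epochs=3.0,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    adam_beta1=0.9,                     # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    bf16=True,                          # assumed precision, not stated above
)
```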