AndrewZeng committed
Commit: 93fbcbd
Parent(s): c5c0da9
Update README.md

README.md CHANGED
@@ -1,10 +1,15 @@
 ---
 license: apache-2.0
+datasets:
+- hkust-nlp/deita-10k-v0
+language:
+- en
 ---

 # Model Card for Deita Llama2 13B V1.0 SFT

-Deita is an open-sourced project designed to facilitate Automatic Data Selection for instruction tuning in Large Language Models (LLMs).
+Deita is an open-sourced project designed to facilitate **Automatic Data Selection** for instruction tuning in Large Language Models (LLMs).
+Deita Llama2 13B V1.0 SFT is a fine-tuned version of Llama 2 that was trained on 10k automatically selected lightweight, high-quality alignment SFT data: [Deita 10K V0](https://huggingface.co/datasets/hkust-nlp/deita-10k-v0).

 ## Model description

@@ -37,4 +42,4 @@ The following hyperparameters were used during fine tuning:
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_ratio: 0.1
-- num_epochs: 3.0
+- num_epochs: 3.0
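The new front matter links the model to its training dataset. A minimal usage sketch follows: the dataset id `hkust-nlp/deita-10k-v0` comes from the diff above, while the model repo id `hkust-nlp/deita-llama2-13b-v1.0-sft` is an assumption inferred from the model card title, not confirmed by this commit.

```python
# Sketch only: dataset id is from the diff; the model repo id is assumed.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the 10k automatically selected alignment SFT examples
# referenced in the new `datasets:` front-matter entry.
sft_data = load_dataset("hkust-nlp/deita-10k-v0")

model_id = "hkust-nlp/deita-llama2-13b-v1.0-sft"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
```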
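The second hunk lists Trainer-style fine-tuning hyperparameters. As a rough illustration, here is how those values would map onto `transformers.TrainingArguments`; the learning rate, batch size, and output directory are placeholders, since the diff does not show them.

```python
# A minimal sketch mapping the hyperparameters listed in the diff onto
# transformers.TrainingArguments. Values not shown in the diff (output_dir,
# learning rate, batch size) are hypothetical placeholders.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="deita-llama2-13b-v1.0-sft",  # hypothetical output path
    adam_beta1=0.9,              # optimizer: Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,           # and epsilon=1e-08
    lr_scheduler_type="linear",  # lr_scheduler_type: linear
    warmup_ratio=0.1,            # lr_scheduler_warmup_ratio: 0.1
    num_train_epochs=3.0,        # num_epochs: 3.0
)
```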