Youliang committed
Commit 617699b
1 Parent(s): 7b62e34

Create README.md

Files changed (1)
1. README.md +61 -0
README.md ADDED
---
base_model: meta-llama/Meta-Llama-3-8B
tags:
- generated_from_trainer
model-index:
- name: Meta-Llama-3-8B_derta
  results: []
---

# Meta-Llama-3-8B-Instruct_derta_100step

This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the [Evol-Instruct](https://huggingface.co/datasets/WizardLMTeam/WizardLM_evol_instruct_70k) and [BeaverTails](https://huggingface.co/datasets/PKU-Alignment/BeaverTails) datasets.

## Model description

Please refer to the paper [Refuse Whenever You Feel Unsafe: Improving Safety in LLMs via Decoupled Refusal Training](https://arxiv.org/abs/2407.09121) and the [DeRTa](https://github.com/RobustNLP/DeRTa) GitHub repository.

Input format:
```
[INST] Your Instruction [/INST]
```
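
Below is a minimal generation sketch using the template above; the checkpoint path is a hypothetical placeholder, and the greedy decoding settings are illustrative rather than the authors' recommendation.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "path/to/Meta-Llama-3-8B_derta"  # placeholder: substitute this repo's model id

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.bfloat16, device_map="auto"
)

# Wrap the instruction in the [INST] ... [/INST] template used at training time.
prompt = "[INST] How do I brew a pot of green tea? [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=False)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```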

## Intended uses & limitations

The model is trained with DeRTa and achieves strong safety performance.

## Training and evaluation data

The model was fine-tuned on the Evol-Instruct and BeaverTails datasets linked above; see the paper for details on how the training mixture was constructed.
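
Both corpora are public on the Hugging Face Hub and can be pulled with the `datasets` library; in the sketch below, the split names are assumptions based on the datasets' default configurations and may need adjusting.
```python
from datasets import load_dataset

# Helpfulness data: WizardLM Evol-Instruct (70k instruction-response pairs).
evol_instruct = load_dataset("WizardLMTeam/WizardLM_evol_instruct_70k", split="train")

# Safety data: BeaverTails question-answer pairs with safety annotations.
beavertails = load_dataset("PKU-Alignment/BeaverTails", split="330k_train")
```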

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a rough mapping to 🤗 `TrainingArguments` is sketched after the list):
- learning_rate: 2e-05
- train_batch_size: 16
- weight_decay: 2e-05
- eval_batch_size: 1
- seed: 1
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 128
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2.0
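
A hedged sketch of how these values could be expressed as `transformers.TrainingArguments`, assuming one process per GPU across the 8 devices (16 per device × 8 = 128 total); the output path and bf16 flag are assumptions not stated in the card.
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Meta-Llama-3-8B_derta",  # hypothetical output path
    learning_rate=2e-5,
    weight_decay=2e-5,
    per_device_train_batch_size=16,      # x8 GPUs -> total train batch size 128
    per_device_eval_batch_size=1,        # x8 GPUs -> total eval batch size 8
    num_train_epochs=2.0,
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    seed=1,
    adam_beta1=0.9,                      # Adam betas/epsilon listed above are the defaults
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    bf16=True,                           # assumed mixed precision; not stated in the card
)
```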

### Training results

### Framework versions

- Transformers 4.40.0
- PyTorch 2.2.0+cu118
- Datasets 2.10.0
- Tokenizers 0.19.1