Commit 499c499 by habanoz (parent 59593ce): Create README.md
---
license: apache-2.0
datasets:
- habanoz/airoboros-3.1-no-mathjson-max-1k
language:
- en
pipeline_tag: text-generation
---

Microsoft/phi-1.5 fine-tuned on the airoboros-3.1-no-mathjson-max-1k dataset.

QLoRA was used for fine-tuning, and the resulting adapter has been merged into the base model.

SFT code:
https://github.com/habanoz/qlora.git

Command used:
```bash
accelerate launch $BASE_DIR/qlora/train.py \
  --model_name_or_path $BASE_MODEL \
  --working_dir $BASE_DIR/$OUTPUT_NAME-checkpoints \
  --output_dir $BASE_DIR/$OUTPUT_NAME-peft \
  --merged_output_dir $BASE_DIR/$OUTPUT_NAME \
  --final_output_dir $BASE_DIR/$OUTPUT_NAME-final \
  --num_train_epochs 1 \
  --logging_steps 1 \
  --save_strategy steps \
  --save_steps 120 \
  --save_total_limit 2 \
  --data_seed 11422 \
  --evaluation_strategy steps \
  --per_device_eval_batch_size 4 \
  --eval_dataset_size 0.01 \
  --eval_steps 120 \
  --max_new_tokens 1024 \
  --dataloader_num_workers 3 \
  --logging_strategy steps \
  --do_train \
  --do_eval \
  --lora_r 64 \
  --lora_alpha 16 \
  --lora_modules all \
  --bits 4 \
  --double_quant \
  --quant_type nf4 \
  --lr_scheduler_type constant \
  --dataset habanoz/airoboros-3.1-no-mathjson-max-1k \
  --dataset_format airoboros_chat \
  --model_max_len 1024 \
  --per_device_train_batch_size 1 \
  --gradient_accumulation_steps 16 \
  --learning_rate 1e-5 \
  --adam_beta2 0.999 \
  --max_grad_norm 0.3 \
  --lora_dropout 0.0 \
  --weight_decay 0.0 \
  --seed 11422 \
  --gradient_checkpointing False \
  --use_flash_attention_2 \
  --ddp_find_unused_parameters False \
  --trust_remote_code True
```
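
Because the adapter is merged, the model can be loaded like any ordinary `transformers` checkpoint, with no PEFT step required. A minimal inference sketch follows; the repository id is a placeholder (this card does not state the published model id), so substitute the actual repo name before running.

```python
# Minimal inference sketch for the merged model.
# NOTE: "habanoz/<model-repo>" is a placeholder, not a real repository id —
# replace it with the actual Hugging Face repo of this merged checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "habanoz/<model-repo>"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

# Generate a short completion from a plain text prompt.
inputs = tokenizer("Explain QLoRA in one sentence.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

`trust_remote_code=True` mirrors the training command above; whether it is still required depends on the base model's current `transformers` support.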