Zangs3011 committed on
Commit cbb2ebe
1 Parent(s): a975ad3

Update README.md

Files changed (1)
  1. README.md +49 -3
README.md CHANGED
@@ -1,9 +1,55 @@
  ---
  library_name: peft
+ tags:
+ - meta-llama
+ - code
+ - instruct
+ - WizardLM
+ - Mistral-7B-v0.1
+ datasets:
+ - WizardLM/WizardLM_evol_instruct_70k
+ base_model: mistralai/Mistral-7B-v0.1
+ license: apache-2.0
  ---
- ## Training procedure
-
- ### Framework versions
-
-
- - PEFT 0.5.0
+
+ ### Finetuning Overview:
+
+ **Model Used:** mistralai/Mistral-7B-v0.1
+ **Dataset:** WizardLM/WizardLM_evol_instruct_70k
+
+ #### Dataset Insights:
+
+ The WizardLM/WizardLM_evol_instruct_70k dataset provides roughly 70k Evol-Instruct style instruction-response pairs, making it well suited for improving a model's instruction-following and interactive capabilities.
+
+ #### Finetuning Details:
+
+ Using [MonsterAPI](https://monsterapi.ai)'s [LLM finetuner](https://docs.monsterapi.ai/fine-tune-a-large-language-model-llm), this finetuning:
+
+ - Was highly cost-effective.
+ - Completed in 5 hours 18 minutes for 1 epoch on a single A6000 48GB GPU.
+ - Cost `$10` for the entire epoch.
+
+ #### Hyperparameters & Additional Details:
+
+ - **Epochs:** 1
+ - **Cost Per Epoch:** $10
+ - **Total Finetuning Cost:** $10
+ - **Model Path:** mistralai/Mistral-7B-v0.1
+ - **Learning Rate:** 0.0002
+ - **Data Split:** 90% train / 10% validation
+ - **Gradient Accumulation Steps:** 4
+
+ ---
+ ```
+ ### INSTRUCTION:
+ [instruction]
+
+ ### RESPONSE:
+ [output]
+ ```
+ Training loss:
+ ![training loss](train-loss.png "Training loss")
+
+ ---
+
+ license: apache-2.0
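
For reference, here is a minimal usage sketch that is not part of the commit above. It shows how the resulting adapter could be loaded and queried with the `### INSTRUCTION:` / `### RESPONSE:` template from the updated README. It assumes the repository hosts a PEFT (LoRA-style) adapter on top of mistralai/Mistral-7B-v0.1, as `library_name: peft` suggests; `<adapter-repo-id>` is a placeholder for this repository's id, and the example instruction is invented.

```python
# Illustrative sketch only -- not taken from the commit above.
# Assumes this repo contains a PEFT adapter for mistralai/Mistral-7B-v0.1.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-v0.1"
adapter_id = "<adapter-repo-id>"  # placeholder: replace with this repository's id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

# Build a prompt using the template from the README
# ("### INSTRUCTION:" followed by the task, then "### RESPONSE:").
instruction = "Write a Python function that checks whether a number is prime."
prompt = f"### INSTRUCTION:\n{instruction}\n\n### RESPONSE:\n"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

If a fused model is preferred for inference, the LoRA weights can be folded into the base model after loading with `model = model.merge_and_unload()`.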