Zangs3011 committed on
Commit
2f38e4c
1 Parent(s): f017fd0

Update README.md

Files changed (1)
  1. README.md +28 -3
README.md CHANGED

---
datasets:
- garage-bAInd/Open-Platypus
library_name: peft
license: apache-2.0
tags:
- tiiuae/falcon-7b
- code
- instruct
- instruct-code
- logical-reasoning
- Platypus2
---
 
We finetuned tiiuae/falcon-7b on the garage-bAInd/Open-Platypus dataset for 3 epochs using [MonsterAPI](https://monsterapi.ai)'s no-code [LLM finetuner](https://docs.monsterapi.ai/fine-tune-a-large-language-model-llm).
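
This repository carries the PEFT (LoRA) adapter produced by the run, so it loads on top of the Falcon-7B base model. A minimal sketch with `peft` and `transformers` follows; the adapter path is a placeholder for this repo's Hub id, and the Alpaca-style prompt is an assumption about the training format:

```python
# Minimal sketch: load the Falcon-7B base, then attach this LoRA adapter.
# "path/to/adapter" is a placeholder for this repo's Hub id or a local copy.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,  # Falcon shipped custom modeling code for older transformers
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")

model = PeftModel.from_pretrained(base, "path/to/adapter")

# Alpaca-style instruction prompt (assumed format).
prompt = "### Instruction:\nWhat is gradient accumulation?\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64,
                     pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```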

#### About the Open-Platypus dataset
Open-Platypus focuses on improving LLMs' logical-reasoning skills and was used to train the Platypus2 models. It combines a number of sub-datasets, including PRM800K, ScienceQA, SciBench, ReClor, and TheoremQA. These were filtered using keyword search and Sentence Transformers, dropping any question more than 80% similar to another. The contributions carry a mix of licenses, including MIT, Creative Commons, and Apache 2.0.
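
The 80% threshold is easy to picture with the `sentence-transformers` library. The sketch below is illustrative only; the embedding model is an assumption, and this is not the Platypus authors' exact filtering pipeline:

```python
# Illustrative sketch of the ">80% similar" filter described above.
# The embedding model choice is an assumption.
from sentence_transformers import SentenceTransformer, util

questions = [
    "What is the derivative of x**2?",
    "Compute the derivative of x squared.",
    "What is the capital of France?",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = encoder.encode(questions, convert_to_tensor=True)
scores = util.cos_sim(embeddings, embeddings)  # pairwise cosine similarity

kept = []
for i in range(len(questions)):
    # Keep a question only if it is at most 80% similar to every kept one.
    if all(scores[i][j].item() <= 0.80 for j in kept):
        kept.append(i)

print([questions[i] for i in kept])
# The near-duplicate derivative question is filtered out.
```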

The finetuning run completed in ~3 hrs and cost us only `$14` end to end!

#### Hyperparameters & Run details
- Model Path: tiiuae/falcon-7b
- Dataset: garage-bAInd/Open-Platypus
- Learning rate: 0.0003
- Number of epochs: 3
- Data split: Training 90% / Validation 10%
- Gradient accumulation steps: 1
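
MonsterAPI applies these settings through its no-code UI, so there is no public training script to quote. As a rough open-source equivalent, the sketch below wires the same values into `transformers` + `peft`; the LoRA rank, sequence length, and prompt template are assumptions, and only the values listed above come from the actual run:

```python
# Rough open-source equivalent of the run configuration above. Only the
# learning rate, epochs, split, and gradient accumulation are from the run
# details; everything else here is an assumption.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

data = load_dataset("garage-bAInd/Open-Platypus", split="train")
data = data.train_test_split(test_size=0.1, seed=42)  # 90% train / 10% validation

tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")
tokenizer.pad_token = tokenizer.eos_token

def to_features(row):
    # Open-Platypus rows carry "instruction" and "output" text fields.
    text = (f"### Instruction:\n{row['instruction']}\n\n"
            f"### Response:\n{row['output']}")
    return tokenizer(text, truncation=True, max_length=512)

tokenized = data.map(to_features, remove_columns=data["train"].column_names)

model = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-7b",
                                             trust_remote_code=True)
# Falcon's fused attention projection is a Linear layer named "query_key_value".
lora = LoraConfig(r=8, target_modules=["query_key_value"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="falcon-7b-open-platypus",
        learning_rate=3e-4,             # 0.0003, as above
        num_train_epochs=3,             # as above
        gradient_accumulation_steps=1,  # as above
    ),
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
    # mlm=False makes the collator copy input_ids into labels (causal-LM loss).
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```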

Loss metrics:

![training loss](train-loss.png "Training loss")