Tags: PEFT · code · instruct · code-llama
Files changed (1): README.md (+7, −8)
README.md CHANGED
@@ -5,7 +5,7 @@ tags:
 - instruct
 - code-llama
 datasets:
-- ehartford/dolphin-2.5-mixtral-8x7b
+- cognitivecomputations/dolphin-coder
 base_model: codellama/CodeLlama-7b-hf
 license: apache-2.0
 ---
@@ -14,25 +14,24 @@ license: apache-2.0
 
 **Model Used:** codellama/CodeLlama-7b-hf
 
-**Dataset:** ehartford/dolphin-2.5-mixtral-8x7b
+**Dataset:** cognitivecomputations/dolphin-coder
 
 #### Dataset Insights:
 
-[No Robots](https://huggingface.co/datasets/HuggingFaceH4/no_robots) is a high-quality dataset of 10,000 instructions and demonstrations created by skilled human annotators. This data can be used for supervised fine-tuning (SFT) to make language models follow instructions better.
+[Dolphin-Coder](https://huggingface.co/datasets/cognitivecomputations/dolphin-coder) is a high-quality collection of 100,000+ coding questions and responses. It is well suited for supervised fine-tuning (SFT) and for teaching language models to improve on coding tasks.
 
 #### Finetuning Details:
 
 With the utilization of [MonsterAPI](https://monsterapi.ai)'s [no-code LLM finetuner](https://monsterapi.ai/finetuning), this finetuning:
 
 - Was achieved with great cost-effectiveness.
-- Completed in a total duration of 1h 15m 3s for 2 epochs using an A6000 48GB GPU.
-- Costed `$2.525` for the entire 2 epochs.
+- Completed in a total duration of 15h 31m for 1 epoch using an A6000 48GB GPU.
+- Cost `$31.31` in total for the single epoch.
 
 #### Hyperparameters & Additional Details:
 
-- **Epochs:** 2
-- **Cost Per Epoch:** $1.26
-- **Total Finetuning Cost:** $2.525
+- **Epochs:** 1
+- **Total Finetuning Cost:** $31.31
 - **Model Path:** codellama/CodeLlama-7b-hf
 - **Learning Rate:** 0.0002
 - **Data Split:** 100% train
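For readers of the updated card, here is a minimal sketch of loading the new dataset with the Hugging Face `datasets` library. The card does not document the dataset's schema, so the snippet inspects the columns rather than assuming names:

```python
from datasets import load_dataset

# Dataset named in the updated card; the "train" split matches the
# card's "100% train" data split.
ds = load_dataset("cognitivecomputations/dolphin-coder", split="train")

# The card does not document the schema, so inspect it before use.
print(ds.column_names)
print(ds[0])
```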
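The card lists hyperparameters but not the training stack, since MonsterAPI's finetuner is no-code. As an illustration only, the listed values (learning rate 0.0002, 1 epoch, 100% train split) might map onto a plain `transformers` + `peft` LoRA run like the sketch below; the LoRA rank, alpha, dropout, batch size, and output path are placeholder assumptions, not values from the card:

```python
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)
from peft import LoraConfig, get_peft_model

# Model path taken from the card.
model = AutoModelForCausalLM.from_pretrained("codellama/CodeLlama-7b-hf")
tokenizer = AutoTokenizer.from_pretrained("codellama/CodeLlama-7b-hf")

# LoRA settings are assumptions; MonsterAPI's actual config is not shown.
lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                  task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

args = TrainingArguments(
    output_dir="dolphin-coder-codellama-lora",  # hypothetical path
    learning_rate=2e-4,                         # card: 0.0002
    num_train_epochs=1,                         # card: 1 epoch
    per_device_train_batch_size=4,              # assumption
)

# train_ds would be the tokenized dolphin-coder "train" split (100% train);
# tokenization is omitted here, so the trainer call is left commented out.
# trainer = Trainer(model=model, args=args, train_dataset=train_ds)
# trainer.train()
```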
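Since the card carries the PEFT tag, the published artifact is presumably an adapter over `codellama/CodeLlama-7b-hf`. A sketch of loading it for inference follows; the adapter repo id is not named in this diff, so it is left as a placeholder:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "codellama/CodeLlama-7b-hf"  # base model from the card
adapter_id = "<this-adapter-repo>"     # placeholder: adapter id not named in the diff

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

prompt = "Write a Python function that checks whether a string is a palindrome."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```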