---
library_name: peft
tags:
- code
- instruct
- mistral
datasets:
- HuggingFaceH4/no_robots
base_model: mistralai/Mistral-7B-v0.1
license: apache-2.0
---
### Finetuning Overview:
**Model Used:** mistralai/Mistral-7B-v0.1
**Dataset:** HuggingFaceH4/no_robots
#### Dataset Insights:
[No Robots](https://huggingface.co/datasets/HuggingFaceH4/no_robots) is a high-quality dataset of 10,000 instructions and demonstrations created by skilled human annotators. This data can be used for supervised fine-tuning (SFT) to make language models follow instructions better.
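For reference, a minimal sketch of loading the dataset with the `datasets` library:
```python
from datasets import load_dataset

# Download the dataset from the Hugging Face Hub
dataset = load_dataset("HuggingFaceH4/no_robots")
print(dataset)  # inspect the available splits and their sizes

# Peek at one instruction/demonstration record from the first split
first_split = next(iter(dataset.values()))
print(first_split[0])
```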
#### Finetuning Details:
Using [MonsterAPI](https://monsterapi.ai)'s [no-code LLM finetuner](https://monsterapi.ai/finetuning), this finetuning:
- Completed in a total duration of 1h 15m 3s for 2 epochs on a single A6000 48GB GPU.
- Cost `$2.525` for the entire 2 epochs, making it highly cost-effective.
#### Hyperparameters & Additional Details:
- **Epochs:** 2
- **Cost Per Epoch:** $1.26
- **Total Finetuning Cost:** $2.525
- **Model Path:** mistralai/Mistral-7B-v0.1
- **Learning Rate:** 0.0002
- **Data Split:** 100% train
- **Gradient Accumulation Steps:** 64
- **LoRA r:** 64
- **LoRA alpha:** 16
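The listed settings map roughly onto a PEFT `LoraConfig` and `transformers` `TrainingArguments` as sketched below; note that `target_modules` and `lora_dropout` are assumptions, since MonsterAPI's exact values are not stated on this card:
```python
from peft import LoraConfig
from transformers import TrainingArguments

# LoRA settings from the list above; target_modules and lora_dropout
# are assumed, as they are not stated on this card.
lora_config = LoraConfig(
    r=64,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # assumption
    lora_dropout=0.05,                    # assumption
    bias="none",
    task_type="CAUSAL_LM",
)

# Trainer-level settings from the list above
training_args = TrainingArguments(
    output_dir="mistral_7b_norobots",
    num_train_epochs=2,
    learning_rate=2e-4,
    gradient_accumulation_steps=64,
)
```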
#### Prompt Structure:
```
<|system|> </s> <|user|> [USER PROMPT] </s> <|assistant|> [ASSISTANT ANSWER] </s>
```
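A small helper for assembling prompts in this format, assuming the template above is used verbatim (including the spacing shown):
```python
def build_prompt(user_prompt: str, system_prompt: str = "") -> str:
    # Follows the template above; the finetuned model is expected to
    # continue the text after the final <|assistant|> tag.
    return (
        f"<|system|> {system_prompt} </s> "
        f"<|user|> {user_prompt} </s> "
        f"<|assistant|> "
    )

print(build_prompt("Write a haiku about robots."))
```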
#### Train Loss:
![image/png](https://cdn-uploads.huggingface.co/production/uploads/63ba46aa0a9866b28cb19a14/Badi_wgZLBsUdeIScEKs9.png)
### Benchmarking Results:
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6313732454e6e5d9f0f797cd/ialM-cJygMgMgczskzicX.png)
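#### Loading the Adapter:
Since this repository contains a PEFT (LoRA) adapter rather than full model weights, inference requires loading the base model first. A minimal sketch follows; the repository id `Zangs3011/mistral_7b_norobots` is assumed from this card's location:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model and tokenizer
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

# Attach the LoRA adapter; repo id assumed from this card's location
model = PeftModel.from_pretrained(base, "Zangs3011/mistral_7b_norobots")

# Generate with the prompt template shown above
prompt = "<|system|> </s> <|user|> Write a haiku about robots. </s> <|assistant|> "
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```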