---
license: llama3
---
# Alpesteibock-Llama-3-8B-Alpha
**Alpesteibock-Llama-3-8B-Alpha** is an experimental QLoRA fine-tune of [NousResearch/Hermes-2-Pro-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B) on a dataset of more than 28 million tokens of Swiss German text from multiple sources.
## Dataset
## Training Details
- Hardware: 1x RTX 4090
- Duration: ~30 hours total (~2 hours for the first phase, ~28 hours for the second)
### Hyperparameters
- Adapter: QLoRA
- Precision: 4-bit
- Optimizer: adamw_bnb_8bit
- LoRA Rank: 256
- LoRA Alpha: 256
- Learning Rate: 1e-5
- Context Length: 4096 tokens
- Batch Size: 1
- Gradient Accumulation Steps: 1
- Sample Packing: off for the first phase, on for the second
- Epochs: 2
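
As a rough sketch, the hyperparameters above map onto a Hugging Face `transformers` + `peft` + `bitsandbytes` QLoRA setup like the following. The card only states 4-bit precision, so the `nf4` quantization type, compute dtype, and LoRA target modules are assumptions here; dataset loading, tokenization to the 4096-token context, and the two-phase (unpacked/packed) schedule are omitted.

```python
# Hypothetical QLoRA setup matching the listed hyperparameters.
# Not the actual training script for this model.
import torch
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig, TrainingArguments)
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base = "NousResearch/Hermes-2-Pro-Llama-3-8B"

# 4-bit quantization of the frozen base model (the "Q" in QLoRA).
# nf4 and bfloat16 compute are assumptions; the card only says "4-bit".
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(
    base, quantization_config=bnb_config
)
model = prepare_model_for_kbit_training(model)

# LoRA adapter with the rank and alpha listed above. target_modules is
# left unset so peft falls back to its default mapping for Llama models.
lora_config = LoraConfig(
    r=256,
    lora_alpha=256,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# Optimizer, learning rate, batch size, and epoch count from the list above.
args = TrainingArguments(
    output_dir="alpesteibock-qlora",
    optim="adamw_bnb_8bit",
    learning_rate=1e-5,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=1,
    num_train_epochs=2,
)
```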
## Limitations