Model save
README.md CHANGED
@@ -1,6 +1,5 @@
 ---
 base_model: meta-llama/Llama-3.1-8B
-datasets: trl-lib/Capybara
 library_name: transformers
 model_name: Llama-3.1-8B-SFT-LoRA-no-packing
 tags:
@@ -12,7 +11,7 @@ licence: license
 
 # Model Card for Llama-3.1-8B-SFT-LoRA-no-packing
 
-This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B)
+This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B).
 It has been trained using [TRL](https://github.com/huggingface/trl).
 
 ## Quick start
@@ -28,17 +27,17 @@ print(output["generated_text"])
 
 ## Training procedure
 
-[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/huggingface/huggingface/runs/
+[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/huggingface/huggingface/runs/shdey2g5)
 
 This model was trained with SFT.
 
 ### Framework versions
 
 - TRL: 0.11.0.dev0
-- Transformers: 4.
+- Transformers: 4.45.1
 - Pytorch: 2.4.0
 - Datasets: 2.21.0
-- Tokenizers: 0.
+- Tokenizers: 0.20.0
 
 ## Citations
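
For orientation, the `## Quick start` section referenced by the hunk headers is the standard TRL model-card snippet (its context line `print(output["generated_text"])` is visible in the last hunk header). A minimal sketch of that usage, assuming the card keeps the template's `pipeline` pattern; the repo id is a hypothetical placeholder:

```python
# Sketch of the card's Quick start, assuming the standard TRL template;
# the repo id below is a hypothetical placeholder.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="<your-username>/Llama-3.1-8B-SFT-LoRA-no-packing",
    device="cuda",
)
output = generator(
    [{"role": "user", "content": "What is supervised fine-tuning?"}],
    max_new_tokens=128,
    return_full_text=False,
)[0]
print(output["generated_text"])
```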
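The model name and the removed `datasets: trl-lib/Capybara` metadata suggest a LoRA SFT run on Capybara with packing disabled. A rough sketch of such a run with TRL's `SFTTrainer`, assuming a recent TRL that accepts conversational datasets directly; this is an illustration, not the script behind this commit:

```python
# Rough sketch of LoRA SFT without packing, inferred from the model name and
# card metadata; hyperparameters are illustrative, not the actual run's.
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("trl-lib/Capybara", split="train")

training_args = SFTConfig(
    output_dir="Llama-3.1-8B-SFT-LoRA-no-packing",
    packing=False,      # the "no-packing" in the model name
    report_to="wandb",  # matches the W&B badge added in the last hunk
)

trainer = SFTTrainer(
    model="meta-llama/Llama-3.1-8B",
    train_dataset=dataset,
    args=training_args,
    peft_config=LoraConfig(r=16, lora_alpha=32, target_modules="all-linear"),
)
trainer.train()
```

Disabling packing keeps each Capybara conversation in its own padded sequence rather than concatenating examples into fixed-length blocks, which is what the "no-packing" suffix in the model name records.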