Update README.md
README.md CHANGED
```diff
@@ -24,22 +24,13 @@ This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/g
 It achieves the following results on the evaluation set:
 - Loss: 2.0206
 
-## Model description
-
-More information needed
-
-## Intended uses & limitations
-
-More information needed
-
-## Training and evaluation data
-
-More information needed
 
 ## Training Hardware
 
 This model was trained using:
-GPU: Intel(R) Data Center GPU Max 1100
+- GPU: Intel(R) Data Center GPU Max 1100
+- CPU: Intel(R) Xeon(R) Platinum 8480+
+
 
 ## Training procedure
 
@@ -77,7 +68,7 @@ The following hyperparameters were used during training:
 | 2.0183 | 22.95 | 1400 | 2.0206 |
 
 
-
+## Framework versions
 
 - PEFT 0.10.0
 - Transformers 4.39.3
```