The model was trained on 1201 training samples and 100 validation samples.
### Training hyperparameters

The following hyperparameters were used during training:

- `learning_rate`: 2e-05
- `train_batch_size`: 1
- `eval_batch_size`: 1
- `seed`: 42
- `gradient_accumulation_steps`: 2
- `total_train_batch_size`: 2
- `optimizer`: Adam with betas=(0.9,0.999) and epsilon=1e-08
- `lr_scheduler_type`: linear
- `num_epochs`: 3
- `mixed_precision_training`: Native AMP
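As a sketch of how these values relate (the variable names below are illustrative, not the card's actual training code), note that the effective batch size is derived rather than set directly: `total_train_batch_size` equals `train_batch_size` times `gradient_accumulation_steps` (per device; a single device is assumed here).

```python
# Hypothetical sketch, not the actual training script for this model.
# Collects the hyperparameters listed above into a plain dict and derives
# the effective batch size from the per-step batch size and the number of
# gradient-accumulation steps.
hparams = {
    "learning_rate": 2e-05,
    "train_batch_size": 1,
    "eval_batch_size": 1,
    "seed": 42,
    "gradient_accumulation_steps": 2,
    "lr_scheduler_type": "linear",
    "num_epochs": 3,
    "mixed_precision_training": "Native AMP",
}

# Effective batch size: 1 sample per step, gradients accumulated over 2 steps.
total_train_batch_size = (
    hparams["train_batch_size"] * hparams["gradient_accumulation_steps"]
)
print(total_train_batch_size)  # -> 2, matching the value reported above
```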
### Training results