Update README.md
README.md CHANGED
@@ -45,7 +45,7 @@ The supervised training tasks datasets can be downloaded on [Link](https://www.d
 
 ### Multi-task Pretraining
 
-The model was trained on a single TPU Pod V3-8 for
+The model was trained on a single TPU Pod V3-8 for 180,000 steps in total, using sequence length 512 (batch size 4096).
 It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
 The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
 
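For readers who want to reproduce the optimizer setup this change documents, below is a minimal sketch using the Hugging Face `transformers` Adafactor implementation with a T5-style inverse square root schedule. The step count (180,000), sequence length (512), and batch size (4096) come from the README text; the `t5-base` checkpoint (a ~220M-parameter encoder-decoder) and the 10,000-step warmup are illustrative assumptions, not details stated in the commit.

```python
# Sketch of the pre-training optimizer described in the README: Adafactor
# driven by an inverse square root learning rate schedule,
#   lr(step) = 1 / sqrt(max(step, WARMUP_STEPS)).
# Assumptions: `t5-base` stands in for the ~220M-parameter encoder-decoder,
# and WARMUP_STEPS is illustrative (the README does not state a warmup length).
import torch
from transformers import T5ForConditionalGeneration
from transformers.optimization import Adafactor

model = T5ForConditionalGeneration.from_pretrained("t5-base")  # ~220M parameters

TOTAL_STEPS = 180_000   # from the README
WARMUP_STEPS = 10_000   # assumption, for illustration only

# Fixed-lr Adafactor; the LambdaLR below supplies the actual schedule.
optimizer = Adafactor(
    model.parameters(),
    lr=1.0,                 # base rate; scaled by the lambda each step
    scale_parameter=False,
    relative_step=False,
    warmup_init=False,
)
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer,
    lr_lambda=lambda step: max(step, WARMUP_STEPS) ** -0.5,
)

# Training skeleton: one optimizer/scheduler step per batch of
# sequence-length-512 examples, for TOTAL_STEPS steps in total.
for step in range(TOTAL_STEPS):
    # ... forward pass and loss.backward() on the current batch ...
    optimizer.step()
    scheduler.step()
    optimizer.zero_grad()
```

An alternative is to pass `lr=None, relative_step=True` and let Adafactor compute its built-in relative-step schedule internally; the explicit `LambdaLR` form above is used here only because it makes the inverse-square-root shape visible.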