redrussianarmy committed 67c6d7e (1 parent: ed0d51e): Update README.md

README.md CHANGED
@@ -14,6 +14,9 @@ With the Tokenizers library, I created a 52K byte-level BPE vocab based on the t
 
 After creating the vocab, I could train the GPT-2 for Turkish on two 2080TI over the complete training corpus (five epochs).
 
+Logs during training:
+https://tensorboard.dev/experiment/3AWKv8bBTaqcqZP5frtGkw/#scalars
+
 ## Model weights
 
 Both PyTorch and Tensorflow compatible weights are available.
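The README context mentions building a 52K byte-level BPE vocab with the Tokenizers library. A minimal sketch of how such a vocab could be trained is shown below; the tiny in-memory corpus, `min_frequency`, and special tokens are illustrative assumptions, not the author's actual settings or data.

```python
# Hedged sketch: training a byte-level BPE vocab with the Hugging Face
# Tokenizers library. The two-sentence corpus below is a placeholder for
# the real Turkish training corpus, which is not included here.
from tokenizers import ByteLevelBPETokenizer

corpus = [
    "Merhaba dünya!",
    "GPT-2 için Türkçe kelime dağarcığı eğitimi.",
]

tokenizer = ByteLevelBPETokenizer()
# vocab_size=52000 mirrors the 52K vocab described in the README;
# min_frequency and special_tokens are assumed values for illustration.
tokenizer.train_from_iterator(
    corpus,
    vocab_size=52000,
    min_frequency=2,
    special_tokens=["<s>", "<pad>", "</s>", "<unk>", "<mask>"],
)

# Encode a sample sentence with the freshly trained tokenizer.
ids = tokenizer.encode("Merhaba dünya!").ids
print(len(ids))
```

With a corpus this small the trained vocab stays near the 256-entry byte alphabet; on a real corpus the requested 52,000 merges would be learned.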