fgaim committed on
Commit 7e570bb
1 Parent(s): 65606d1

Update readme

Files changed (1)
  1. README.md +6 -8
README.md CHANGED
@@ -6,19 +6,17 @@ widget:
 
 # RoBERTa Pretrained for Tigrinya Language
 
-We pretrain a RoBERTa Base model on a relatively small dataset for Tigrinya (34M tokens) for 18 epochs.
+We pretrain a RoBERTa base model for Tigrinya on a dataset of 40 million tokens for 40 epochs.
 
-Contained in this card is a PyTorch model exported from the original model that was trained on TPU v3.8 with Flax.
+Contained in this repo are the original pretrained Flax model, trained on a TPU v3-8, and its corresponding PyTorch version.
 
 
 ## Hyperparameters
 
 The hyperparameters corresponding to the model size below are as follows:
 
-| Model Size | L  | AH | HS  | FFN  | P    |
-|------------|----|----|-----|------|------|
-| BASE       | 12 | 12 | 768 | 3072 | 125M |
-
-(L = number of layers; AH = number of attention heads; HS = hidden size; FFN = feedforward network dimension; P = number of parameters.)
-
+| Model Size | L  | AH | HS  | FFN  | P    | Seq |
+|------------|----|----|-----|------|------|-----|
+| BASE       | 12 | 12 | 768 | 3072 | 125M | 128 |
 
+(L = number of layers; AH = number of attention heads; HS = hidden size; FFN = feedforward network dimension; P = number of parameters; Seq = maximum sequence length.)
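Since the updated card says the repo ships both the original Flax weights and a PyTorch export, either framework can load the checkpoint directly with the `transformers` library. Below is a minimal sketch; the repo id is a placeholder (the card does not name it), and the masked-LM head is an assumption based on the fill-mask `widget:` entry in the card's front matter.

```python
from transformers import AutoTokenizer, RobertaForMaskedLM, FlaxRobertaForMaskedLM

model_id = "<user>/<model-name>"  # placeholder; substitute this repository's actual id

tokenizer = AutoTokenizer.from_pretrained(model_id)
pt_model = RobertaForMaskedLM.from_pretrained(model_id)        # PyTorch export
flax_model = FlaxRobertaForMaskedLM.from_pretrained(model_id)  # original Flax weights
```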
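The hyperparameter table maps one-to-one onto a `transformers` `RobertaConfig`; the sketch below shows the correspondence. `vocab_size` is left at the library default because the card does not state it, and the `+ 2` on `max_position_embeddings` assumes the usual RoBERTa convention of reserving two position ids for the padding offset.

```python
from transformers import RobertaConfig

# Sketch of a config matching the BASE row of the table above.
config = RobertaConfig(
    num_hidden_layers=12,             # L
    num_attention_heads=12,           # AH
    hidden_size=768,                  # HS
    intermediate_size=3072,           # FFN
    max_position_embeddings=128 + 2,  # Seq, plus RoBERTa's two reserved positions
)
```

With the default vocabulary size, this configuration lands near the 125M parameter count (P) listed in the table.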