pere committed on
Commit 9e9877b
1 Parent(s): 5404b01

Update README.md

Files changed (1)
  1. README.md +1 -26
README.md CHANGED
@@ -11,33 +11,8 @@ datasets:

This is a Norwegian T5-base model trained on the Norwegian Colossal Corpus (NCC) on a TPU v3-8. It needs to be finetuned on a specific task before being used for anything.

- Currently the model is training. It is expected that it should be finished by the end of August 2021.

The following settings were used in training:
```bash
- ./run_t5_mlm_flax_streaming.py \
- --output_dir="./" \
- --model_type="t5" \
- --config_name="./" \
- --tokenizer_name="./" \
- --dataset_name="pere/norwegian_colossal_corpus_v2_short100k" \
- --max_seq_length="512" \
- --weight_decay="0.01" \
- --per_device_train_batch_size="32" \
- --per_device_eval_batch_size="32" \
- --learning_rate="8e-3" \
- --warmup_steps="5000" \
- --overwrite_output_dir \
- --cache_dir /mnt/disks/flaxdisk/cache/ \
- --num_train_epochs="5" \
- --adam_beta1="0.9" \
- --adam_beta2="0.98" \
- --logging_steps="500" \
- --num_train_steps="1000000" \
- --num_eval_samples="5000" \
- --save_steps="5000" \
- --eval_steps="5000" \
- --preprocessing_num_workers 96 \
- --adafactor \
- --push_to_hub
+ ./run_t5_mlm_flax_streaming.py \\n --output_dir="./" \\n --model_type="t5" \\n --config_name="./" \\n --tokenizer_name="./" \\n --dataset_name="pere/norwegian_colossal_corpus_v2_short100k" \\n --max_seq_length="512" \\n --weight_decay="0.01" \\n --per_device_train_batch_size="32" \\n --per_device_eval_batch_size="32" \\n --learning_rate="8e-3" \\n --warmup_steps="5000" \\n --overwrite_output_dir \\n --cache_dir /mnt/disks/flaxdisk/cache/ \\n --num_train_epochs="5" \\n --adam_beta1="0.9" \\n --adam_beta2="0.98" \\n --logging_steps="500" \\n --num_train_steps="1000000" \\n --num_eval_samples="5000" \\n --save_steps="5000" \\n --eval_steps="5000" \\n --preprocessing_num_workers 96 \\n --adafactor \\n --push_to_hub
```
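
The card notes, both before and after this change, that the raw checkpoint is only MLM-pretrained and must be finetuned on a downstream task before use. As a hedged illustration only (not something stated in this commit), a finetuned descendant of the checkpoint could be loaded with the standard Transformers API; the model id, the `from_flax=True` flag, and the example input below are all assumptions.

```python
# Minimal sketch, assuming a finetuned checkpoint has been pushed to the Hub.
# "your-org/norwegian-t5-base-finetuned" is a placeholder model id, not one
# taken from this commit.
from transformers import AutoTokenizer, T5ForConditionalGeneration

model_id = "your-org/norwegian-t5-base-finetuned"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id)
# If only Flax weights are published (the training script is Flax-based),
# from_flax=True converts them to PyTorch on load.
model = T5ForConditionalGeneration.from_pretrained(model_id, from_flax=True)

# Illustrative Norwegian input; the task prefix (if any) depends on how the
# model was finetuned.
inputs = tokenizer("Dette er en testsetning.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```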