---
language: no
license: cc-by-4.0
tags:
- seq2seq
datasets:
- Norwegian Nynorsk/Bokmål
---

# 🇳🇴 Norwegian T5 Base Model Trained on the NCC 🇳🇴

This is a Norwegian T5-base model trained on the Norwegian Colossal Corpus (NCC) on a TPU v3-8. It needs to be fine-tuned on a specific downstream task before being used for anything.

The following settings were used in training:

```bash
./run_t5_mlm_flax_streaming.py \
    --output_dir="./" \
    --model_type="t5" \
    --config_name="./" \
    --tokenizer_name="./" \
    --dataset_name="pere/norwegian_colossal_corpus_v2_short100k" \
    --max_seq_length="512" \
    --weight_decay="0.01" \
    --per_device_train_batch_size="32" \
    --per_device_eval_batch_size="32" \
    --learning_rate="8e-3" \
    --warmup_steps="5000" \
    --overwrite_output_dir \
    --cache_dir /mnt/disks/flaxdisk/cache/ \
    --num_train_epochs="5" \
    --adam_beta1="0.9" \
    --adam_beta2="0.98" \
    --logging_steps="500" \
    --num_train_steps="1000000" \
    --num_eval_samples="5000" \
    --save_steps="5000" \
    --eval_steps="5000" \
    --preprocessing_num_workers 96 \
    --adafactor \
    --push_to_hub
```
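After fine-tuning, the checkpoint can be loaded with 🤗 Transformers like any other T5 model. A minimal sketch, assuming a hypothetical Hub repository ID (`pere/norwegian-t5-base` below is a placeholder; substitute the actual repository name of this checkpoint):

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

# Hypothetical Hub ID -- replace with the actual repository name of this model.
MODEL_ID = "pere/norwegian-t5-base"


def load(model_id: str = MODEL_ID):
    """Download the tokenizer and model weights from the Hugging Face Hub."""
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = T5ForConditionalGeneration.from_pretrained(model_id)
    return tokenizer, model


if __name__ == "__main__":
    tokenizer, model = load()
    # T5 is a text-to-text model: after fine-tuning, prefix the input with
    # whatever task prefix was used during fine-tuning.
    inputs = tokenizer("oversett til nynorsk: Dette er en test.", return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=50)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Note that the raw pretrained checkpoint was only trained with the span-corruption MLM objective, so its generations are not useful until the model has been fine-tuned.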