Executing train_tokenizer.py
------------------------------
Training BBPE (byte-level BPE) tokenizer
Initializing an empty tokenizer
Training
Saving model tokenizer to /home/usuaris/veu/casimiro.pio.carrino/projects/corpus-utils-lm/output/model-ready_output/bio-biomedical-clinical-vocab-52k-2021-04-26-0955-3a71-240f/train_tokenizer_output/train-tokenizer-2021-04-26-1009-3a71-e9ca
Saving pretrained tokenizer to /home/usuaris/veu/casimiro.pio.carrino/projects/corpus-utils-lm/output/model-ready_output/bio-biomedical-clinical-vocab-52k-2021-04-26-0955-3a71-240f/train_tokenizer_output/train-tokenizer-2021-04-26-1009-3a71-e9ca
Saving config to /home/usuaris/veu/casimiro.pio.carrino/projects/corpus-utils-lm/output/model-ready_output/bio-biomedical-clinical-vocab-52k-2021-04-26-0955-3a71-240f/train_tokenizer_output/train-tokenizer-2021-04-26-1009-3a71-e9ca
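
The messages above follow the usual steps for training a byte-level BPE (BBPE) tokenizer: initialize an empty tokenizer, train it on the corpus, then write the tokenizer files, a transformers-compatible ("pretrained") copy, and a config to the output directory. A minimal sketch of such a script, using the Hugging Face tokenizers and transformers libraries, could look like the following; the corpus file list, the RoBERTa-style special tokens, the output directory name, and vocab_size=52000 (inferred from "vocab-52k" in the logged output path) are assumptions, not details taken from the actual train_tokenizer.py.

    import os
    from tokenizers import ByteLevelBPETokenizer
    from transformers import RobertaTokenizerFast, RobertaConfig

    output_dir = "train_tokenizer_output"               # hypothetical; the real run uses the path logged above
    corpus_files = ["biomedical_clinical_corpus.txt"]   # assumed plain-text corpus file(s)

    # Initialize an empty byte-level BPE tokenizer
    tokenizer = ByteLevelBPETokenizer()

    # Train it on the corpus; the 52k vocabulary size is inferred from the output directory name
    tokenizer.train(
        files=corpus_files,
        vocab_size=52000,
        min_frequency=2,
        special_tokens=["<s>", "<pad>", "</s>", "<unk>", "<mask>"],  # RoBERTa-style tokens, assumed
    )

    # Save the raw tokenizer model (vocab.json + merges.txt)
    os.makedirs(output_dir, exist_ok=True)
    tokenizer.save_model(output_dir)

    # Save a transformers-compatible ("pretrained") tokenizer on top of those files
    hf_tokenizer = RobertaTokenizerFast.from_pretrained(output_dir)
    hf_tokenizer.save_pretrained(output_dir)

    # "Saving config" may refer to a model config; a RoBERTa config with the matching
    # vocabulary size is one plausible interpretation, assumed here
    config = RobertaConfig(vocab_size=52000)
    config.save_pretrained(output_dir)

The resulting directory can then be loaded directly with RobertaTokenizerFast.from_pretrained(output_dir) when pretraining the language model.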