INFO:transformers.configuration_utils:loading configuration file ../../Multilingual-MiniLM-L12-H384/config.json
INFO:transformers.configuration_utils:Model config BertConfig {
  "attention_probs_dropout_prob": 0.1,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 384,
  "initializer_range": 0.02,
  "intermediate_size": 1536,
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 512,
  "model_type": "bert",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "pad_token_id": 0,
  "type_vocab_size": 2,
  "vocab_size": 250037
}
INFO:transformers.modeling_utils:loading weights file ../../Multilingual-MiniLM-L12-H384/pytorch_model.bin
INFO:transformers.modeling_utils:Weights of BertModel not initialized from pretrained model: ['bert.pooler.dense.weight', 'bert.pooler.dense.bias']
INFO:transformers.configuration_utils:loading configuration file ../../Multilingual-MiniLM-L12-H384/config.json
INFO:transformers.configuration_utils:Model config BertConfig {
  "attention_probs_dropout_prob": 0.1,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 384,
  "initializer_range": 0.02,
  "intermediate_size": 1536,
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 512,
  "model_type": "bert",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "pad_token_id": 0,
  "type_vocab_size": 2,
  "vocab_size": 250037
}
INFO:transformers.modeling_utils:loading weights file ../../Multilingual-MiniLM-L12-H384/pytorch_model.bin
INFO:transformers.configuration_utils:loading configuration file ../../Multilingual-MiniLM-L12-H384/config.json
INFO:transformers.configuration_utils:Model config BertConfig {
  "attention_probs_dropout_prob": 0.1,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 384,
  "initializer_range": 0.02,
  "intermediate_size": 1536,
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 512,
  "model_type": "bert",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "pad_token_id": 0,
  "type_vocab_size": 2,
  "vocab_size": 250037
}
INFO:transformers.modeling_utils:loading weights file ../../Multilingual-MiniLM-L12-H384/pytorch_model.bin
INFO:transformers.modeling_utils:Weights of BertModel not initialized from pretrained model: ['bert.pooler.dense.weight', 'bert.pooler.dense.bias']
INFO:transformers.configuration_utils:loading configuration file ../../Multilingual-MiniLM-L12-H384/config.json
INFO:transformers.configuration_utils:Model config BertConfig {
  "attention_probs_dropout_prob": 0.1,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 384,
  "initializer_range": 0.02,
  "intermediate_size": 1536,
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 512,
  "model_type": "bert",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "pad_token_id": 0,
  "type_vocab_size": 2,
  "vocab_size": 250037
}
INFO:transformers.modeling_tf_utils:loading weights file ../../Multilingual-MiniLM-L12-H384/pytorch_model.bin
INFO:transformers.modeling_tf_pytorch_utils:Loading PyTorch weights from /home/patrick/hugging_face/models/Multilingual-MiniLM-L12-H384/pytorch_model.bin
INFO:transformers.modeling_tf_pytorch_utils:PyTorch checkpoint contains 117,904,565 parameters
INFO:transformers.modeling_tf_pytorch_utils:Loaded 117,505,920 parameters in the TF 2.0 model.
INFO:transformers.modeling_tf_pytorch_utils:Weights or buffers not loaded from PyTorch model: {'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.bias', 'cls.predictions.transform.dense.bias'}
INFO:transformers.configuration_utils:Configuration saved in ./config.json
INFO:transformers.modeling_utils:Model weights saved in ./pytorch_model.bin
INFO:transformers.configuration_utils:Configuration saved in ./config.json
INFO:transformers.modeling_tf_utils:Model weights saved in ./tf_model.h5
INFO:transformers.tokenization_utils_base:Model name '../../MiniLM-L12-H384-uncased/' not found in model shortcut name list (xlm-roberta-base, xlm-roberta-large, xlm-roberta-large-finetuned-conll02-dutch, xlm-roberta-large-finetuned-conll02-spanish, xlm-roberta-large-finetuned-conll03-english, xlm-roberta-large-finetuned-conll03-german). Assuming '../../MiniLM-L12-H384-uncased/' is a path, a model identifier, or url to a directory containing tokenizer files.
INFO:transformers.tokenization_utils_base:Didn't find file ../../MiniLM-L12-H384-uncased/sentencepiece.bpe.model. We won't load it.
INFO:transformers.tokenization_utils_base:Didn't find file ../../MiniLM-L12-H384-uncased/added_tokens.json. We won't load it.
INFO:transformers.tokenization_utils_base:Didn't find file ../../MiniLM-L12-H384-uncased/special_tokens_map.json. We won't load it.
INFO:transformers.tokenization_utils_base:Didn't find file ../../MiniLM-L12-H384-uncased/tokenizer_config.json. We won't load it.
INFO:transformers.configuration_utils:loading configuration file ../../Multilingual-MiniLM-L12-H384/config.json
INFO:transformers.configuration_utils:Model config BertConfig {
  "attention_probs_dropout_prob": 0.1,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 384,
  "initializer_range": 0.02,
  "intermediate_size": 1536,
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 512,
  "model_type": "bert",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "pad_token_id": 0,
  "type_vocab_size": 2,
  "vocab_size": 250037
}
INFO:transformers.modeling_utils:loading weights file ../../Multilingual-MiniLM-L12-H384/pytorch_model.bin
INFO:transformers.modeling_utils:Weights of BertModel not initialized from pretrained model: ['bert.pooler.dense.weight', 'bert.pooler.dense.bias']
INFO:transformers.configuration_utils:loading configuration file ../../Multilingual-MiniLM-L12-H384/config.json
INFO:transformers.configuration_utils:Model config BertConfig {
  "attention_probs_dropout_prob": 0.1,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 384,
  "initializer_range": 0.02,
  "intermediate_size": 1536,
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 512,
  "model_type": "bert",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "pad_token_id": 0,
  "type_vocab_size": 2,
  "vocab_size": 250037
}
INFO:transformers.modeling_tf_utils:loading weights file ../../Multilingual-MiniLM-L12-H384/pytorch_model.bin
INFO:transformers.modeling_tf_pytorch_utils:Loading PyTorch weights from /home/patrick/hugging_face/models/Multilingual-MiniLM-L12-H384/pytorch_model.bin
INFO:transformers.modeling_tf_pytorch_utils:PyTorch checkpoint contains 117,904,565 parameters
INFO:transformers.modeling_tf_pytorch_utils:Loaded 117,505,920 parameters in the TF 2.0 model.
INFO:transformers.modeling_tf_pytorch_utils:Weights or buffers not loaded from PyTorch model: {'cls.predictions.transform.dense.weight', 'cls.predictions.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.transform.LayerNorm.bias'}
INFO:transformers.configuration_utils:Configuration saved in ./config.json
INFO:transformers.modeling_utils:Model weights saved in ./pytorch_model.bin
INFO:transformers.configuration_utils:Configuration saved in ./config.json
INFO:transformers.modeling_tf_utils:Model weights saved in ./tf_model.h5
INFO:transformers.tokenization_utils_base:Model name '../../Multilingual-MiniLM-L12-H384/sentencepiece.bpe.model' not found in model shortcut name list (xlm-roberta-base, xlm-roberta-large, xlm-roberta-large-finetuned-conll02-dutch, xlm-roberta-large-finetuned-conll02-spanish, xlm-roberta-large-finetuned-conll03-english, xlm-roberta-large-finetuned-conll03-german). Assuming '../../Multilingual-MiniLM-L12-H384/sentencepiece.bpe.model' is a path, a model identifier, or url to a directory containing tokenizer files.
WARNING:transformers.tokenization_utils_base:Calling XLMRobertaTokenizer.from_pretrained() with the path to a single file or url is deprecated
INFO:transformers.tokenization_utils_base:loading file ../../Multilingual-MiniLM-L12-H384/sentencepiece.bpe.model
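For reference, a log of this shape is what a conversion run along the following lines would emit. This is a minimal sketch, assuming the transformers v2/v3-era from_pretrained/save_pretrained API and the paths seen above; the actual script that produced this log is not part of this file.

    from transformers import BertModel, TFBertModel, XLMRobertaTokenizer

    src = "../../Multilingual-MiniLM-L12-H384"

    # Load the PyTorch checkpoint. The pooler weights are freshly initialized,
    # which matches the "Weights of BertModel not initialized" lines above.
    pt_model = BertModel.from_pretrained(src)
    pt_model.save_pretrained("./")   # -> ./config.json, ./pytorch_model.bin

    # Re-load the same PyTorch weights into a TF 2.0 model (from_pt=True),
    # producing the modeling_tf_pytorch_utils parameter-count lines; the
    # unused masked-LM head ('cls.predictions.*') is reported as not loaded.
    tf_model = TFBertModel.from_pretrained(src, from_pt=True)
    tf_model.save_pretrained("./")   # -> ./config.json, ./tf_model.h5

    # The tokenizer is XLM-R's sentencepiece model; pointing from_pretrained
    # at the single .bpe.model file triggers the deprecation WARNING above.
    tokenizer = XLMRobertaTokenizer.from_pretrained(src + "/sentencepiece.bpe.model")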