yoshitomo-matsubara committed on
Commit
e77239d
1 Parent(s): 6b69c03

initial commit

README.md ADDED
@@ -0,0 +1,18 @@
+ ---
+ language: en
+ tags:
+ - bert
+ - wnli
+ - glue
+ - kd
+ - torchdistill
+ license: apache-2.0
+ datasets:
+ - wnli
+ metrics:
+ - accuracy
+ ---
+
+ `bert-base-uncased` fine-tuned on the WNLI dataset, using a fine-tuned `bert-large-uncased` as the teacher model, [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill), and [Google Colab](https://colab.research.google.com/github/yoshitomo-matsubara/torchdistill/blob/master/demo/glue_kd_and_submission.ipynb) for knowledge distillation.
+ The training configuration (including hyperparameters) is available [here](https://github.com/yoshitomo-matsubara/torchdistill/blob/main/configs/sample/glue/wnli/kd/bert_base_uncased_from_bert_large_uncased.yaml).
+ I submitted prediction files to [the GLUE leaderboard](https://gluebenchmark.com/leaderboard), and the overall GLUE score was **78.9**.
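A minimal inference sketch with 🤗 Transformers follows; the model ID below is a placeholder for this repository's actual Hub ID (or a local path to these files), and the example sentences are only illustrative.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Placeholder: replace with this repository's Hub model ID or a local checkout path.
model_id = "path/or/hub-id-of-this-repo"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# WNLI is a sentence-pair task: the model scores whether sentence2 follows from sentence1.
sentence1 = "The trophy doesn't fit into the brown suitcase because it is too large."
sentence2 = "The trophy is too large."
inputs = tokenizer(sentence1, sentence2, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits
print(int(logits.argmax(dim=-1)))  # GLUE WNLI labels: 0 = not_entailment, 1 = entailment
```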
config.json ADDED
@@ -0,0 +1,26 @@
+ {
+ "_name_or_path": "bert-base-uncased",
+ "architectures": [
+ "BertForSequenceClassification"
+ ],
+ "attention_probs_dropout_prob": 0.1,
+ "finetuning_task": "wnli",
+ "gradient_checkpointing": false,
+ "hidden_act": "gelu",
+ "hidden_dropout_prob": 0.1,
+ "hidden_size": 768,
+ "initializer_range": 0.02,
+ "intermediate_size": 3072,
+ "layer_norm_eps": 1e-12,
+ "max_position_embeddings": 512,
+ "model_type": "bert",
+ "num_attention_heads": 12,
+ "num_hidden_layers": 12,
+ "pad_token_id": 0,
+ "position_embedding_type": "absolute",
+ "problem_type": "single_label_classification",
+ "transformers_version": "4.6.1",
+ "type_vocab_size": 2,
+ "use_cache": true,
+ "vocab_size": 30522
+ }
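As a sanity check, the architecture described by this config can be rebuilt with Transformers. This is a sketch only: the `config.json` path is assumed to be relative to a local checkout, and `num_labels = 2` is an assumption based on WNLI being a binary task (the field is not stored explicitly above).

```python
from transformers import BertConfig, BertForSequenceClassification

# Load the config file shipped in this repository (path assumed for illustration).
config = BertConfig.from_json_file("config.json")
config.num_labels = 2  # assumption: WNLI has two labels (not_entailment / entailment)

# Instantiating from the config alone yields randomly initialized weights;
# use from_pretrained(...) with the repository checkpoint for the fine-tuned model.
model = BertForSequenceClassification(config)
print(config.hidden_size, config.num_hidden_layers, config.num_attention_heads)  # 768 12 12
```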
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3e3c9ed5f855deb4db085415f840726e1c8f3fb4586ecbe66519f8ddb36ae7c2
+ size 438024457
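The three lines above are a Git LFS pointer, not the weights themselves; after the LFS object is fetched (e.g., via `git lfs pull` or a Hub download), the real file should match the recorded digest and size. A small verification sketch, assuming the fetched file sits in the working directory:

```python
import hashlib
import os

path = "pytorch_model.bin"  # assumed location after the LFS object has been fetched

sha256 = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        sha256.update(chunk)

# Expected values taken from the LFS pointer above.
assert sha256.hexdigest() == "3e3c9ed5f855deb4db085415f840726e1c8f3fb4586ecbe66519f8ddb36ae7c2"
assert os.path.getsize(path) == 438024457
print("pytorch_model.bin matches the LFS pointer")
```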
special_tokens_map.json ADDED
@@ -0,0 +1 @@
+ {"unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]"}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1 @@
+ {"do_lower_case": true, "unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]", "tokenize_chinese_chars": true, "strip_accents": null, "do_lower": true, "model_max_length": 512, "special_tokens_map_file": null, "name_or_path": "bert-base-uncased"}
training.log ADDED
@@ -0,0 +1,46 @@
+ 2021-05-31 18:22:29,298 INFO __main__ Namespace(adjust_lr=False, config='torchdistill/configs/sample/glue/wnli/kd/bert_base_uncased_from_bert_large_uncased.yaml', log='log/glue/wnli/kd/bert_base_uncased_from_bert_large_uncased.txt', private_output='leaderboard/glue/kd/bert_base_uncased_from_bert_large_uncased/', seed=None, student_only=False, task_name='wnli', test_only=False, world_size=1)
+ 2021-05-31 18:22:29,328 INFO __main__ Distributed environment: NO
+ Num processes: 1
+ Process index: 0
+ Local process index: 0
+ Device: cuda
+ Use FP16 precision: True
+
+ 2021-05-31 18:22:39,133 WARNING datasets.builder Reusing dataset glue (/root/.cache/huggingface/datasets/glue/wnli/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad)
+ 2021-05-31 18:22:39,753 INFO __main__ Start training
+ 2021-05-31 18:22:39,753 INFO torchdistill.models.util [teacher model]
+ 2021-05-31 18:22:39,754 INFO torchdistill.models.util Using the original teacher model
+ 2021-05-31 18:22:39,754 INFO torchdistill.models.util [student model]
+ 2021-05-31 18:22:39,754 INFO torchdistill.models.util Using the original student model
+ 2021-05-31 18:22:39,754 INFO torchdistill.core.distillation Loss = 1.0 * OrgLoss
+ 2021-05-31 18:22:39,754 INFO torchdistill.core.distillation Freezing the whole teacher model
+ 2021-05-31 18:22:43,520 INFO torchdistill.misc.log Epoch: [0] [ 0/20] eta: 0:00:11 lr: 2.97e-05 sample/s: 7.107315395186558 loss: 0.0587 (0.0587) time: 0.5682 data: 0.0054 max mem: 2861
+ 2021-05-31 18:22:53,674 INFO torchdistill.misc.log Epoch: [0] Total time: 0:00:10
+ 2021-05-31 18:22:53,905 INFO /usr/local/lib/python3.7/dist-packages/datasets/metric.py Removing /root/.cache/huggingface/metrics/glue/wnli/default_experiment-1-0.arrow
+ 2021-05-31 18:22:53,905 INFO __main__ Validation: accuracy = 0.5492957746478874
+ 2021-05-31 18:22:53,905 INFO __main__ Updating ckpt at ./resource/ckpt/glue/wnli/kd/wnli-bert-base-uncased_from_bert-large-uncased
+ 2021-05-31 18:22:55,500 INFO torchdistill.misc.log Epoch: [1] [ 0/20] eta: 0:00:10 lr: 2.37e-05 sample/s: 7.659786933102254 loss: 0.0016 (0.0016) time: 0.5269 data: 0.0047 max mem: 4680
+ 2021-05-31 18:23:05,780 INFO torchdistill.misc.log Epoch: [1] Total time: 0:00:10
+ 2021-05-31 18:23:06,009 INFO /usr/local/lib/python3.7/dist-packages/datasets/metric.py Removing /root/.cache/huggingface/metrics/glue/wnli/default_experiment-1-0.arrow
+ 2021-05-31 18:23:06,010 INFO __main__ Validation: accuracy = 0.5774647887323944
+ 2021-05-31 18:23:06,010 INFO __main__ Updating ckpt at ./resource/ckpt/glue/wnli/kd/wnli-bert-base-uncased_from_bert-large-uncased
+ 2021-05-31 18:23:07,623 INFO torchdistill.misc.log Epoch: [2] [ 0/20] eta: 0:00:09 lr: 1.77e-05 sample/s: 8.39442113663712 loss: 0.0000 (0.0000) time: 0.4807 data: 0.0042 max mem: 4680
+ 2021-05-31 18:23:18,217 INFO torchdistill.misc.log Epoch: [2] Total time: 0:00:11
+ 2021-05-31 18:23:18,447 INFO /usr/local/lib/python3.7/dist-packages/datasets/metric.py Removing /root/.cache/huggingface/metrics/glue/wnli/default_experiment-1-0.arrow
+ 2021-05-31 18:23:18,448 INFO __main__ Validation: accuracy = 0.5633802816901409
+ 2021-05-31 18:23:19,063 INFO torchdistill.misc.log Epoch: [3] [ 0/20] eta: 0:00:12 lr: 1.1700000000000001e-05 sample/s: 6.543501026345667 loss: -0.0000 (-0.0000) time: 0.6152 data: 0.0039 max mem: 4680
+ 2021-05-31 18:23:29,640 INFO torchdistill.misc.log Epoch: [3] Total time: 0:00:11
+ 2021-05-31 18:23:29,870 INFO /usr/local/lib/python3.7/dist-packages/datasets/metric.py Removing /root/.cache/huggingface/metrics/glue/wnli/default_experiment-1-0.arrow
+ 2021-05-31 18:23:29,870 INFO __main__ Validation: accuracy = 0.5633802816901409
+ 2021-05-31 18:23:30,396 INFO torchdistill.misc.log Epoch: [4] [ 0/20] eta: 0:00:10 lr: 5.7000000000000005e-06 sample/s: 7.676525791092471 loss: 0.0000 (0.0000) time: 0.5252 data: 0.0041 max mem: 4680
+ 2021-05-31 18:23:40,820 INFO torchdistill.misc.log Epoch: [4] Total time: 0:00:10
+ 2021-05-31 18:23:41,051 INFO /usr/local/lib/python3.7/dist-packages/datasets/metric.py Removing /root/.cache/huggingface/metrics/glue/wnli/default_experiment-1-0.arrow
+ 2021-05-31 18:23:41,051 INFO __main__ Validation: accuracy = 0.5633802816901409
+ 2021-05-31 18:23:41,088 INFO __main__ [Teacher: bert-large-uncased]
+ 2021-05-31 18:23:41,729 INFO /usr/local/lib/python3.7/dist-packages/datasets/metric.py Removing /root/.cache/huggingface/metrics/glue/wnli/default_experiment-1-0.arrow
+ 2021-05-31 18:23:41,729 INFO __main__ Test: accuracy = 0.5633802816901409
+ 2021-05-31 18:23:44,813 INFO __main__ [Student: bert-base-uncased]
+ 2021-05-31 18:23:45,056 INFO /usr/local/lib/python3.7/dist-packages/datasets/metric.py Removing /root/.cache/huggingface/metrics/glue/wnli/default_experiment-1-0.arrow
+ 2021-05-31 18:23:45,056 INFO __main__ Test: accuracy = 0.5774647887323944
+ 2021-05-31 18:23:45,056 INFO __main__ Start prediction for private dataset(s)
+ 2021-05-31 18:23:45,057 INFO __main__ wnli/test: 146 samples
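The log records a knowledge distillation run driven by torchdistill's YAML config: the teacher is frozen, a single loss term is used ("Loss = 1.0 * OrgLoss"), and training runs for five short epochs with a decaying learning rate. For readers unfamiliar with the setup, below is a minimal, generic PyTorch sketch of logit-based distillation; it illustrates the general technique only, not the exact loss wired up by the config above, and the temperature and weighting values are assumptions.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, temperature=5.0, alpha=0.5):
    """Generic logit-based knowledge distillation loss (illustrative values only)."""
    # Soft targets: KL divergence between temperature-softened distributions,
    # scaled by T^2 to keep gradient magnitudes comparable across temperatures.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)
    # Hard targets: ordinary cross-entropy against the gold labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Usage inside a training step (teacher kept frozen, as in the log above):
# with torch.no_grad():
#     teacher_logits = teacher(**batch).logits
# loss = kd_loss(student(**batch).logits, teacher_logits, batch["labels"])
```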
vocab.txt ADDED
The diff for this file is too large to render. See raw diff