yoshitomo-matsubara committed
Commit 62ab3f0
1 Parent(s): c625e0d

initial commit

README.md ADDED
@@ -0,0 +1,19 @@
+ ---
+ language: en
+ tags:
+ - bert
+ - stsb
+ - glue
+ - kd
+ - torchdistill
+ license: apache-2.0
+ datasets:
+ - stsb
+ metrics:
+ - pearson correlation
+ - spearman correlation
+ ---
+
+ `bert-base-uncased` fine-tuned on the STS-B dataset through knowledge distillation, using a fine-tuned `bert-large-uncased` as the teacher model, with [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) and [Google Colab](https://colab.research.google.com/github/yoshitomo-matsubara/torchdistill/blob/master/demo/glue_kd_and_submission.ipynb).
+ The training configuration (including hyperparameters) is available [here](https://github.com/yoshitomo-matsubara/torchdistill/blob/main/configs/sample/glue/stsb/kd/bert_base_uncased_from_bert_large_uncased.yaml).
+ I submitted prediction files to [the GLUE leaderboard](https://gluebenchmark.com/leaderboard), and the overall GLUE score was **78.9**.
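
For readers who want to try the checkpoint added in this commit, here is a minimal, hedged sketch of STS-B inference with the `transformers` library; the local path is an assumption (a local clone of this repository, or the corresponding Hub model id, would both work).

```python
# Minimal sketch: score a sentence pair with the distilled STS-B model.
# The path "." is an assumption (a local clone of this repository); the Hub model id works too.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_dir = "."  # hypothetical local path to this repository
tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForSequenceClassification.from_pretrained(model_dir)
model.eval()

# STS-B is a sentence-pair regression task; the single logit is the similarity score (roughly 0-5).
inputs = tokenizer("A man is playing a guitar.", "A person plays a guitar.",
                   return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(f"predicted similarity: {score:.2f}")
```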
config.json ADDED
@@ -0,0 +1,32 @@
+ {
+   "_name_or_path": "bert-base-uncased",
+   "architectures": [
+     "BertForSequenceClassification"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "finetuning_task": "stsb",
+   "gradient_checkpointing": false,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "id2label": {
+     "0": "LABEL_0"
+   },
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "label2id": {
+     "LABEL_0": 0
+   },
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "bert",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "pad_token_id": 0,
+   "position_embedding_type": "absolute",
+   "problem_type": "regression",
+   "transformers_version": "4.6.1",
+   "type_vocab_size": 2,
+   "use_cache": true,
+   "vocab_size": 30522
+ }
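
As a brief illustration of what this config implies: with a single entry in `id2label` and `problem_type` set to `regression`, `BertForSequenceClassification` builds a one-unit head on top of the pooled representation and applies an MSE loss whenever labels are supplied. A hedged sketch follows; it rebuilds an equivalent config rather than reading the file above.

```python
# Sketch only: reconstruct an equivalent config and inspect the resulting head.
from transformers import BertConfig, BertForSequenceClassification

config = BertConfig.from_pretrained(
    "bert-base-uncased",
    finetuning_task="stsb",
    num_labels=1,                # single regression target, matching the one-entry id2label
    problem_type="regression",
)
model = BertForSequenceClassification(config)

# The head is one linear unit over the pooled [CLS] representation; with num_labels == 1
# and problem_type == "regression", the forward pass uses MSELoss when labels are given.
print(model.classifier)          # Linear(in_features=768, out_features=1, bias=True)
```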
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cee4f2158255c55c4d80bdd5992feca635e97b2bfddbdc32cfbcf567e37e9c31
+ size 438021385
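
The three lines above are a Git LFS pointer rather than the weights themselves: `oid` is the SHA-256 of the real `pytorch_model.bin` and `size` is its byte count. A small sketch for checking a downloaded copy against the pointer (the local path is an assumption):

```python
# Verify a downloaded pytorch_model.bin against the LFS pointer above.
import hashlib
import os

path = "pytorch_model.bin"  # hypothetical local path to the downloaded weights
expected_oid = "cee4f2158255c55c4d80bdd5992feca635e97b2bfddbdc32cfbcf567e37e9c31"
expected_size = 438021385

sha256 = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        sha256.update(chunk)

assert os.path.getsize(path) == expected_size, "size does not match the LFS pointer"
assert sha256.hexdigest() == expected_oid, "sha256 does not match the LFS pointer"
print("pytorch_model.bin matches the LFS pointer")
```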
special_tokens_map.json ADDED
@@ -0,0 +1 @@
+ {"unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]"}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1 @@
+ {"do_lower_case": true, "unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]", "tokenize_chinese_chars": true, "strip_accents": null, "do_lower": true, "model_max_length": 512, "special_tokens_map_file": null, "name_or_path": "bert-base-uncased"}
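
As a hedged illustration of the settings above (`do_lower_case: true`, `model_max_length: 512`), this sketch encodes an STS-B sentence pair; the local path is an assumption.

```python
# Sketch: encode a sentence pair with the BERT tokenizer configured above.
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained(".")  # hypothetical local path to this repository

encoded = tokenizer("A Man Is Playing A Guitar.", "a person plays a guitar.",
                    truncation=True, max_length=512)

# Text is lower-cased and packed as [CLS] sentence1 [SEP] sentence2 [SEP];
# token_type_ids separate the two segments (type_vocab_size = 2 in config.json).
print(tokenizer.convert_ids_to_tokens(encoded["input_ids"]))
print(encoded["token_type_ids"])
```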
training.log ADDED
@@ -0,0 +1,48 @@
+ 2021-05-31 17:22:31,278 INFO __main__ Namespace(adjust_lr=False, config='torchdistill/configs/sample/glue/stsb/kd/bert_base_uncased_from_bert_large_uncased.yaml', log='log/glue/stsb/kd/bert_base_uncased_from_bert_large_uncased.txt', private_output='leaderboard/glue/kd/bert_base_uncased_from_bert_large_uncased/', seed=None, student_only=False, task_name='stsb', test_only=False, world_size=1)
+ 2021-05-31 17:22:31,313 INFO __main__ Distributed environment: NO
+ Num processes: 1
+ Process index: 0
+ Local process index: 0
+ Device: cuda
+ Use FP16 precision: True
+
+ 2021-05-31 17:22:43,561 WARNING datasets.builder Reusing dataset glue (/root/.cache/huggingface/datasets/glue/stsb/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad)
+ 2021-05-31 17:22:44,918 INFO __main__ Start training
+ 2021-05-31 17:22:44,918 INFO torchdistill.models.util [teacher model]
+ 2021-05-31 17:22:44,918 INFO torchdistill.models.util Using the original teacher model
+ 2021-05-31 17:22:44,919 INFO torchdistill.models.util [student model]
+ 2021-05-31 17:22:44,919 INFO torchdistill.models.util Using the original student model
+ 2021-05-31 17:22:44,919 INFO torchdistill.core.distillation Loss = 1.0 * OrgLoss + 1.0 * MSELoss()
+ 2021-05-31 17:22:44,919 INFO torchdistill.core.distillation Freezing the whole teacher model
+ 2021-05-31 17:22:48,242 INFO torchdistill.misc.log Epoch: [0] [ 0/180] eta: 0:01:16 lr: 2.9944444444444443e-05 sample/s: 9.576304360110528 loss: 120.2613 (120.2613) time: 0.4228 data: 0.0051 max mem: 2371
+ 2021-05-31 17:23:12,289 INFO torchdistill.misc.log Epoch: [0] [ 50/180] eta: 0:01:02 lr: 2.716666666666667e-05 sample/s: 8.421417843919912 loss: 25.1578 (75.2308) time: 0.4905 data: 0.0035 max mem: 5106
+ 2021-05-31 17:23:36,182 INFO torchdistill.misc.log Epoch: [0] [100/180] eta: 0:00:38 lr: 2.438888888888889e-05 sample/s: 8.415571491057348 loss: 16.4009 (46.5507) time: 0.4644 data: 0.0034 max mem: 5106
+ 2021-05-31 17:23:59,828 INFO torchdistill.misc.log Epoch: [0] [150/180] eta: 0:00:14 lr: 2.161111111111111e-05 sample/s: 11.967236571894867 loss: 13.4210 (35.5925) time: 0.4481 data: 0.0035 max mem: 5106
+ 2021-05-31 17:24:13,337 INFO torchdistill.misc.log Epoch: [0] Total time: 0:01:25
+ 2021-05-31 17:24:16,240 INFO /usr/local/lib/python3.7/dist-packages/datasets/metric.py Removing /root/.cache/huggingface/metrics/glue/stsb/default_experiment-1-0.arrow
+ 2021-05-31 17:24:16,240 INFO __main__ Validation: pearson = 0.8895823514826617, spearmanr = 0.8891221171282753
+ 2021-05-31 17:24:16,241 INFO __main__ Updating ckpt at ./resource/ckpt/glue/stsb/kd/stsb-bert-base-uncased_from_bert-large-uncased
+ 2021-05-31 17:24:17,947 INFO torchdistill.misc.log Epoch: [1] [ 0/180] eta: 0:01:17 lr: 1.9944444444444447e-05 sample/s: 9.436731626677647 loss: 3.8620 (3.8620) time: 0.4306 data: 0.0067 max mem: 5106
+ 2021-05-31 17:24:42,457 INFO torchdistill.misc.log Epoch: [1] [ 50/180] eta: 0:01:03 lr: 1.7166666666666666e-05 sample/s: 7.082230403175797 loss: 4.8790 (5.6907) time: 0.5025 data: 0.0035 max mem: 5106
+ 2021-05-31 17:25:07,011 INFO torchdistill.misc.log Epoch: [1] [100/180] eta: 0:00:39 lr: 1.438888888888889e-05 sample/s: 8.415195811165377 loss: 4.9423 (5.5366) time: 0.4891 data: 0.0040 max mem: 5106
+ 2021-05-31 17:25:30,260 INFO torchdistill.misc.log Epoch: [1] [150/180] eta: 0:00:14 lr: 1.161111111111111e-05 sample/s: 11.947873559412079 loss: 5.0895 (5.4334) time: 0.4671 data: 0.0035 max mem: 5106
+ 2021-05-31 17:25:43,919 INFO torchdistill.misc.log Epoch: [1] Total time: 0:01:26
+ 2021-05-31 17:25:46,822 INFO /usr/local/lib/python3.7/dist-packages/datasets/metric.py Removing /root/.cache/huggingface/metrics/glue/stsb/default_experiment-1-0.arrow
+ 2021-05-31 17:25:46,822 INFO __main__ Validation: pearson = 0.8947956931797695, spearmanr = 0.8914128136254597
+ 2021-05-31 17:25:46,822 INFO __main__ Updating ckpt at ./resource/ckpt/glue/stsb/kd/stsb-bert-base-uncased_from_bert-large-uncased
+ 2021-05-31 17:25:48,430 INFO torchdistill.misc.log Epoch: [2] [ 0/180] eta: 0:01:42 lr: 9.944444444444445e-06 sample/s: 7.080783708625091 loss: 2.3258 (2.3258) time: 0.5692 data: 0.0043 max mem: 5109
+ 2021-05-31 17:26:12,502 INFO torchdistill.misc.log Epoch: [2] [ 50/180] eta: 0:01:02 lr: 7.166666666666667e-06 sample/s: 8.417454616972364 loss: 1.8558 (2.0434) time: 0.4910 data: 0.0036 max mem: 5109
+ 2021-05-31 17:26:36,697 INFO torchdistill.misc.log Epoch: [2] [100/180] eta: 0:00:38 lr: 4.388888888888889e-06 sample/s: 7.668799172107633 loss: 2.1584 (2.0183) time: 0.4890 data: 0.0035 max mem: 5109
+ 2021-05-31 17:27:00,193 INFO torchdistill.misc.log Epoch: [2] [150/180] eta: 0:00:14 lr: 1.6111111111111111e-06 sample/s: 8.416690287143053 loss: 1.3802 (1.9481) time: 0.4709 data: 0.0038 max mem: 5109
+ 2021-05-31 17:27:14,506 INFO torchdistill.misc.log Epoch: [2] Total time: 0:01:26
+ 2021-05-31 17:27:17,408 INFO /usr/local/lib/python3.7/dist-packages/datasets/metric.py Removing /root/.cache/huggingface/metrics/glue/stsb/default_experiment-1-0.arrow
+ 2021-05-31 17:27:17,409 INFO __main__ Validation: pearson = 0.8971237948141647, spearmanr = 0.8935765996632005
+ 2021-05-31 17:27:17,409 INFO __main__ Updating ckpt at ./resource/ckpt/glue/stsb/kd/stsb-bert-base-uncased_from_bert-large-uncased
+ 2021-05-31 17:27:18,654 INFO __main__ [Teacher: bert-large-uncased]
+ 2021-05-31 17:27:26,678 INFO /usr/local/lib/python3.7/dist-packages/datasets/metric.py Removing /root/.cache/huggingface/metrics/glue/stsb/default_experiment-1-0.arrow
+ 2021-05-31 17:27:26,678 INFO __main__ Test: pearson = 0.9034287890218271, spearmanr = 0.9010997201222941
+ 2021-05-31 17:27:29,798 INFO __main__ [Student: bert-base-uncased]
+ 2021-05-31 17:27:32,709 INFO /usr/local/lib/python3.7/dist-packages/datasets/metric.py Removing /root/.cache/huggingface/metrics/glue/stsb/default_experiment-1-0.arrow
+ 2021-05-31 17:27:32,709 INFO __main__ Test: pearson = 0.8971237948141647, spearmanr = 0.8935765996632005
+ 2021-05-31 17:27:32,709 INFO __main__ Start prediction for private dataset(s)
+ 2021-05-31 17:27:32,710 INFO __main__ stsb/test: 1379 samples
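
The log line `Loss = 1.0 * OrgLoss + 1.0 * MSELoss()` summarizes the distillation objective: the student's ordinary STS-B regression loss plus an equally weighted MSE between the student's and the frozen teacher's predictions. Below is a minimal sketch of that objective; the function name and signature are illustrative, and the actual setup is driven by the torchdistill YAML linked in the README.

```python
import torch
import torch.nn.functional as F

def kd_regression_loss(student_logits: torch.Tensor,
                       teacher_logits: torch.Tensor,
                       labels: torch.Tensor,
                       org_weight: float = 1.0,
                       kd_weight: float = 1.0) -> torch.Tensor:
    """Illustrative version of '1.0 * OrgLoss + 1.0 * MSELoss()' from the log."""
    # OrgLoss: the student's own regression loss against the gold STS-B scores.
    org_loss = F.mse_loss(student_logits.squeeze(-1), labels)
    # MSELoss(): match the frozen teacher's predicted scores (no gradient to the teacher).
    kd_loss = F.mse_loss(student_logits.squeeze(-1), teacher_logits.squeeze(-1).detach())
    return org_weight * org_loss + kd_weight * kd_loss
```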
vocab.txt ADDED
The diff for this file is too large to render. See raw diff