---
language: en
tags:
- mrpc
- glue
- torchdistill
license: apache-2.0
datasets:
- mrpc
metrics:
- f1
- accuracy
---

`bert-large-uncased` fine-tuned on the MRPC dataset, using [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) and [Google Colab](https://colab.research.google.com/github/yoshitomo-matsubara/torchdistill/blob/master/demo/glue_finetuning_and_submission.ipynb).
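
As a minimal usage sketch with 🤗 Transformers (the repository ID below is an assumption; replace it with this model's actual ID on the Hub), the fine-tuned checkpoint can be loaded and applied to a sentence pair:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumed repository ID; replace with this model's actual ID on the Hub.
model_id = "yoshitomo-matsubara/bert-large-uncased-mrpc"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# MRPC is a binary paraphrase-detection task over sentence pairs.
inputs = tokenizer(
    "The company said it expects revenue to grow this year.",
    "Revenue is expected to rise this year, the company said.",
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits
# Class probabilities; label order follows the model's config (id2label).
print(torch.softmax(logits, dim=-1))
```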

The hyperparameters are the same as those used in Hugging Face's example and/or the BERT paper, and the training configuration (including hyperparameters) is available [here](https://github.com/yoshitomo-matsubara/torchdistill/blob/main/configs/sample/glue/mrpc/ce/bert_large_uncased.yaml).
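
Since the card lists F1 and accuracy as metrics, here is a hedged sketch of how the MRPC validation scores could be checked with the `datasets` and `evaluate` libraries (again assuming the hypothetical repository ID above):

```python
import torch
from datasets import load_dataset
from evaluate import load
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "yoshitomo-matsubara/bert-large-uncased-mrpc"  # assumed repository ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id).eval()

validation = load_dataset("glue", "mrpc", split="validation")
metric = load("glue", "mrpc")  # reports both accuracy and F1 for MRPC

for start in range(0, len(validation), 32):
    batch = validation[start:start + 32]
    inputs = tokenizer(
        batch["sentence1"], batch["sentence2"],
        padding=True, truncation=True, return_tensors="pt",
    )
    with torch.no_grad():
        predictions = model(**inputs).logits.argmax(dim=-1)
    metric.add_batch(predictions=predictions, references=batch["label"])

print(metric.compute())  # {'accuracy': ..., 'f1': ...}
```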
I submitted prediction files to [the GLUE leaderboard](https://gluebenchmark.com/leaderboard), and the overall GLUE score was **79.1**.