Update README.md
README.md CHANGED
@@ -1 +1,15 @@
-
+* Continue pre-training RoBERTa-base using discharge summaries from the MIMIC-III dataset.
+
+* Details can be found in the following paper:
+
+> Xiang Dai, Ilias Chalkidis, Sune Darkner, and Desmond Elliott. 2022. Revisiting Transformer-based Models for Long Document Classification. (https://arxiv.org/abs/2204.06683)
+
+* Important hyper-parameters
+
+| Hyper-parameter | Value |
+|---|---|
+| Max sequence length | 128 |
+| Batch size | 128 |
+| Learning rate | 5e-5 |
+| Training epochs | 15 |
+| Training time | 40 GPU-hours |
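
The hyper-parameter table fully specifies the continued pre-training setup, so a minimal sketch of it with the Hugging Face `transformers` Trainer is shown below. This is an illustration, not the authors' training code: the input file `discharge_summaries.txt` and the output directory are hypothetical placeholders (MIMIC-III notes require credentialed access and cannot be redistributed), and details such as how long summaries are chunked may differ from the original run.

```python
# Sketch of continued MLM pre-training of RoBERTa-base on clinical text,
# using the hyper-parameters from the table above. Assumes one discharge
# summary (or 128-token chunk) per line in "discharge_summaries.txt",
# which is a placeholder path, not part of this repository.
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForMaskedLM,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")

dataset = load_dataset("text", data_files={"train": "discharge_summaries.txt"})

def tokenize(batch):
    # Max sequence length of 128, as in the table.
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

# Standard masked-language-modelling objective (15% token masking).
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

args = TrainingArguments(
    output_dir="roberta-base-mimic3",   # placeholder output directory
    per_device_train_batch_size=128,    # batch size 128
    learning_rate=5e-5,                 # learning rate 5e-5
    num_train_epochs=15,                # 15 training epochs
    save_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    data_collator=collator,
)
trainer.train()
```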