---
language: "en"
tags:
- BigBird
- clinical
---

<span style="font-size:larger;">**Clinical-BigBird**</span> is a clinical-knowledge-enriched version of BigBird that was further pre-trained on MIMIC-III clinical notes. It accepts up to 4,096 tokens as model input. Clinical-BigBird consistently outperforms ClinicalBERT across 10 baseline datasets. These downstream experiments broadly cover named entity recognition (NER), question answering (QA), natural language inference (NLI), and text classification tasks. For more details, please refer to [our paper](https://arxiv.org/pdf/2201.11838.pdf).

We also provide a sister model, [Clinical-Longformer](https://huggingface.co/yikuan8/Clinical-Longformer).

### Pre-training
We initialized Clinical-BigBird from the pre-trained weights of the base version of BigBird. Pre-training was distributed in parallel across six 32GB Tesla V100 GPUs, with FP16 precision enabled to accelerate training. We pre-trained Clinical-BigBird for 300,000 steps with a batch size of 6×2. The learning rate was 3e-5 for both Clinical-BigBird and Clinical-Longformer. The entire pre-training process took more than two weeks.
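
For orientation only, the snippet below sketches what such a continued masked-language-modeling run could look like with the Hugging Face `Trainer`. It is a minimal illustration under stated assumptions, not our exact training script: the placeholder corpus, output directory, masking rate (the library default), and save/logging intervals are illustrative, and the real MIMIC-III notes require credentialed access.
```
from datasets import Dataset
from transformers import (
    AutoTokenizer,
    AutoModelForMaskedLM,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Start from the base BigBird checkpoint, as described above.
tokenizer = AutoTokenizer.from_pretrained("google/bigbird-roberta-base")
model = AutoModelForMaskedLM.from_pretrained("google/bigbird-roberta-base")

# Placeholder corpus: the actual run used MIMIC-III clinical notes, which
# require credentialed access and are not included here.
notes = [
    "Patient admitted with chest pain and shortness of breath.",
    "Discharge summary: symptoms resolved after treatment.",
]
dataset = Dataset.from_dict({"text": notes})

def tokenize(batch):
    # Clinical-BigBird accepts sequences of up to 4,096 tokens.
    return tokenizer(batch["text"], truncation=True, max_length=4096)

train_dataset = dataset.map(tokenize, batched=True, remove_columns=["text"])

# Standard masked-language-modeling objective (masking rate is the library default).
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

training_args = TrainingArguments(
    output_dir="clinical-bigbird-mlm",   # illustrative output path
    max_steps=300_000,                   # 300,000 pre-training steps
    per_device_train_batch_size=2,       # 2 per GPU x 6 GPUs = the 6x2 batch size above
    learning_rate=3e-5,
    fp16=True,                           # mixed precision, as in the original run
    save_steps=10_000,
    logging_steps=500,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    data_collator=data_collator,
)
trainer.train()
```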

### Usage
Load the model directly from Transformers:
```
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("yikuan8/Clinical-BigBird")
model = AutoModelForMaskedLM.from_pretrained("yikuan8/Clinical-BigBird")
```
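
As a quick sanity check that long inputs run end to end, the example below (building on the snippet above) encodes a note of up to 4,096 tokens and decodes the fill-mask prediction for a masked token; the sentence and the masked word are illustrative placeholders, not taken from our evaluation data.
```
import torch

# Any note up to 4,096 tokens can be encoded; this short sentence is only a placeholder.
text = f"The patient was admitted with acute renal {tokenizer.mask_token} requiring dialysis."
inputs = tokenizer(text, truncation=True, max_length=4096, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Decode the top prediction for the masked position.
mask_index = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_id = logits[0, mask_index].argmax(dim=-1)
print(tokenizer.decode(predicted_id))
```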

### Citing
If you find our model helpful, please consider citing our paper :)
```
@article{li2022clinical,
  title={Clinical-Longformer and Clinical-BigBird: Transformers for long clinical sequences},
  author={Li, Yikuan and Wehbe, Ramsey M and Ahmad, Faraz S and Wang, Hanyin and Luo, Yuan},
  journal={arXiv preprint arXiv:2201.11838},
  year={2022}
}
```

### Questions
Please email yikuanli2018@u.northwestern.edu