## RadBERT-RoBERTa-4m

This is one variant of our RadBERT models, trained on 4 million deidentified medical reports from US VA hospitals. It achieves stronger medical language understanding performance than previous medical-domain models such as BioBERT, Clinical-BERT, BLUE-BERT, and BioMed-RoBERTa.

Performance is evaluated on three tasks:

(a) abnormal sentence classification: classify sentences in radiology reports as reporting abnormal or normal findings;

(b) report coding: assign a diagnostic code to a given radiology report for five different coding systems;

(c) report summarization: given the findings section of a radiology report, extractively select key sentences that summarize the findings.

For details, check out the paper here:
[RadBERT: Adapting transformer-based language models to radiology](https://pubs.rsna.org/doi/abs/10.1148/ryai.210258)

Code related to the paper is released at [this GitHub repo](https://github.com/zzxslp/RadBERT).

### How to use

Here is an example of how to use this model to extract the features of a given text in PyTorch:
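A minimal sketch with the Hugging Face `transformers` library, loading the model as a generic encoder and taking the last hidden states as features. The model id `zzxslp/RadBERT-RoBERTa-4m` and the sample sentence are assumptions; substitute the actual Hub id of the checkpoint you want.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Assumed Hub id for this checkpoint; replace if your copy lives elsewhere.
model_id = "zzxslp/RadBERT-RoBERTa-4m"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)
model.eval()

# Any radiology sentence works here; this one is just an illustration.
text = "No acute cardiopulmonary abnormality."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Token-level features: shape (batch_size, sequence_length, hidden_size).
features = outputs.last_hidden_state
```

For a single fixed-size sentence embedding, a common choice is to mean-pool `features` over the sequence dimension (masking padding tokens) or to take the first token's vector.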