---
language: en
tags:
- exbert
license: mit
widget:
- text: "Left pleural effusion with adjacent [MASK]."
  example_title: "Radiology 1"
- text: "Heart size normal and lungs are [MASK]."
  example_title: "Radiology 2"
---

# CXR-BERT-specialized

[CXR-BERT](https://arxiv.org/abs/2204.09817) is a chest X-ray (CXR) domain-specific language model that makes use of an improved vocabulary, a novel pretraining procedure, weight regularization, and text augmentations. The resulting model demonstrates improved performance on radiology natural language inference, masked token prediction on radiology text, and downstream vision-language processing tasks such as zero-shot phrase grounding and image classification.
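
As an illustration of the masked-token capability (mirroring the widget examples above), here is a minimal sketch using the `transformers` fill-mask pipeline; `trust_remote_code=True` is assumed to be required because the checkpoint ships custom tokenizer and model code:

```python
from transformers import pipeline

# Build a fill-mask pipeline around the specialized checkpoint.
# trust_remote_code=True lets transformers load the custom CXR-BERT code from the Hub.
fill_mask = pipeline(
    "fill-mask",
    model="microsoft/BiomedVLP-CXR-BERT-specialized",
    trust_remote_code=True,
)

# Predict the masked token in a radiology-style sentence (same text as the widget).
for prediction in fill_mask("Left pleural effusion with adjacent [MASK]."):
    print(f"{prediction['token_str']:>15}  score={prediction['score']:.3f}")
```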

First, we pretrain [**CXR-BERT-general**](https://huggingface.co/microsoft/BiomedVLP-CXR-BERT-general) from a randomly initialized BERT model via Masked Language Modeling (MLM) on abstracts from [PubMed](https://pubmed.ncbi.nlm.nih.gov/) and clinical notes from the publicly available [MIMIC-III](https://physionet.org/content/mimiciii/1.4/) and [MIMIC-CXR](https://physionet.org/content/mimic-cxr/) datasets. In that regard, the general model is expected to be applicable to research in clinical domains other than chest radiology through domain-specific fine-tuning.

**CXR-BERT-specialized** is continually pretrained from CXR-BERT-general to further specialize in the chest X-ray domain. In the final stage, CXR-BERT is trained in a multi-modal contrastive learning framework, similar to the [CLIP](https://arxiv.org/abs/2103.00020) framework. The latent representation of the [CLS] token is used to align text and image embeddings.
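
The snippet below sketches how such aligned sentence embeddings might be obtained and compared. It assumes the checkpoint's custom code exposes a `get_projected_text_embeddings` helper returning the projected [CLS] representation, as used in the HI-ML inference pipelines; treat the method name as an assumption rather than a stable API:

```python
import torch
from transformers import AutoModel, AutoTokenizer

url = "microsoft/BiomedVLP-CXR-BERT-specialized"
tokenizer = AutoTokenizer.from_pretrained(url, trust_remote_code=True)
model = AutoModel.from_pretrained(url, trust_remote_code=True)

# Two radiology-style sentences whose semantic similarity we want to measure.
text_prompts = ["There is no pneumothorax or pleural effusion.",
                "No pleural effusion or pneumothorax is seen."]

tokens = tokenizer(text_prompts, add_special_tokens=True,
                   padding="longest", return_tensors="pt")

# Assumed helper from the custom model code: projected (and normalized)
# [CLS] embeddings used for text/image alignment.
embeddings = model.get_projected_text_embeddings(input_ids=tokens.input_ids,
                                                 attention_mask=tokens.attention_mask)

# Cosine similarity between the two sentence embeddings.
print(torch.mm(embeddings, embeddings.t()))
```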

## Model variations

| Model                                             | Model identifier on HuggingFace          | Vocabulary     | Note                                                      |
|---------------------------------------------------|------------------------------------------|----------------|-----------------------------------------------------------|
| CXR-BERT-general                                  | microsoft/BiomedVLP-CXR-BERT-general     | PubMed & MIMIC | Pretrained for biomedical literature and clinical domains |
| CXR-BERT-specialized (after multi-modal training) | microsoft/BiomedVLP-CXR-BERT-specialized | PubMed & MIMIC | Pretrained for chest X-ray domain                         |
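
Both variants can be loaded directly by their identifiers; a minimal sketch (again assuming `trust_remote_code=True` is required for the bundled custom code):

```python
from transformers import AutoModel, AutoTokenizer

# Load each variant by its HuggingFace identifier from the table above.
for name in ("microsoft/BiomedVLP-CXR-BERT-general",
             "microsoft/BiomedVLP-CXR-BERT-specialized"):
    tokenizer = AutoTokenizer.from_pretrained(name, trust_remote_code=True)
    model = AutoModel.from_pretrained(name, trust_remote_code=True)
    print(name, "vocabulary size:", tokenizer.vocab_size)
```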

## Citation

```bibtex
@misc{https://doi.org/10.48550/arxiv.2204.09817,
  title     = {Making the Most of Text Semantics to Improve Biomedical Vision-Language Processing},
  author    = {Boecking, Benedikt and Usuyama, Naoto and Bannur, Shruthi and Castro, Daniel C. and Schwaighofer, Anton and Hyland, Stephanie and Wetscherek, Maria and Naumann, Tristan and Nori, Aditya and Alvarez-Valle, Javier and Poon, Hoifung and Oktay, Ozan},
  publisher = {arXiv},
  year      = {2022},
  url       = {https://arxiv.org/abs/2204.09817},
  doi       = {10.48550/ARXIV.2204.09817},
}
```

## Model Use

### Intended Use

This model is intended to be used solely for (I) future research on vision-language processing and (II) reproducibility of the experimental results reported in the reference paper.

#### Primary Intended Use

The primary intended use is to support AI researchers building on top of this work. CXR-BERT and its associated models should be helpful for exploring various clinical NLP & VLP research questions, especially in the radiology domain.

#### Out-of-Scope Use

**Any** deployed use case of the model, commercial or otherwise, is currently out of scope. Although we evaluated the models using a broad set of publicly available research benchmarks, the models and evaluations are not intended for deployed use cases. Please refer to [the associated paper](https://arxiv.org/abs/2204.09817) for more details.

## Data

This model builds upon existing publicly available datasets:

- [PubMed](https://pubmed.ncbi.nlm.nih.gov/)
- [MIMIC-III](https://physionet.org/content/mimiciii/)
- [MIMIC-CXR](https://physionet.org/content/mimic-cxr/)

These datasets reflect a broad variety of sources, ranging from biomedical abstracts to intensive care unit notes to chest X-ray radiology notes. In the MIMIC-CXR dataset, the radiology notes are accompanied by their associated chest X-ray DICOM images.

## Performance

We demonstrate that this language model achieves state-of-the-art results in radiology natural language inference through its improved vocabulary and a novel language pretraining objective that leverages the semantics and discourse characteristics of radiology reports.

A highlight of the comparison to other common models, including [ClinicalBERT](https://aka.ms/clinicalbert) and [PubMedBERT](https://aka.ms/pubmedbert):

|                                                 | RadNLI accuracy (MedNLI transfer) | Mask prediction accuracy | Avg. # tokens after tokenization | Vocabulary size |
| ----------------------------------------------- | :-------------------------------: | :----------------------: | :------------------------------: | :-------------: |
| RadNLI baseline                                 | 53.30                             | -                        | -                                | -               |
| ClinicalBERT                                    | 47.67                             | 39.84                    | 78.98 (+38.15%)                  | 28,996          |
| PubMedBERT                                      | 57.71                             | 35.24                    | 63.55 (+11.16%)                  | 28,895          |
| CXR-BERT (after Phase-III)                      | 60.46                             | 77.72                    | 58.07 (+1.59%)                   | 30,522          |
| **CXR-BERT (after Phase-III + Joint Training)** | **65.21**                         | **81.58**                | **58.07 (+1.59%)**               | 30,522          |
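
The "Avg. # tokens after tokenization" column indicates how well each vocabulary fits radiology text: a lower count means fewer words are split into fragments. A minimal sketch of how such a comparison could be reproduced for a single sentence; the ClinicalBERT and PubMedBERT identifiers below are assumptions based on commonly used Hub checkpoints, not taken from this card:

```python
from transformers import AutoTokenizer

sentence = "Left pleural effusion with adjacent atelectasis."

# Hub identifiers for the baseline tokenizers are assumptions.
checkpoints = {
    "ClinicalBERT": "emilyalsentzer/Bio_ClinicalBERT",
    "PubMedBERT": "microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext",
    "CXR-BERT": "microsoft/BiomedVLP-CXR-BERT-specialized",
}

for name, checkpoint in checkpoints.items():
    tokenizer = AutoTokenizer.from_pretrained(checkpoint, trust_remote_code=True)
    print(f"{name:>12}: {len(tokenizer.tokenize(sentence))} tokens")
```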

CXR-BERT also contributes to better vision-language representation learning through its improved text encoding capability. Below is the zero-shot phrase grounding performance on the **MS-CXR** dataset, which evaluates the quality of image-text latent representations.

| Vision–Language Pretraining Method | Text Encoder | MS-CXR Phrase Grounding (Avg. CNR Score) |
| ---------------------------------- | ------------ | :--------------------------------------: |
| Baseline                           | ClinicalBERT | 0.769                                    |
| Baseline                           | PubMedBERT   | 0.773                                    |
| ConVIRT                            | ClinicalBERT | 0.818                                    |
| GLoRIA                             | ClinicalBERT | 0.930                                    |
| **BioViL**                         | **CXR-BERT** | **1.027**                                |
| **BioViL-L**                       | **CXR-BERT** | **1.142**                                |

Additional details about performance can be found in the corresponding paper, [Making the Most of Text Semantics to Improve Biomedical Vision-Language Processing](https://arxiv.org/abs/2204.09817).

## Limitations

This model was developed using English corpora, and thus can be considered English-only.

## Further information

Please refer to the corresponding paper, [Making the Most of Text Semantics to Improve Biomedical Vision-Language Processing](https://arxiv.org/abs/2204.09817), for additional details on model training and evaluation.

For additional inference pipelines with CXR-BERT, please refer to the [HI-ML GitHub](https://github.com/microsoft/hi-ml/blob/main/multimodal/README.md) repository. The associated source files will soon be accessible through this link.