---
tags:
- generated_from_trainer
metrics:
- accuracy
base_model: microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext
model-index:
- name: PubMedBERT-MNLI-MedNLI
  results: []
---

# PubMedBERT-MNLI-MedNLI

This model is a version of [PubMedBERT](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) fine-tuned first on the [MNLI](https://huggingface.co/datasets/multi_nli) dataset and then on the [MedNLI](https://physionet.org/content/mednli/1.0.0/) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9501
- Accuracy: 0.8667

## Model description

More information needed

## Intended uses & limitations

The model can be used for natural language inference (NLI) tasks on biomedical text and can also be adapted to fact-checking tasks. It can be used through the Hugging Face pipeline API as follows:


```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, TextClassificationPipeline

# Load the fine-tuned model and map its three output indices to NLI labels.
model = AutoModelForSequenceClassification.from_pretrained(
    "pritamdeka/PubMedBERT-MNLI-MedNLI", num_labels=3,
    id2label={0: 'contradiction', 1: 'entailment', 2: 'neutral'})
tokenizer = AutoTokenizer.from_pretrained("pritamdeka/PubMedBERT-MNLI-MedNLI")

# device=0 assumes a GPU; use device=-1 to run on CPU.
pipe = TextClassificationPipeline(model=model, tokenizer=tokenizer,
                                  return_all_scores=True, device=0, batch_size=128)

pipe(['ALDH1 expression is associated with better breast cancer outcomes',
      'In a series of 577 breast carcinomas, expression of ALDH1 detected by immunostaining correlated with poor prognosis.'])
```

The output for the example above will be:

```python
[[{'label': 'contradiction', 'score': 0.10193759202957153},
  {'label': 'entailment', 'score': 0.2933262586593628},
  {'label': 'neutral', 'score': 0.6047361493110657}],
 [{'label': 'contradiction', 'score': 0.21726925671100616},
  {'label': 'entailment', 'score': 0.24485822021961212},
  {'label': 'neutral', 'score': 0.5378724932670593}]]
```
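
If only the predicted label is needed for each claim-evidence pair, the per-label scores can be reduced with a simple post-processing step. A minimal sketch, continuing from the `pipe` object defined above:

```python
results = pipe(['ALDH1 expression is associated with better breast cancer outcomes',
                'In a series of 577 breast carcinomas, expression of ALDH1 detected by immunostaining correlated with poor prognosis.'])

# Keep only the highest-scoring label for each input.
predictions = [max(scores, key=lambda s: s['score'])['label'] for scores in results]
print(predictions)  # ['neutral', 'neutral'] for the scores shown above
```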

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20.0
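
The original training script is not part of this card, so the following is only a minimal sketch of how these hyperparameters map onto the Hugging Face `Trainer` API. The checkpoint name and the dataset variables are assumptions; MedNLI in particular requires credentialed access through PhysioNet and must be prepared separately.

```python
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Assumption: the MedNLI stage continues from an MNLI-tuned checkpoint; the
# same recipe would be applied once per dataset (MNLI first, then MedNLI).
model = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext", num_labels=3)
tokenizer = AutoTokenizer.from_pretrained(
    "microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext")

training_args = TrainingArguments(
    output_dir="PubMedBERT-MNLI-MedNLI",
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=20.0,
    evaluation_strategy="steps",  # the results table below logs every 500 steps
    eval_steps=500,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-8 is the Trainer default.
)

# tokenized_train / tokenized_eval are placeholders for the tokenized dataset
# splits; supply them before running.
# trainer = Trainer(model=model, args=training_args,
#                   train_dataset=tokenized_train, eval_dataset=tokenized_eval)
# trainer.train()
```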

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5673        | 1.42  | 500  | 0.4358          | 0.8437   |
| 0.2898        | 2.85  | 1000 | 0.4845          | 0.8523   |
| 0.1669        | 4.27  | 1500 | 0.6233          | 0.8573   |
| 0.1087        | 5.7   | 2000 | 0.7263          | 0.8573   |
| 0.0728        | 7.12  | 2500 | 0.8841          | 0.8638   |
| 0.0512        | 8.55  | 3000 | 0.9501          | 0.8667   |
| 0.0372        | 9.97  | 3500 | 1.0440          | 0.8566   |
| 0.0262        | 11.4  | 4000 | 1.0770          | 0.8609   |
| 0.0243        | 12.82 | 4500 | 1.0931          | 0.8616   |
| 0.023         | 14.25 | 5000 | 1.1088          | 0.8631   |
| 0.0163        | 15.67 | 5500 | 1.1264          | 0.8581   |
| 0.0111        | 17.09 | 6000 | 1.1541          | 0.8616   |
| 0.0098        | 18.52 | 6500 | 1.1542          | 0.8631   |
| 0.0074        | 19.94 | 7000 | 1.1653          | 0.8638   |


### Framework versions

- Transformers 4.22.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1


## Citing & Authors

If you use this model, kindly cite the following work:

```bibtex
@inproceedings{deka-etal-2023-multiple,
    title = "Multiple Evidence Combination for Fact-Checking of Health-Related Information",
    author = "Deka, Pritam  and
      Jurek-Loughrey, Anna  and
      P, Deepak",
    booktitle = "The 22nd Workshop on Biomedical Natural Language Processing and BioNLP Shared Tasks",
    month = jul,
    year = "2023",
    address = "Toronto, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.bionlp-1.20",
    pages = "237--247",
    abstract = "Fact-checking of health-related claims has become necessary in this digital age, where any information posted online is easily available to everyone. The most effective way to verify such claims is by using evidences obtained from reliable sources of medical knowledge, such as PubMed. Recent advances in the field of NLP have helped automate such fact-checking tasks. In this work, we propose a domain-specific BERT-based model using a transfer learning approach for the task of predicting the veracity of claim-evidence pairs for the verification of health-related facts. We also improvise on a method to combine multiple evidences retrieved for a single claim, taking into consideration conflicting evidences as well. We also show how our model can be exploited when labelled data is available and how back-translation can be used to augment data when there is data scarcity.",
}

```