---
tags:
  - generated_from_trainer
metrics:
  - accuracy
model-index:
  - name: PubMedBERT-MNLI-MedNLI
    results: []
---

# PubMedBERT-MNLI-MedNLI

This model is a fine-tuned version of PubMedBERT, trained first on the MNLI dataset and then on the MedNLI dataset. It achieves the following results on the evaluation set:

- Loss: 0.9501
- Accuracy: 0.8667

## Model description

More information needed

## Intended uses & limitations

The model can be used for NLI tasks on biomedical text and can also be adapted to fact-checking tasks. It can be used through the Hugging Face pipeline API as follows:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, TextClassificationPipeline

# Load the fine-tuned model with an explicit mapping from output indices to the NLI labels
model = AutoModelForSequenceClassification.from_pretrained(
    "pritamdeka/PubMedBERT-MNLI-MedNLI",
    num_labels=3, id2label={0: 'contradiction', 1: 'entailment', 2: 'neutral'})
tokenizer = AutoTokenizer.from_pretrained("pritamdeka/PubMedBERT-MNLI-MedNLI")
# return_all_scores=True returns a score for every label; each input string is classified independently
pipe = TextClassificationPipeline(model=model, tokenizer=tokenizer, return_all_scores=True, device=0, batch_size=128)

pipe(['ALDH1 expression is associated with better breast cancer outcomes',
      'In a series of 577 breast carcinomas, expression of ALDH1 detected by immunostaining correlated with poor prognosis.'])
```

The output for the above will be:

```
[[{'label': 'contradiction', 'score': 0.10193759202957153},
  {'label': 'entailment', 'score': 0.2933262586593628},
  {'label': 'neutral', 'score': 0.6047361493110657}],
 [{'label': 'contradiction', 'score': 0.21726925671100616},
  {'label': 'entailment', 'score': 0.24485822021961212},
  {'label': 'neutral', 'score': 0.5378724932670593}]]
```
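
Each string in the example above is scored on its own. To score a premise and hypothesis jointly, as in the usual NLI setup, the pipeline also accepts dictionary inputs with `text` and `text_pair` keys; a minimal sketch, with the pairing chosen here only for illustration:

```python
# Jointly classify a premise-hypothesis pair (illustrative pairing, not from the original example)
pipe({'text': 'In a series of 577 breast carcinomas, expression of ALDH1 detected by immunostaining correlated with poor prognosis.',
      'text_pair': 'ALDH1 expression is associated with better breast cancer outcomes'})
```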

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20.0
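
For reference, a minimal sketch of how the hyperparameters above could be expressed with the Hugging Face `TrainingArguments` API; the output directory and the 500-step evaluation cadence (implied by the results table below) are assumptions, not details taken from the original training script:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="PubMedBERT-MNLI-MedNLI",   # hypothetical output path
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    adam_beta1=0.9,                        # Adam defaults, listed explicitly to match the card
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=20.0,
    evaluation_strategy="steps",           # assumption: evaluate every 500 steps, as in the results table
    eval_steps=500,
)
```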

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5673        | 1.42  | 500  | 0.4358          | 0.8437   |
| 0.2898        | 2.85  | 1000 | 0.4845          | 0.8523   |
| 0.1669        | 4.27  | 1500 | 0.6233          | 0.8573   |
| 0.1087        | 5.7   | 2000 | 0.7263          | 0.8573   |
| 0.0728        | 7.12  | 2500 | 0.8841          | 0.8638   |
| 0.0512        | 8.55  | 3000 | 0.9501          | 0.8667   |
| 0.0372        | 9.97  | 3500 | 1.0440          | 0.8566   |
| 0.0262        | 11.4  | 4000 | 1.0770          | 0.8609   |
| 0.0243        | 12.82 | 4500 | 1.0931          | 0.8616   |
| 0.023         | 14.25 | 5000 | 1.1088          | 0.8631   |
| 0.0163        | 15.67 | 5500 | 1.1264          | 0.8581   |
| 0.0111        | 17.09 | 6000 | 1.1541          | 0.8616   |
| 0.0098        | 18.52 | 6500 | 1.1542          | 0.8631   |
| 0.0074        | 19.94 | 7000 | 1.1653          | 0.8638   |

### Framework versions

- Transformers 4.22.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1

## Citing & Authors

If you use the model, kindly cite the following work (paper accepted at BioNLP 2023 @ ACL 2023; the citation will be updated):

Multiple Evidence Combination for Fact-Checking of Health-Related Information