---
license: apache-2.0
datasets:
- nyu-mll/multi_nli
language:
- en
metrics:
- accuracy
library_name: adapter-transformers
pipeline_tag: text-classification
tags:
- code
base_model:
- sinancavdar/BertForSequenceClassification
---

# Entailment Detection by Fine-tuning BERT

---

- The model in this repository is fine-tuned from BERT, Google's encoder-only transformer model.
- New York University's Multi-NLI dataset is used for fine-tuning.
- Accuracy achieved: ~74%

  ![image/png](https://cdn-uploads.huggingface.co/production/uploads/66459d9b9a74ece3a312e380/X1RdqHS6zLI874J4bz1Kb.png)

- Notebook used for fine-tuning: [here](https://huggingface.co/ArghaKamalSamanta/ema_task_entailment/blob/main/ema-task-bert-finetuning.ipynb)
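A minimal inference sketch with the `transformers` library is shown below. The repo id is taken from the notebook link above, and the index-to-label order is an assumption; verify it against the checkpoint's `config.id2label` before relying on it.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Repo id inferred from the notebook link above -- adjust if you load
# the model from a different location.
model_id = "ArghaKamalSamanta/ema_task_entailment"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

# BERT-style NLI models take the premise/hypothesis pair as one input
# sequence: [CLS] premise [SEP] hypothesis [SEP].
inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

# Multi-NLI has three classes; this index-to-label order is an assumption --
# check model.config.id2label for the actual mapping.
labels = ["entailment", "neutral", "contradiction"]
probs = torch.softmax(logits, dim=-1).squeeze()
print({label: round(prob.item(), 3) for label, prob in zip(labels, probs)})
```
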
_**N.B.:** Due to computational resource constraints, only 11K samples were used for fine-tuning. There is room for accuracy improvement if the model were trained on all ~390K samples available in the dataset._
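For readers who want to reproduce or extend the fine-tuning, the sketch below shows how the 11K-sample setup might look with the `datasets` and `transformers` libraries. The subsampling seed, sequence length, base checkpoint (`bert-base-uncased` is used here as a stand-in), and hyperparameters are illustrative assumptions, not the exact values from the linked notebook.

```python
import numpy as np
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Subsample Multi-NLI to 11K training examples, mirroring the
# resource-constrained setup described in the note above.
raw = load_dataset("nyu-mll/multi_nli")
train_ds = raw["train"].shuffle(seed=42).select(range(11_000))
eval_ds = raw["validation_matched"].select(range(2_000))

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize(batch):
    # Encode premise/hypothesis pairs as single paired inputs.
    return tokenizer(batch["premise"], batch["hypothesis"],
                     truncation=True, max_length=128)

train_ds = train_ds.map(tokenize, batched=True)
eval_ds = eval_ds.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=3)  # entailment / neutral / contradiction

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    return {"accuracy": (np.argmax(logits, axis=-1) == labels).mean()}

args = TrainingArguments(
    output_dir="bert-mnli-entailment",
    per_device_train_batch_size=16,
    num_train_epochs=3,
    learning_rate=2e-5,
)

trainer = Trainer(model=model, args=args, train_dataset=train_ds,
                  eval_dataset=eval_ds, tokenizer=tokenizer,
                  compute_metrics=compute_metrics)
trainer.train()
print(trainer.evaluate())  # reports accuracy on the matched validation split
```

Training on the full ~390K-example train split only requires dropping the `select(range(11_000))` call, at a correspondingly higher compute cost.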