---
license: apache-2.0
datasets:
- nyu-mll/multi_nli
language:
- en
metrics:
- accuracy
library_name: adapter-transformers
pipeline_tag: text-classification
tags:
- code
base_model:
- sinancavdar/BertForSequenceClassification
---
# Entailment Detection by Fine-tuning BERT
- The model in this repository is a fine-tuned version of BERT, Google's encoder-only transformer model.
- New York University's MultiNLI dataset is used for fine-tuning.
- Accuracy achieved: ~74%

![image/png](https://cdn-uploads.huggingface.co/production/uploads/66459d9b9a74ece3a312e380/X1RdqHS6zLI874J4bz1Kb.png)

- Notebook used for fine-tuning: [here](https://huggingface.co/ArghaKamalSamanta/ema_task_entailment/blob/main/ema-task-bert-finetuning.ipynb)

*<b>N.B.:</b> Due to computational resource constraints, only 11K samples were used for fine-tuning. There is room for accuracy improvement if the model is trained on all 390K samples available in the dataset.*
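The model card above can be exercised with a short inference sketch. The snippet below is an assumption-laden example, not part of this repository: it loads the model from this repo's Hub id and uses the standard MultiNLI label order (0 = entailment, 1 = neutral, 2 = contradiction) as documented for the `nyu-mll/multi_nli` dataset; verify the label mapping against the model's `config.json` before relying on it.

```python
# Hedged sketch: entailment inference with the fine-tuned model.
# The model id and label mapping below are assumptions; check the
# repository's config.json for the authoritative id2label mapping.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Conventional MultiNLI label order (assumed, per the dataset card).
MULTI_NLI_LABELS = {0: "entailment", 1: "neutral", 2: "contradiction"}

def predict_entailment(premise: str,
                       hypothesis: str,
                       model_id: str = "ArghaKamalSamanta/ema_task_entailment") -> str:
    """Return the predicted NLI label for a premise/hypothesis pair."""
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForSequenceClassification.from_pretrained(model_id)
    model.eval()
    # BERT-style NLI encodes the pair as: [CLS] premise [SEP] hypothesis [SEP]
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return MULTI_NLI_LABELS[int(logits.argmax(dim=-1))]

if __name__ == "__main__":
    print(predict_entailment("A man is playing a guitar.",
                             "A person is making music."))
```

The demo call is kept under the `__main__` guard so importing the module does not trigger a model download.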