ArghaKamalSamanta committed on
Commit 91884de
1 Parent(s): 600f478

Update README.md

Files changed (1)
  1. README.md +15 -1
README.md CHANGED
@@ -1,7 +1,21 @@
+ ---
+ license: apache-2.0
+ datasets:
+ - nyu-mll/multi_nli
+ language:
+ - en
+ metrics:
+ - accuracy
+ library_name: adapter-transformers
+ pipeline_tag: text-classification
+ tags:
+ - code
+ base_model: bert
+ ---
  # Entailment Detection by Fine-tuning BERT
  ----------------------------------------------
  <li>The model in this repository is fine-tuned on Google's encoder-only transformer-based model BERT.</li>
  <li>New York University's Multi-NLI dataset is used for fine-tuning.</li>
  <li>Accuracy achieved: ~73%</li>
  <p></p><p></p>
- <i><b>N.B.:</b> Due to computational resource constraints, only 11K samples are used for fine-tuning. There is room for accuracy improvement if a model is trained on all the 390K samples in the dataset.</i>
+ <i><b>N.B.:</b> Due to computational resource constraints, only 11K samples are used for fine-tuning. There is room for accuracy improvement if a model is trained on all the 390K samples in the dataset.</i>
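
For anyone who wants to try the checkpoint this README describes, here is a minimal inference sketch using the Hugging Face `transformers` API (plain `transformers` rather than `adapter-transformers`, for simplicity). The repo id, the example sentence pair, and the reliance on `config.id2label` for the label names are assumptions for illustration, not details taken from this commit; substitute the actual model id of this repository.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical repo id -- replace with the real model id of this repository on the Hub.
MODEL_ID = "ArghaKamalSamanta/bert-mnli-entailment"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)
model.eval()

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

# BERT consumes the sentence pair packed as: [CLS] premise [SEP] hypothesis [SEP]
inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

pred_id = logits.argmax(dim=-1).item()
# Multi-NLI has three classes (entailment / neutral / contradiction); the exact
# id-to-label order depends on how the classification head was trained, so read it from the config.
print(model.config.id2label.get(pred_id, str(pred_id)))
```

Because only ~11K of the 390K Multi-NLI training examples were used, predictions may be noisier than those of a fully fine-tuned MNLI model; spot-checking a few premise/hypothesis pairs by hand is a reasonable sanity test.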