versae committed
Commit
dfa6cb5
1 Parent(s): 5abdd67
Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -26,13 +26,13 @@ widget:
  # NB-Bert base model finetuned on Norwegian machine translated MNLI

  ## Description
- The most effective way of creating a good classifier is to finetune it for this specific task. However, in many cases this is simply impossible.
- [Yin et al.](https://arxiv.org/abs/1909.00161) has proposed a very clever way of using pre-trained MNLI model as a zero-shot sequence classifiers. The methods works by reformulating the question to an MNLI hypothesis. If we want to figure out if a text is about "sport", we simply state that "This text is about sport" ("Denne teksten handler om sport").
+ The most effective way of creating a good classifier is to finetune a pre-trained model for the specific task at hand. However, in many cases this is simply impossible.
+ [Yin et al.](https://arxiv.org/abs/1909.00161) proposed a very clever way of using pre-trained MNLI models as zero-shot sequence classifiers. The method works by reformulating the question as an MNLI hypothesis: if we want to figure out whether a text is about "sport", we simply state the hypothesis "This text is about sport" ("Denne teksten handler om sport").

  When the model is finetuned on the 400k-example MNLI task, it is in many cases able to solve such classification tasks. There is no MNLI set of this size in Norwegian, but we have trained the model on a machine-translated version of the original MNLI set.

  ## Hugging Face zero-shot-classification pipeline
- The easiest way to try this out is using the Hugging Face pipeline. Please note that you will improve the results by overriding the English hypothesis template.
+ The easiest way to try this out is by using the Hugging Face pipeline. Please note that you will get better results when using a Norwegian hypothesis template instead of the default English one.
  ```python
  from transformers import pipeline
  classifier = pipeline("zero-shot-classification", model="NBAiLab/nb-bert-base-mnli")
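# Minimal usage sketch continuing the snippet above: the example sentence,
# candidate labels, and the Norwegian hypothesis template "Dette eksempelet er {}."
# are illustrative assumptions, not taken from the diff itself.
sequence = "Jeg elsker å spille fotball i helgene."
candidate_labels = ["sport", "politikk", "religion"]
result = classifier(
    sequence,
    candidate_labels,
    hypothesis_template="Dette eksempelet er {}.",
)
# The zero-shot pipeline returns labels sorted by score, highest first.
print(result["labels"][0], result["scores"][0])
```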