When the model is finetuned on the 400k-example MNLI task, it is in many cases able to solve classification tasks without any further training. No MNLI dataset of this size exists in Norwegian, but we have trained the model on a machine-translated version of the original MNLI dataset.
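The idea behind this zero-shot setup is that each candidate label is turned into an NLI hypothesis, and the MNLI-finetuned model scores how strongly the input text entails each hypothesis. A minimal sketch of the hypothesis construction (the Norwegian template string and label names here are illustrative assumptions, not taken from this model card):

```python
# Sketch: how candidate labels become NLI hypotheses in zero-shot classification.
# The text to classify acts as the NLI premise; each label is slotted into a
# template to form a hypothesis, and the model's entailment probability for
# each (premise, hypothesis) pair becomes that label's score.

def build_hypotheses(labels, template="Dette eksempelet er {}."):
    # One hypothesis per candidate label (template is an assumed example).
    return [template.format(label) for label in labels]

labels = ["politikk", "sport", "religion"]
print(build_hypotheses(labels))
# → ['Dette eksempelet er politikk.', 'Dette eksempelet er sport.', 'Dette eksempelet er religion.']
```

This is why a Norwegian hypothesis template matters: the model sees the template text as part of every premise/hypothesis pair it scores.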
## Testing the model
For testing the model, we recommend the [NbAiLab Colab Notebook](https://colab.research.google.com/gist/peregilk/769b5150a2f807219ab8f15dd11ea449/nbailab-mnli-norwegian-demo.ipynb).
## Hugging Face zero-shot-classification pipeline
The easiest way to try this out is by using the Hugging Face pipeline. Please note that you will get better results when using a Norwegian hypothesis template instead of the default English one.
```python