Update README.md
README.md
@@ -56,7 +56,7 @@ And now the hypothesis in French and the premise in English (cross-language cont
 
 # Zero-shot Classification
 The primary interest of training such models lies in their zero-shot classification performance. This means that the model is able to classify any text with any label
-without a specific training. What sets the Bloomz-
+without a specific training. What sets the Bloomz-7b1-mt-NLI LLMs apart in this domain is their ability to model and extract information from significantly more complex
 and lengthy text structures compared to models like BERT, RoBERTa, or CamemBERT.
 
 The zero-shot classification task can be summarized by:
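For reference, zero-shot classification with an NLI-fine-tuned model of this kind can be exercised through the `transformers` zero-shot-classification pipeline. The sketch below is illustrative only: the model identifier, the example text, the candidate labels, and the hypothesis template are assumptions, not taken from this commit.

```python
from transformers import pipeline

# Hypothetical checkpoint name; substitute the actual Bloomz-7b1-mt-NLI
# model identifier published on the Hugging Face Hub.
classifier = pipeline(
    "zero-shot-classification",
    model="cmarkea/bloomz-7b1-mt-nli",
)

text = (
    "The customer support was slow to respond and the replacement part "
    "arrived damaged, so I am asking for a full refund."
)

# Candidate labels are arbitrary strings chosen at inference time:
# no label-specific training is required.
result = classifier(
    text,
    candidate_labels=["complaint", "praise", "question"],
    hypothesis_template="This text is a {}.",
)

# The pipeline returns labels sorted by score; print the top prediction.
print(result["labels"][0], result["scores"][0])
```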