---
license: apache-2.0
tags:
- mdeberta-v3-base
- text-classification
- nli
- natural-language-inference
- multilingual
- multitask
- multi-task
- pipeline
- extreme-multi-task
- extreme-mtl
- tasksource
- zero-shot
- rlhf
datasets:
- xnli
- metaeval/xnli
- americas_nli
- MoritzLaurer/multilingual-NLI-26lang-2mil7
- stsb_multi_mt
- paws-x
- miam
- strombergnlp/x-stance
- tyqiangz/multilingual-sentiments
- metaeval/universal-joy
- amazon_reviews_multi
- cardiffnlp/tweet_sentiment_multilingual
- strombergnlp/offenseval_2020
- offenseval_dravidian
- nedjmaou/MLMA_hate_speech
- xglue
- ylacombe/xsum_factuality
- metaeval/x-fact
- pasinit/xlwic
- tasksource/oasst1_dense_flat
- papluca/language-identification
- wili_2018
- exams
- xcsr
- xcopa
- juletxara/xstory_cloze
- Anthropic/hh-rlhf
- universal_dependencies
- tasksource/oasst1_pairwise_rlhf_reward
- OpenAssistant/oasst1
language:
- multilingual
- zh
- ja
- ar
- ko
- de
- fr
- es
- pt
- hi
- id
- it
- tr
- ru
- bn
- ur
- mr
- ta
- vi
- fa
- pl
- uk
- nl
- sv
- he
- sw
- ps
pipeline_tag: zero-shot-classification
---

# Model Card for mDeBERTa-v3-base-tasksource-nli

Multilingual [mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) fine-tuned for 30k steps of multi-task training on [mtasksource](https://github.com/sileod/tasksource/blob/main/mtasks.md).
This model can be used as a stable starting point for further fine-tuning, directly as a zero-shot NLI model, or in a zero-shot classification pipeline (see the sketch after the adapter example below).
In addition, you can use the provided [adapters](https://huggingface.co/sileod/mdeberta-v3-base-tasksource-adapters) to directly load a model for hundreds of tasks. 
```python
!pip install tasknet tasksource -q
import tasknet as tn

# Load the base model together with the classification head for the
# miam/dihana dialogue-act task.
pipe = tn.load_pipeline(
  'sileod/mdeberta-v3-base-tasksource-nli',
  'miam/dihana')
pipe(['si', 'como esta?'])  # Spanish: "yes", "how are you?"
```
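
The model is also compatible with the standard `transformers` zero-shot classification pipeline. A minimal sketch, where the input text and candidate labels are purely illustrative:

```python
from transformers import pipeline

# The model's NLI head drives the zero-shot classification pipeline.
classifier = pipeline(
    "zero-shot-classification",
    model="sileod/mdeberta-v3-base-tasksource-nli")

# Any label set, in any of the supported languages, can be used here.
classifier(
    "The central bank raised interest rates again this quarter.",
    candidate_labels=["economy", "sports", "politics", "science"])
```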

For more details, see [deberta-v3-base-tasksource-nli](https://huggingface.co/sileod/deberta-v3-base-tasksource-nli), substituting mtasksource for tasksource.
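
The NLI head can also be queried directly for premise/hypothesis scoring. A minimal sketch; the label names are read from the model config rather than hard-coded, since the label ordering is model-specific:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "sileod/mdeberta-v3-base-tasksource-nli"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

# Example pair; any of the supported languages can be used.
premise = "The cat is sleeping on the sofa."
hypothesis = "An animal is asleep."

inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map class indices to label names via the model's own config.
probs = logits.softmax(-1).squeeze().tolist()
for i, p in enumerate(probs):
    print(model.config.id2label[i], round(p, 3))
```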

# Software
- [tasksource](https://github.com/sileod/tasksource/)
- [tasknet](https://github.com/sileod/tasknet/)

# Contact and citation
For help integrating tasksource into your experiments, please contact [damien.sileo@inria.fr](mailto:damien.sileo@inria.fr).

For more details, refer to this [article](https://arxiv.org/abs/2301.05948):
```bibtex
@article{sileo2023tasksource,
  title={tasksource: Structured Dataset Preprocessing Annotations for Frictionless Extreme Multi-Task Learning and Evaluation},
  author={Sileo, Damien},
  url= {https://arxiv.org/abs/2301.05948},
  journal={arXiv preprint arXiv:2301.05948},
  year={2023}
}
```