ONNX version of papluca/xlm-roberta-base-language-detection
This model is a conversion of papluca/xlm-roberta-base-language-detection to ONNX format using the 🤗 Optimum library.
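A conversion like this can be reproduced with Optimum itself. The snippet below is a minimal sketch (the exact export settings used for this repository may differ): it loads the original PyTorch checkpoint, exports it to ONNX, and saves the result locally.

```python
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer

model_id = "papluca/xlm-roberta-base-language-detection"

# export=True converts the PyTorch weights to ONNX on the fly.
onnx_model = ORTModelForSequenceClassification.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Save the ONNX weights, config, and tokenizer files to a local directory.
output_dir = "xlm-roberta-base-language-detection-onnx"  # illustrative path
onnx_model.save_pretrained(output_dir)
tokenizer.save_pretrained(output_dir)
```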
Model description
This model is a fine-tuned version of xlm-roberta-base on the Language Identification dataset: an XLM-RoBERTa transformer with a classification head on top (i.e. a linear layer on top of the pooled output). For additional information, please refer to the xlm-roberta-base model card or to the paper Unsupervised Cross-lingual Representation Learning at Scale by Conneau et al.
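If you want to see that head concretely, one option is to load the original PyTorch checkpoint and print its classifier module. This is only an illustrative sketch and is not needed to use the ONNX model:

```python
from transformers import AutoModelForSequenceClassification

# Load the original (non-ONNX) checkpoint purely to inspect its architecture.
pt_model = AutoModelForSequenceClassification.from_pretrained(
    "papluca/xlm-roberta-base-language-detection"
)

# The classification head maps the pooled sequence representation
# to one logit per supported language.
print(pt_model.classifier)
print(pt_model.config.num_labels)  # 20 languages
```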
Intended uses & limitations
You can directly use this model as a language detector, i.e. for sequence classification tasks. Currently, it supports the following 20 languages:
arabic (ar), bulgarian (bg), german (de), modern greek (el), english (en), spanish (es), french (fr), hindi (hi), italian (it), japanese (ja), dutch (nl), polish (pl), portuguese (pt), russian (ru), swahili (sw), thai (th), turkish (tr), urdu (ur), vietnamese (vi), and chinese (zh)
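The mapping between label ids and these ISO 639-1 codes is stored in the model config, so it can be inspected directly. A small sketch (the printed ordering shown in the comment is illustrative, not the actual one):

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("laiyer/xlm-roberta-base-language-detection-onnx")
print(config.id2label)  # e.g. {0: "ja", 1: "nl", 2: "ar", ...}
```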
Usage
Optimum
Loading the model requires the 🤗 Optimum library to be installed (for example, `pip install optimum[onnxruntime]`).
```python
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer, pipeline

tokenizer = AutoTokenizer.from_pretrained("laiyer/xlm-roberta-base-language-detection-onnx")
model = ORTModelForSequenceClassification.from_pretrained("laiyer/xlm-roberta-base-language-detection-onnx")
classifier = pipeline(
    task="text-classification",
    model=model,
    tokenizer=tokenizer,
    top_k=None,
)

classifier_output = classifier("It's not a toxic comment")
print(classifier_output)
```
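With `top_k=None`, the pipeline returns, for each input text, a list of label/score entries covering all 20 languages, ordered from most to least likely. A short illustrative way to pick the top prediction (for an English input like the one above, the top label should be `en`):

```python
# classifier_output is a list with one entry per input text; each entry is a
# list of {"label": ..., "score": ...} dicts for all 20 languages.
top_prediction = classifier_output[0][0]  # highest-scoring language first
print(top_prediction["label"], round(top_prediction["score"], 4))
```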
LLM Guard
Community
Join our Slack to give us feedback, connect with the maintainers and fellow users, ask questions, or engage in discussions about LLM security!