# xlmr-large-hi-be-MLM-SQuAD-TyDi-MLQA Model Card
Use a pipeline as a high-level helper:

```python
from transformers import pipeline

pipe = pipeline("question-answering", model="hapandya/xlmr-large-hi-be-MLM-SQuAD-TyDi-MLQA")

# Example call (illustrative question and context, not from the model card):
result = pipe(question="Where is the Taj Mahal?", context="The Taj Mahal is in Agra.")
```
Load the model directly:

```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("hapandya/xlmr-large-hi-be-MLM-SQuAD-TyDi-MLQA")
model = AutoModelForQuestionAnswering.from_pretrained("hapandya/xlmr-large-hi-be-MLM-SQuAD-TyDi-MLQA")
```
Downloads last month: 115
Inference Providers: this model is not currently available via any of the supported third-party Inference Providers, and it is not deployed on the HF Inference API.