Japanese Natural Language Inference Model
This model was trained using the SentenceTransformers CrossEncoder class, a gradient-accumulation PR, and the code from CyberAgentAILab/japanese-nli-model.
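For orientation, here is a minimal sketch of what such a training run can look like with the SentenceTransformers CrossEncoder API. The base checkpoint, hyperparameters, and the `train_samples` placeholder are illustrative assumptions, not the exact training script; note that mainline `CrossEncoder.fit` has no gradient-accumulation argument, which is presumably why the card references a separate PR.

```python
# Hypothetical training sketch (not the exact script used for this model).
import math
from torch.utils.data import DataLoader
from sentence_transformers import InputExample
from sentence_transformers.cross_encoder import CrossEncoder

# train_samples: list of InputExample pairs, built from JNLI/JSICK elsewhere.
train_samples = [
    InputExample(texts=["子供が走っている猫を見ている", "猫が走っている"], label=1),  # entailment
]

# num_labels=3 gives one logit per NLI class.
model = CrossEncoder("xlm-roberta-large", num_labels=3)

train_dataloader = DataLoader(train_samples, shuffle=True, batch_size=16)
num_epochs = 3
warmup_steps = math.ceil(len(train_dataloader) * num_epochs * 0.1)

model.fit(
    train_dataloader=train_dataloader,
    epochs=num_epochs,
    warmup_steps=warmup_steps,
    output_path="xlm-roberta-large-jnli-jsick",
)
```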
Training Data
The model was trained on the JGLUE-JNLI and JSICK datasets. For a given sentence pair, it outputs three scores, one per label, in the fixed index order contradiction, entailment, neutral.
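As a rough illustration of turning these corpora into training pairs, the snippet below pulls JNLI through the community JGLUE loader on the Hugging Face Hub; the `shunk031/JGLUE` identifier and the field names are assumptions about that loader, not part of this model card.

```python
# Hypothetical data-prep sketch; the shunk031/JGLUE loader and its field
# names ("sentence1", "sentence2", "label") are assumptions.
from datasets import load_dataset
from sentence_transformers import InputExample

# Index order must match the label_mapping used at inference time below.
label2id = {"contradiction": 0, "entailment": 1, "neutral": 2}

jnli = load_dataset("shunk031/JGLUE", name="JNLI", split="train", trust_remote_code=True)
label_names = jnli.features["label"]  # ClassLabel feature; maps ids to strings

train_samples = [
    InputExample(
        texts=[row["sentence1"], row["sentence2"]],
        # Re-map the loader's label ids onto this model's index order.
        label=label2id[label_names.int2str(row["label"])],
    )
    for row in jnli
]
```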
Usage
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained('cyberagent/xlm-roberta-large-jnli-jsick')
model = AutoModelForSequenceClassification.from_pretrained('cyberagent/xlm-roberta-large-jnli-jsick')

# Premises and hypotheses are passed as two parallel lists of sentences.
features = tokenizer(
    ["子供が走っている猫を見ている",  # "The child is watching a cat that is running."
     "猫が走っている"],              # "The cat is running."
    ["猫が走っている",               # "The cat is running."
     "子供が走っている"],            # "The child is running."
    padding=True, truncation=True, return_tensors="pt",
)

model.eval()
with torch.no_grad():
    scores = model(**features).logits

# Index order of the three output logits.
label_mapping = ['contradiction', 'entailment', 'neutral']
labels = [label_mapping[score_max] for score_max in scores.argmax(dim=1)]
print(labels)
```
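If per-class probabilities are preferred over hard labels, the logits can be passed through a softmax. This continuation of the snippet above is a common pattern rather than something the card itself prescribes:

```python
# Continuation of the snippet above: convert logits to class probabilities.
probs = torch.softmax(scores, dim=1)
for pair_probs in probs:
    print({label: round(p.item(), 3) for label, p in zip(label_mapping, pair_probs)})
```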