
Model Description

Fine-tuned xlm-roberta-large for sentiment analysis in English and Bahasa Indonesia.
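
A minimal usage sketch with the transformers text-classification pipeline. The model ID matches this repository (carant-ai/xlm-roberta-sentiment-large); the exact label names returned by the checkpoint are not documented here, so treat the output schema as an assumption.

```python
# Minimal inference sketch using the standard transformers pipeline.
# The label names emitted by this checkpoint may differ from generic
# "positive"/"negative" strings; inspect the output to confirm.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="carant-ai/xlm-roberta-sentiment-large",
)

print(classifier("I really enjoyed this film!"))
print(classifier("Filmnya sangat membosankan."))  # Indonesian: "The movie was very boring."
```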

Training results

Trained on a TPU VM v4-8 for ~5 hours.

epoch  step   train_accuracy  train_loss   val_accuracy  val_loss
0      10782  0.964588165     0.095930442  0.967545867   0.08873909
1      21565  0.970602274     0.079982288  0.968977571   0.08539474

Training procedure

For replication, see the training code on the project's GitHub page. A rough orientation sketch is included below.
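
The actual replication steps live in that repository (the real run used a TPU VM v4-8). As a rough orientation only, a generic fine-tuning setup with the Hugging Face Trainer might look like the sketch below; the dataset name, label count, and hyperparameters are placeholders, not the configuration actually used.

```python
# Hypothetical fine-tuning sketch; the real TPU VM training script is on GitHub.
# Dataset, label count, and hyperparameters below are placeholders.
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large")
model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-large",
    num_labels=3,  # placeholder label count for the placeholder dataset below
)

# Placeholder sentiment dataset; the model card's actual dataset is not reproduced here.
dataset = load_dataset("tweet_eval", "sentiment")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

dataset = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="xlm-roberta-sentiment",
    per_device_train_batch_size=32,   # placeholder
    learning_rate=2e-5,               # placeholder
    num_train_epochs=2,               # matches the two epochs reported above
    evaluation_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
    tokenizer=tokenizer,
)
trainer.train()
```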

Acknowledgements

  1. Google's TPU Research Cloud (TRC) for providing the Cloud TPU VM.
  2. carlesoctav for writing the TPU VM training script.
  3. thonyyy for gathering the sentiment dataset.
Model size: 560M parameters (F32, Safetensors)
