# Model Performance

## Classification Report
| Class | Precision | Recall | F1-Score | Support |
|---|---|---|---|---|
| Negative | 0.90 | 0.89 | 0.90 | 14692 |
| Neutral | 0.90 | 0.88 | 0.89 | 16970 |
| Positive | 0.89 | 0.92 | 0.90 | 16861 |
- Accuracy: 90%
- Macro Avg Precision: 0.90
- Macro Avg Recall: 0.90
- Macro Avg F1-Score: 0.90
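The macro averages above are the unweighted means of the per-class scores in the table; a quick sanity check:

```python
# Per-class scores copied from the classification report
# (order: negative, neutral, positive)
precision = [0.90, 0.90, 0.89]
recall = [0.89, 0.88, 0.92]
f1 = [0.90, 0.89, 0.90]

def macro(scores):
    """Macro average: plain mean over classes, ignoring support."""
    return sum(scores) / len(scores)

print(round(macro(precision), 2))  # 0.9
print(round(macro(recall), 2))     # 0.9
print(round(macro(f1), 2))         # 0.9
```

Note that the macro average weights all three classes equally; since the class supports here are of similar size (14692-16970), the weighted average would be nearly identical.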
## Summary

The model performs consistently across all three sentiment classes, with precision and recall at or near 0.90 throughout; recall is highest for the positive class (0.92) and precision for the negative and neutral classes (0.90).
---
tags:
- text-classification
- sentiment-analysis
pipeline_tag: text-classification
---
# Model Name

ft-Malay-bert
## Model Description

This model is a fine-tuned version of BERT for the Malay language, trained for three-class sentiment analysis (negative, neutral, positive). The training dataset is not specified in this card.
## Intended Uses & Limitations

This model is intended for sentiment classification of Malay-language text. As with any fine-tuned classifier, its predictions may reflect biases present in the training data, and performance may degrade on domains, registers, or dialects not represented in that data.
## How to Use

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the fine-tuned model and its tokenizer from the Hugging Face Hub
model = AutoModelForSequenceClassification.from_pretrained("rmtariq/ft-Malay-bert")
tokenizer = AutoTokenizer.from_pretrained("rmtariq/ft-Malay-bert")

# Tokenize an input sentence and run a forward pass
inputs = tokenizer("Your text here", return_tensors="pt")
outputs = model(**inputs)  # outputs.logits holds one raw score per sentiment class
```
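The model returns raw logits; to turn them into a predicted sentiment, apply softmax and take the argmax. A minimal sketch on hypothetical logits (the negative/neutral/positive label order is an assumption here; verify it against `model.config.id2label` on the real model):

```python
import torch

# Hypothetical logits for one sentence over three classes; in practice,
# use outputs.logits from the forward pass above.
logits = torch.tensor([[-1.2, 0.3, 2.1]])

probs = torch.softmax(logits, dim=-1)  # normalize logits to probabilities
pred_id = int(probs.argmax(dim=-1))    # index of the highest-probability class

# Assumed label order -- check model.config.id2label for the actual mapping
labels = ["negative", "neutral", "positive"]
print(labels[pred_id])  # positive
```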