# egyptian_sentiment_analysis
This model is a fine-tuned version of CAMeL-Lab/bert-base-arabic-camelbert-da-sentiment on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.2481
- Accuracy: 0.9519
- F1: 0.9520
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a minimal reproduction sketch follows this list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 5
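For reference only, the sketch below shows roughly equivalent `TrainingArguments` for the Hugging Face `Trainer`. The toy dataset, the label ids, and the use of the `datasets` library are assumptions made for illustration; the actual training data, preprocessing, and the metric computation (accuracy/F1) are not documented in this card.

```python
from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

base_model = "CAMeL-Lab/bert-base-arabic-camelbert-da-sentiment"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForSequenceClassification.from_pretrained(base_model)

# Hypothetical toy dataset: the real training/evaluation data is not documented in this card.
raw = Dataset.from_dict({
    "text": ["الفيلم ده كان جميل جدا", "الخدمة كانت وحشة خالص"],
    "label": [2, 0],  # assumed mapping: 0 = negative, 1 = neutral, 2 = positive
})
tokenized = raw.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=256),
    batched=True,
)

# Hyperparameters taken from the list above.
training_args = TrainingArguments(
    output_dir="egyptian_sentiment_analysis",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=5,
    lr_scheduler_type="linear",
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    seed=42,
    eval_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized,
    eval_dataset=tokenized,  # placeholder; the real evaluation split is not documented
    tokenizer=tokenizer,
)
trainer.train()
```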
### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|---|---|---|---|---|---|
| No log | 1.0 | 291 | 0.1533 | 0.9467 | 0.9466 |
| 0.2224 | 2.0 | 582 | 0.2004 | 0.9467 | 0.9469 |
| 0.2224 | 3.0 | 873 | 0.2178 | 0.9553 | 0.9553 |
| 0.0393 | 4.0 | 1164 | 0.2400 | 0.9553 | 0.9552 |
| 0.0393 | 5.0 | 1455 | 0.2481 | 0.9519 | 0.9520 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Tokenizers 0.21.0
## How to use
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Step 1: Load the tokenizer and model
model_path = "ehab215/egyptian_sentiment_analysis"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForSequenceClassification.from_pretrained(model_path)

# Ensure model is on GPU if available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
model.eval()

# Step 2: Prepare test examples (replace with your own Egyptian Arabic texts)
examples = [
    "الفيلم ده كان جميل جدا",   # placeholder: "This movie was really great"
    "الخدمة كانت وحشة خالص",    # placeholder: "The service was terrible"
]

# Tokenize the examples
inputs = tokenizer(examples, truncation=True, padding=True, return_tensors="pt", max_length=256)
inputs = {key: val.to(device) for key, val in inputs.items()}

# Step 3: Make predictions
with torch.no_grad():
    outputs = model(**inputs)
logits = outputs.logits
predictions = torch.argmax(logits, dim=-1).cpu().numpy()

# Step 4: Interpret results
label_map = {0: "negative", 1: "neutral", 2: "positive"}
predicted_labels = [label_map[p] for p in predictions]

# Display results
for text, label in zip(examples, predicted_labels):
    print(f"Text: {text}")
    print(f"Predicted Sentiment: {label}")
    print("-" * 50)
```