---
language:
- tr
thumbnail: >-
  https://avatars3.githubusercontent.com/u/32437151?s=460&u=4ec59abc8d21d5feea3dab323d23a5860e6996a4&v=4
tags:
- text-classification
- emotion
- pytorch
datasets:
- emotion (Translated to Turkish)
metrics:
- Accuracy, F1 Score
---
# distilbert-base-turkish-cased-emotion
## Model description:

Distilbert-base-turkish-cased fine-tuned on the emotion dataset (translated to Turkish via the Google Translate API) using the Hugging Face Trainer with the following hyperparameters (see the sketch after this list):

- learning rate: 2e-5
- batch size: 64
- num_train_epochs: 8
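
The card does not include the original training script; the following is a minimal sketch of how such a run could be set up with the Hugging Face Trainer. The base checkpoint name (`dbmdz/distilbert-base-turkish-cased`), the use of the public English `emotion` dataset as a stand-in for the Turkish translation, and the weighted F1 averaging are assumptions, not the author's exact code.

```python
import numpy as np
from datasets import load_dataset
from sklearn.metrics import accuracy_score, f1_score
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Assumed base checkpoint; the card only names "Distilbert-base-turkish-cased".
base = "dbmdz/distilbert-base-turkish-cased"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=6)

# Stand-in for the Turkish-translated emotion data the author actually used.
dataset = load_dataset("emotion")
dataset = dataset.map(lambda batch: tokenizer(batch["text"], truncation=True), batched=True)

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, preds),
        # Weighted averaging is an assumption; the card does not state the mode.
        "f1": f1_score(labels, preds, average="weighted"),
    }

args = TrainingArguments(
    output_dir="distilbert-base-turkish-cased-emotion",
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    num_train_epochs=8,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
    tokenizer=tokenizer,  # the default collator then pads batches dynamically
    compute_metrics=compute_metrics,
)
trainer.train()
```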
## Model Performance Comparison on Emotion Dataset from Twitter:

| Model | Accuracy | F1 Score | Test Samples per Second |
|---|---|---|---|
| Distilbert-base-turkish-cased-emotion | 83.25 | 83.17 | 232.197 |
## How to use the model:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="zafercavdar/distilbert-base-turkish-cased-emotion",
    return_all_scores=True,
)
prediction = classifier("Bu kütüphaneyi seviyorum, en iyi yanı kolay kullanımı.")
print(prediction)

"""
Output:
[
  [
    {'label': 'sadness', 'score': 0.0026786490343511105},
    {'label': 'joy', 'score': 0.6600754261016846},
    {'label': 'love', 'score': 0.3203163146972656},
    {'label': 'anger', 'score': 0.004358913749456406},
    {'label': 'fear', 'score': 0.002354539930820465},
    {'label': 'surprise', 'score': 0.010216088965535164}
  ]
]
"""
```
## Dataset:

The emotion dataset from Twitter, translated to Turkish via the Google Translate API (see Model description above).

## Eval results

```
{
  'eval_accuracy': 0.8325,
  'eval_f1': 0.8317301441160213,
  'eval_loss': 0.5021793842315674,
  'eval_runtime': 8.6167,
  'eval_samples_per_second': 232.108,
  'eval_steps_per_second': 3.714
}
```