---
license: cc-by-4.0
configs:
- config_name: default
  data_files:
  - split: train
    path: train/data.jsonl
  - split: test
    path: test/test.jsonl
task_categories:
- text-classification
language:
- he
size_categories:
- 10K<n<100K
---
# HebrewSentiment - A Sentiment-Analysis Dataset in Hebrew

## Summary
HebrewSentiment is a Hebrew dataset for the sentiment analysis task.
## Introduction
This dataset was constructed via [To Fill In].
## Dataset Statistics
The table below shows the number of examples from each category in each of the splits:
split | total | positive | negative | neutral |
---|---|---|---|---|
train | 39,135 | 8,968 | 7,669 | 22,498 |
test | 2,170 | 503 | 433 | 1,234 |
## Dataset Description
Each row in the dataset contains the following fields:
- id: A unique identifier for the example
- text: The textual content of the input sentence
- tag_ids: The label of the example (Neutral/Positive/Negative)
- task_name: [To fill in]
- campaign_id: [To fill in]
- annotator_agreement_strength: [To fill in]
- survey_name: [To fill in]
- industry: [To fill in]
- type: [To fill in]
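
A minimal loading sketch with the `datasets` library is shown below. The JSONL paths match the `data_files` declared in the YAML config at the top of this card; the exact value types of the fields (e.g. whether tag_ids holds strings or integer ids) are an assumption based on the list above.

```python
# Minimal loading sketch; paths follow the data_files declared in the card's
# YAML config. Field names follow the description list above.
from datasets import load_dataset

ds = load_dataset(
    "json",
    data_files={"train": "train/data.jsonl", "test": "test/test.jsonl"},
)

print(ds)  # DatasetDict with 'train' and 'test' splits
example = ds["train"][0]
print(example["id"], example["tag_ids"])  # identifier and label
print(example["text"])                    # the Hebrew input sentence
```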
## Models and Comparisons
In collaboration with DICTA, we trained a model on this dataset and are happy to release it to the public: DictaBERT-Sentiment.
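
For illustration, a minimal inference sketch with the `transformers` pipeline is shown below; the Hub identifier and the returned label names are assumptions, not confirmed by this card.

```python
# Minimal inference sketch, assuming the released model is available on the
# Hugging Face Hub. The model identifier and label names are assumptions.
from transformers import pipeline

sentiment = pipeline(
    "text-classification",
    model="dicta-il/dictabert-sentiment",  # assumed Hub identifier
)

# "The service was excellent and fast" (Hebrew)
print(sentiment("השירות היה מצוין ומהיר"))
# e.g. [{'label': 'Positive', 'score': 0.98}]
```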
In addition, we compared the model's performance against the previously existing sentiment dataset, Hebrew-Sentiment-Data from OnlpLab. We fine-tuned DictaBERT three times: once on the OnlpLab dataset, once on this dataset, and once on both datasets together. The results are shown in the tables below:
Evaluated on the OnlpLab test set:

| Training Corpus | Accuracy | Macro F1 | F1 Positive | F1 Negative | F1 Neutral |
|---|---|---|---|---|---|
| OnlpLab+HebrewSentiment | 87 | 61.7 | 93.2 | 74.6 | 17.4 |
| OnlpLab | 88.2 | 63.3 | 93.8 | 72.1 | 24 |
| HebrewSentiment | 69.9 | 51.7 | 82.2 | 62.9 | 10.2 |

Evaluated on the HebrewSentiment test set:

| Training Corpus | Accuracy | Macro F1 | F1 Positive | F1 Negative | F1 Neutral |
|---|---|---|---|---|---|
| OnlpLab+HebrewSentiment | 83.9 | 82.7 | 79.8 | 81.8 | 86.4 |
| OnlpLab | 41.3 | 42.2 | 48.1 | 56.3 | 22.2 |
| HebrewSentiment | 84.4 | 83.2 | 81 | 82.1 | 86.6 |
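
For reference, a minimal fine-tuning sketch in the spirit of the comparison above is given below. The base-model identifier, the label mapping, and all hyperparameters are assumptions, not the exact configuration used to produce these results.

```python
# Minimal fine-tuning sketch; base-model identifier, label mapping, and
# hyperparameters are assumptions rather than the exact setup used.
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

BASE_MODEL = "dicta-il/dictabert"           # assumed Hub identifier for DictaBERT
LABELS = ["Neutral", "Positive", "Negative"]  # assumes tag_ids holds these strings

ds = load_dataset(
    "json",
    data_files={"train": "train/data.jsonl", "test": "test/test.jsonl"},
)

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)

def preprocess(batch):
    # Tokenize the sentence and map the string label to an integer id.
    enc = tokenizer(batch["text"], truncation=True, max_length=512)
    enc["labels"] = [LABELS.index(t) for t in batch["tag_ids"]]
    return enc

encoded = ds.map(preprocess, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    BASE_MODEL, num_labels=len(LABELS)
)

args = TrainingArguments(
    output_dir="dictabert-sentiment-finetuned",
    learning_rate=3e-5,                 # assumed hyperparameters
    num_train_epochs=3,
    per_device_train_batch_size=16,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=encoded["train"],
    eval_dataset=encoded["test"],
    tokenizer=tokenizer,                # enables dynamic padding via the default collator
)
trainer.train()
print(trainer.evaluate())
```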
## Contributors
[To fill in]
## Acknowledgments
We would like to express our gratitude to [To fill in]