# Fine-tuned XLM-R Model for Hebrew Sentiment Analysis

This is a fine-tuned XLM-R model for sentiment analysis in Hebrew.

## Model Details

- **Model Name**: XLM-R Sentiment Analysis
- **Language**: Hebrew
- **Fine-tuning Dataset**: DGurgurov/hebrew_sa

## Training Details

- **Epochs**: 20
- **Batch Size**: 32 (train), 64 (eval)
- **Optimizer**: AdamW
- **Learning Rate**: 5e-5

## Performance Metrics

- **Accuracy**: 0.92106
- **Macro F1**: 0.90782
- **Micro F1**: 0.92106

## Usage

To use this model, load it with the Hugging Face Transformers library (a short inference sketch follows the License section below):

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("DGurgurov/xlm-r_hebrew_sentiment")
model = AutoModelForSequenceClassification.from_pretrained("DGurgurov/xlm-r_hebrew_sentiment")
```

## License

[MIT]
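
## Inference Example

A minimal sketch extending the loading snippet above into a full prediction. The Hebrew example sentence is illustrative, and the sketch assumes the checkpoint's config provides an `id2label` mapping; if it only contains generic `LABEL_0`/`LABEL_1` names, consult the DGurgurov/hebrew_sa dataset for the id-to-sentiment correspondence.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("DGurgurov/xlm-r_hebrew_sentiment")
model = AutoModelForSequenceClassification.from_pretrained("DGurgurov/xlm-r_hebrew_sentiment")
model.eval()

# Illustrative Hebrew input ("The movie was excellent!"); replace with your own text.
text = "הסרט היה מצוין!"

# Tokenize and run a forward pass without gradient tracking.
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Pick the highest-scoring class and look up its name in the model config.
# Assumption: id2label in the config maps ids to sentiment labels.
predicted_id = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_id])
```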