
🌟 Fine-tuned RoBERTa for Sentiment Analysis on Reviews 🌟

This is a fine-tuned version of cardiffnlp/twitter-roberta-base-sentiment-latest on the Amazon Reviews dataset for sentiment analysis.

📜 Model Details

  • 🆕 Model Name: AnkitAI/reviews-roberta-base-sentiment-analysis
  • 🔗 Base Model: cardiffnlp/twitter-roberta-base-sentiment-latest
  • 📊 Dataset: Amazon Reviews
  • ⚙️ Fine-tuning: The base model was fine-tuned for sentiment analysis with a classification head for binary classification (positive and negative).

πŸ‹οΈ Training

The model was trained using the following parameters (a configuration sketch follows the list):

  • 🔧 Learning Rate: 2e-5
  • 📦 Batch Size: 16
  • ⚖️ Weight Decay: 0.01
  • 📅 Evaluation Strategy: Epoch
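
For reference, a minimal Trainer configuration matching these hyperparameters might look like the sketch below. The output directory and number of epochs are placeholders (they are not listed above), and this is an illustration rather than the exact training script used for this model.

from transformers import TrainingArguments

# Hypothetical configuration mirroring the hyperparameters listed above.
# output_dir and num_train_epochs are placeholders, not the values actually used.
training_args = TrainingArguments(
    output_dir="./reviews-roberta-sentiment",
    num_train_epochs=3,
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    weight_decay=0.01,
    evaluation_strategy="epoch",  # named eval_strategy in newer transformers releases
)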

πŸ‹οΈ Training Details

  • 📉 Eval Loss: 0.1049
  • ⏱️ Eval Runtime: 3177.538 seconds
  • 📈 Eval Samples/Second: 226.591
  • 🌀 Eval Steps/Second: 7.081
  • 🏃 Train Runtime: 110070.6349 seconds
  • 📊 Train Samples/Second: 78.495
  • 🌀 Train Steps/Second: 2.453
  • 📉 Train Loss: 0.0858
  • ⏳ Eval Accuracy: 97.19%
  • 🌀 Eval Precision: 97.9%
  • ⏱️ Eval Recall: 97.18%
  • 📈 Eval F1 Score: 97.19%
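
The accuracy, precision, recall, and F1 figures above are standard binary-classification metrics. As an illustration only (not the exact evaluation code behind these numbers), such values are typically produced with a compute_metrics callback passed to the Trainer, assuming scikit-learn is available:

import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Illustrative metric computation for a Hugging Face Trainer;
# not the exact code used to produce the numbers reported above.
def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(labels, preds, average="binary")
    return {
        "accuracy": accuracy_score(labels, preds),
        "precision": precision,
        "recall": recall,
        "f1": f1,
    }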

🚀 Usage

You can use this model directly with the Hugging Face transformers library:

import torch
from transformers import RobertaForSequenceClassification, RobertaTokenizer

model_name = "AnkitAI/reviews-roberta-base-sentiment-analysis"
model = RobertaForSequenceClassification.from_pretrained(model_name)
tokenizer = RobertaTokenizer.from_pretrained(model_name)

# Example usage
inputs = tokenizer("This product is great!", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
predicted_class = outputs.logits.argmax(dim=-1).item()  # 1 for positive, 0 for negative
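
Alternatively, the high-level pipeline API can be used. Note that the label strings it returns (for example "LABEL_0"/"LABEL_1" versus "negative"/"positive") depend on the id2label mapping in this model's config, so treat the snippet below as a sketch rather than guaranteed output:

from transformers import pipeline

classifier = pipeline("sentiment-analysis", model="AnkitAI/reviews-roberta-base-sentiment-analysis")
print(classifier("This product is great!"))  # e.g. [{'label': 'LABEL_1', 'score': 0.99}]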

📜 License

This model is licensed under the MIT License.
