---
license: mit
base_model: cardiffnlp/twitter-roberta-base-sentiment-latest
language:
- en
library_name: transformers
tags:
- roberta
- sentiment-analysis
widget:
- text: This product is great!
---
# Fine-tuned RoBERTa for Sentiment Analysis on Reviews

This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-sentiment-latest](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment-latest) on the Amazon Reviews dataset for sentiment analysis.
## Model Details

- **Model Name:** AnkitAI/reviews-roberta-base-sentiment-analysis
- **Base Model:** cardiffnlp/twitter-roberta-base-sentiment-latest
- **Dataset:** Amazon Reviews
- **Fine-tuning:** Fine-tuned for sentiment analysis with a classification head for binary sentiment classification (positive and negative).
## Training

The model was trained with the following hyperparameters:

- **Learning Rate:** 2e-5
- **Batch Size:** 16
- **Epochs:** 3
- **Weight Decay:** 0.01
- **Evaluation Strategy:** epoch
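These hyperparameters map directly onto `transformers.TrainingArguments`. As a rough sketch of how such a fine-tuning run might be configured (the output directory and the `Trainer` wiring are illustrative placeholders, not part of the original training script):

```python
# Hypothetical configuration mirroring the hyperparameters listed above.
# output_dir is a placeholder; dataset loading and the Trainer setup are omitted.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./results",            # placeholder path
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=3,
    weight_decay=0.01,
    eval_strategy="epoch",             # called evaluation_strategy in transformers < 4.41
)
```

These arguments would then be passed to a `Trainer` along with the model, tokenizer, and train/eval splits of the reviews dataset.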
## Training Details

- **Eval Loss:** 0.1049
- **Eval Runtime:** 3177.538 s
- **Eval Samples/Second:** 226.591
- **Eval Steps/Second:** 7.081
- **Epochs:** 3.0
- **Train Runtime:** 110070.6349 s
- **Train Samples/Second:** 78.495
- **Train Steps/Second:** 2.453
- **Train Loss:** 0.0858
## Usage

You can use this model directly with the Hugging Face `transformers` library:

```python
from transformers import RobertaForSequenceClassification, RobertaTokenizer
import torch

model_name = "AnkitAI/reviews-roberta-base-sentiment-analysis"
model = RobertaForSequenceClassification.from_pretrained(model_name)
tokenizer = RobertaTokenizer.from_pretrained(model_name)

# Example usage
inputs = tokenizer("This product is great!", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Get sentiment: the index of the highest logit is the predicted class
logits = outputs.logits
predicted_class = logits.argmax(dim=-1).item()  # 1 for positive, 0 for negative
print(predicted_class)
```

**Note:** `LABEL_1` indicates positive sentiment and `LABEL_0` indicates negative sentiment.
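As an alternative to loading the model and tokenizer separately, the same prediction can be made with the `pipeline` API. This is a sketch; the `to_sentiment` helper is not part of the model card and simply applies the `LABEL_0`/`LABEL_1` mapping described above:

```python
from transformers import pipeline


def to_sentiment(result):
    """Map the model's raw label to a human-readable sentiment (hypothetical helper)."""
    return "positive" if result["label"] == "LABEL_1" else "negative"


# Downloads the model on first use
classifier = pipeline(
    "sentiment-analysis",
    model="AnkitAI/reviews-roberta-base-sentiment-analysis",
)

result = classifier("This product is great!")[0]
print(to_sentiment(result), result["score"])
```

The pipeline handles tokenization, inference, and softmax over the logits, so `result["score"]` is already a probability for the predicted label.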
## License

This model is licensed under the MIT License.