---
license: mit
base_model: cardiffnlp/twitter-roberta-base-sentiment-latest
language:
- en
library_name: transformers
tags:
- Roberta
- Sentiment Analysis
widget:
- text: This product is really great!
- text: This product is really bad!
---
# 🌟 Fine-tuned RoBERTa for Sentiment Analysis on Reviews 🌟
This is a fine-tuned version of [cardiffnlp/twitter-roberta-base-sentiment-latest](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment-latest) on the [Amazon Reviews dataset](https://www.kaggle.com/datasets/bittlingmayer/amazonreviews) for sentiment analysis.
## 📜 Model Details
- **🆕 Model Name:** `AnkitAI/reviews-roberta-base-sentiment-analysis`
- **🔗 Base Model:** `cardiffnlp/twitter-roberta-base-sentiment-latest`
- **📊 Dataset:** [Amazon Reviews](https://www.kaggle.com/datasets/bittlingmayer/amazonreviews)
- **⚙️ Fine-tuning:** Fine-tuned with a binary classification head (positive vs. negative) for review sentiment analysis.
## 🏋️ Training
The model was trained with the following hyperparameters (a minimal setup sketch follows the list):
- **🔧 Learning Rate:** 2e-5
- **📦 Batch Size:** 16
- **⚖️ Weight Decay:** 0.01
- **📅 Evaluation Strategy:** Epoch
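The training script itself is not included in this repository; the snippet below is only a minimal sketch of how these hyperparameters map onto `transformers.TrainingArguments` with the standard `Trainer`. The `output_dir`, epoch count, and dataset preparation are placeholders, and re-initializing the base checkpoint's 3-class head for 2 labels is an assumption about the original setup.

```python
from transformers import (
    RobertaForSequenceClassification,
    RobertaTokenizer,
    Trainer,
    TrainingArguments,
)

base_model = "cardiffnlp/twitter-roberta-base-sentiment-latest"
tokenizer = RobertaTokenizer.from_pretrained(base_model)

# The base checkpoint ships a 3-class head (negative/neutral/positive);
# it is re-initialized here for the 2-class (negative/positive) task.
model = RobertaForSequenceClassification.from_pretrained(
    base_model, num_labels=2, ignore_mismatched_sizes=True
)

# Hyperparameters reported above; output_dir is a placeholder.
# Note: "evaluation_strategy" was renamed to "eval_strategy" in newer transformers releases.
training_args = TrainingArguments(
    output_dir="reviews-roberta-base-sentiment-analysis",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    weight_decay=0.01,
    evaluation_strategy="epoch",
)

# train_dataset / eval_dataset would be the tokenized Amazon Reviews splits.
# trainer = Trainer(model=model, args=training_args,
#                   train_dataset=train_dataset, eval_dataset=eval_dataset)
# trainer.train()
```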
### 🏋️ Training Details
- **📉 Eval Loss:** 0.1049
- **⏱️ Eval Runtime:** 3177.538 seconds
- **📈 Eval Samples/Second:** 226.591
- **🌀 Eval Steps/Second:** 7.081
- **🏃 Train Runtime:** 110070.6349 seconds
- **📊 Train Samples/Second:** 78.495
- **🌀 Train Steps/Second:** 2.453
- **📉 Train Loss:** 0.0858
- **⏳ Eval Accuracy:** 97.19%
- **🌀 Eval Precision:** 97.9%
- **⏱️ Eval Recall:** 97.18%
- **📈 Eval F1 Score:** 97.19%
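The accuracy, precision, recall, and F1 figures above are standard binary-classification metrics. A minimal `compute_metrics` sketch (using scikit-learn and weighted averaging, neither of which is confirmed by this card) could look like:

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    # eval_pred is the (logits, labels) pair passed by the Trainer.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="weighted"
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "precision": precision,
        "recall": recall,
        "f1": f1,
    }
```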
## 🚀 Usage
You can use this model directly with the Hugging Face `transformers` library:
```python
import torch
from transformers import RobertaForSequenceClassification, RobertaTokenizer

model_name = "AnkitAI/reviews-roberta-base-sentiment-analysis"
model = RobertaForSequenceClassification.from_pretrained(model_name)
tokenizer = RobertaTokenizer.from_pretrained(model_name)

# Example usage
inputs = tokenizer("This product is great!", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# The model returns logits; class 1 is positive, class 0 is negative.
predicted_class = outputs.logits.argmax(dim=-1).item()
print("positive" if predicted_class == 1 else "negative")
```
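Alternatively, the `pipeline` API wraps tokenization and post-processing in one call. Note that the label names it returns (e.g. `LABEL_0` / `LABEL_1` or human-readable names) depend on the `id2label` mapping stored in this model's config:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="AnkitAI/reviews-roberta-base-sentiment-analysis")
print(classifier("This product is really great!"))
# -> [{'label': ..., 'score': ...}], with the label name taken from the model's id2label config
```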
## 📜 License
This model is licensed under the [MIT License](LICENSE).