## Model Description

This is a DistilBART model fine-tuned for sequence classification on CNN news articles. It was fine-tuned with a batch size of 32, a learning rate of 6e-5, for 1 epoch.
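The fine-tuning setup described above can be sketched with the Transformers `Trainer` API. This is a minimal sketch, not the released training script: the base checkpoint, label count, output directory, and `train_dataset` variable are assumptions for illustration.

```python
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Assumed base checkpoint; the actual starting model is not stated in the card
model_name = "sshleifer/distilbart-cnn-12-6"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# num_labels is a placeholder for the number of news categories
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=6)

# Hyperparameters taken from the model description above
args = TrainingArguments(
    output_dir="distilbart-cnn-news-cls",
    per_device_train_batch_size=32,
    learning_rate=6e-5,
    num_train_epochs=1,
)

# train_dataset: your tokenized CNN News split (hypothetical variable)
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()
```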
## Dataset

The model was fine-tuned on the CNN News dataset, which consists of news articles from categories such as sports, entertainment, and politics.
## Performance
The following performance metrics were achieved after fine-tuning the model:
- Accuracy: 0.9597
- F1-score: 0.9589
- Recall: 0.9597
- Precision: 0.9590
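Accuracy matching recall to four decimals is what you would expect from support-weighted averaging, where weighted recall reduces to accuracy. A minimal plain-Python sketch of weighted precision/recall/F1 on toy labels (the example labels are illustrative, not from the CNN News dataset):

```python
from collections import Counter

def weighted_prf(y_true, y_pred):
    """Support-weighted precision, recall, and F1 across classes."""
    labels = sorted(set(y_true))
    support = Counter(y_true)
    n = len(y_true)
    precision = recall = f1 = 0.0
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        pred_c = sum(1 for p in y_pred if p == c)
        prec_c = tp / pred_c if pred_c else 0.0
        rec_c = tp / support[c]
        f1_c = 2 * prec_c * rec_c / (prec_c + rec_c) if (prec_c + rec_c) else 0.0
        w = support[c] / n  # weight each class by its share of the data
        precision += w * prec_c
        recall += w * rec_c
        f1 += w * f1_c
    return precision, recall, f1

# Toy example: 4 articles, 3 correct predictions
y_true = ["sport", "sport", "politics", "entertainment"]
y_pred = ["sport", "politics", "politics", "entertainment"]
p, r, f = weighted_prf(y_true, y_pred)
```

Here weighted recall equals plain accuracy (3 of 4 correct, 0.75), mirroring the identical accuracy and recall values reported above.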
## Usage

You can use this model to classify CNN news articles into categories such as sports, entertainment, and politics. Load the model with the Hugging Face Transformers library and use it to predict the class of a new article:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the fine-tuned model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("IT-community/distilBART_cnn_news_text_classification")
model = AutoModelForSequenceClassification.from_pretrained("IT-community/distilBART_cnn_news_text_classification")
model.eval()

# Classify a news article
news_article = "A new movie is set to release this weekend"
inputs = tokenizer(news_article, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
predicted_class = outputs.logits.argmax(dim=-1).item()
```
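If you want class probabilities rather than just the argmax index, apply a softmax to the logits first. A minimal plain-Python sketch (the logits and the three-class setup below are illustrative, not actual model output):

```python
import math

def softmax(logits):
    """Convert raw logits into probabilities that sum to 1."""
    exps = [math.exp(x - max(logits)) for x in logits]  # subtract max for numerical stability
    total = sum(exps)
    return [e / total for e in exps]

# Illustrative logits for three hypothetical classes
logits = [0.5, 2.1, -1.0]
probs = softmax(logits)
predicted = max(range(len(probs)), key=probs.__getitem__)
```

The predicted index can then be mapped back to a category name via `model.config.id2label`.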
## About Us

We are IT Community, a scientific club at Saad Dahleb Blida University created by students in 2016. We are interested in all IT fields, and this work was done by the IT Community club.
## Contributions

- Added preprocessing code for CNN news articles
- Improved model performance with additional fine-tuning on a larger dataset