---
language:
- en
---

# Model Description

This model was created for a thesis. It was trained on a dataset collected from the social media platform X under the hashtag #IsraelPalestineWar between October and November 2023. The model classifies the sentiment of tweets and is intended for English-language text.

- **Reference Paper** : [Coming soon](comingsoon)
- **Github** : [Coming soon](comingsoon)

## Model Labels:

- Label 0: **Negative**
- Label 1: **Neutral**
- Label 2: **Positive**

## How to use the model

- **Loading the model**

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch

# The Auto classes load whichever architecture the checkpoint's config defines.
output_model_dir = 'RappyProgramming/IPW-DistilBERT-uncased'
model = AutoModelForSequenceClassification.from_pretrained(output_model_dir)
tokenizer = AutoTokenizer.from_pretrained(output_model_dir)
```

- **Sample Input**

```python
input_texts = [
    "this meeting is scheduled for next week",
    "drop dead",
    "you're the best friend i could ever have in this whole wide world!!"
]
```

- **Inference and Output**

```python
inputs = tokenizer(input_texts, return_tensors="pt", padding=True, truncation=True)

with torch.no_grad():
    outputs = model(**inputs)

predicted_class_indices = torch.argmax(outputs.logits, dim=1).tolist()
probs = torch.softmax(outputs.logits, dim=1).tolist()

labels = ["Negative", "Neutral", "Positive"]

for i, input_text in enumerate(input_texts):
    predicted_label = labels[predicted_class_indices[i]]
    predicted_probabilities = {label: prob for label, prob in zip(labels, probs[i])}

    print(f"Input text {i+1}: {input_text}")
    print(f"Predicted label: {predicted_label}")
    print("Predicted probabilities:")
    for label, prob in predicted_probabilities.items():
        print(f"{label}: {prob:.4f}")
    print()
```

```
Output:
Input text 1: this meeting is scheduled for next week
Predicted label: Neutral
Predicted probabilities:
Negative: 0.0012
Neutral: 0.9978
Positive: 0.0010

Input text 2: drop dead
Predicted label: Negative
Predicted probabilities:
Negative: 0.9973
Neutral: 0.0017
Positive: 0.0010

Input text 3: you're the best friend i could ever have in this whole wide world!!
Predicted label: Positive
Predicted probabilities:
Negative: 0.0004
Neutral: 0.0003
Positive: 0.9992
```
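
- **Alternative: `pipeline` inference**

A minimal sketch using the generic `transformers` `pipeline` API instead of the manual loop above; this is a standard library feature, not something specific to this model. Note that the label names it returns come from the checkpoint's `id2label` config and may appear as `LABEL_0`/`LABEL_1`/`LABEL_2`, which correspond to Negative/Neutral/Positive as listed under Model Labels.

```python
from transformers import pipeline

# Text-classification pipeline; top_k=None returns a score for every label
# instead of only the top prediction.
classifier = pipeline(
    "text-classification",
    model="RappyProgramming/IPW-DistilBERT-uncased",
    top_k=None,
)

# Returns a list with one entry per input text, each entry containing
# a label/score dict per class (label names depend on the checkpoint config).
print(classifier("drop dead"))
```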