---
language:
- en
---
# Model Description
This model was created for a thesis. It was trained on a dataset of tweets collected from the social media platform X using the hashtag #IsraelPalestineWar between October and November 2023, and it is intended to classify the sentiment of English-language tweets.

- **Reference Paper**: [Coming soon](comingsoon)
- **GitHub**: [Coming soon](comingsoon)

## Model Labels
- Label 0: **Negative**
- Label 1: **Neutral**
- Label 2: **Positive**
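
As a quick check, the checkpoint can also be queried through the `transformers` pipeline API. This is a minimal sketch; the label names it prints depend on the `id2label` mapping stored in the checkpoint's config (if none was saved, the generic names `LABEL_0`/`LABEL_1`/`LABEL_2` correspond to the table above).

```python
from transformers import pipeline

# Minimal sketch: let the pipeline resolve the model and tokenizer from the Hub checkpoint.
classifier = pipeline("text-classification", model="RappyProgramming/IPW-DistilBERT-cased")

# Prints something like [{'label': 'Negative', 'score': 0.99}] or
# [{'label': 'LABEL_0', 'score': 0.99}], depending on the saved config.
print(classifier("drop dead"))
```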

## How to use the model
- **Loading the model**
```python
from transformers import RobertaForSequenceClassification, RobertaTokenizer
import torch

# Load the fine-tuned checkpoint from the Hugging Face Hub
output_model_dir = 'RappyProgramming/IPW-DistilBERT-cased'
model = RobertaForSequenceClassification.from_pretrained(output_model_dir)
tokenizer = RobertaTokenizer.from_pretrained(output_model_dir)
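
# Note (illustrative alternative, not required): the repository name refers to DistilBERT
# while the classes above are RoBERTa-specific. If loading fails, the Auto classes resolve
# the architecture and tokenizer from the checkpoint's own config:
#   from transformers import AutoModelForSequenceClassification, AutoTokenizer
#   model = AutoModelForSequenceClassification.from_pretrained(output_model_dir)
#   tokenizer = AutoTokenizer.from_pretrained(output_model_dir)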
```
- **Sample Input**
```python
input_texts = [
    "this meeting is scheduled for next week",
    "drop dead",
    "you're the best friend i could ever have in this whole wide world!!"
]
```
- **Running inference**
```python
inputs = tokenizer(input_texts, return_tensors="pt", padding=True, truncation=True)
with torch.no_grad():
    outputs = model(**inputs)

predicted_class_indices = torch.argmax(outputs.logits, dim=1).tolist()
probs = torch.softmax(outputs.logits, dim=1).tolist()
labels = ["Negative", "Neutral", "Positive"]

for i, input_text in enumerate(input_texts):
    predicted_label = labels[predicted_class_indices[i]]
    predicted_probabilities = {label: prob for label, prob in zip(labels, probs[i])}

    print(f"Input text {i+1}: {input_text}")
    print(f"Predicted label: {predicted_label}")
    print("Predicted probabilities:")

    for label, prob in predicted_probabilities.items():
        print(f"{label}: {prob:.4f}")

    print()
```
```
Output:

Input text 1: this meeting is scheduled for next week
Predicted label: Neutral
Predicted probabilities:
Negative: 0.0011
Neutral: 0.9979
Positive: 0.0010

Input text 2: drop dead
Predicted label: Negative
Predicted probabilities:
Negative: 0.9958
Neutral: 0.0028
Positive: 0.0014

Input text 3: you're the best friend i could ever have in this whole wide world!!
Predicted label: Positive
Predicted probabilities:
Negative: 0.0007
Neutral: 0.0003
Positive: 0.9990
```
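
For repeated use, the steps above can be wrapped in a small helper. The sketch below is illustrative: the `predict_sentiment` name and the GPU handling are additions, not part of the original example.

```python
import torch
from transformers import RobertaForSequenceClassification, RobertaTokenizer

# Illustrative wrapper around the steps above: tokenize -> forward pass -> softmax -> label lookup.
device = "cuda" if torch.cuda.is_available() else "cpu"
model_dir = "RappyProgramming/IPW-DistilBERT-cased"
model = RobertaForSequenceClassification.from_pretrained(model_dir).to(device)
tokenizer = RobertaTokenizer.from_pretrained(model_dir)
labels = ["Negative", "Neutral", "Positive"]

def predict_sentiment(texts):
    """Return a (label, probability) pair for each input text."""
    inputs = tokenizer(texts, return_tensors="pt", padding=True, truncation=True).to(device)
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.softmax(logits, dim=1)
    best = probs.argmax(dim=1).tolist()
    return [(labels[idx], probs[row, idx].item()) for row, idx in enumerate(best)]

print(predict_sentiment(["drop dead", "this meeting is scheduled for next week"]))
```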