---
language:
- en
thumbnail: https://avatars3.githubusercontent.com/u/32437151?s=460&u=4ec59abc8d21d5feea3dab323d23a5860e6996a4&v=4
tags:
- text-classification
- emotion
- pytorch
license: apache-2.0
datasets:
- emotion
metrics:
- Accuracy
- F1 Score
model-index:
- name: bhadresh-savani/bert-base-uncased-emotion
  results:
  - task:
      type: text-classification
      name: Text Classification
    dataset:
      name: emotion
      type: emotion
      config: default
      split: test
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9265
      verified: true
    - name: Precision Macro
      type: precision
      value: 0.8859601677706858
      verified: true
    - name: Precision Micro
      type: precision
      value: 0.9265
      verified: true
    - name: Precision Weighted
      type: precision
      value: 0.9265082039990273
      verified: true
    - name: Recall Macro
      type: recall
      value: 0.879224648382427
      verified: true
    - name: Recall Micro
      type: recall
      value: 0.9265
      verified: true
    - name: Recall Weighted
      type: recall
      value: 0.9265
      verified: true
    - name: F1 Macro
      type: f1
      value: 0.8821398657055098
      verified: true
    - name: F1 Micro
      type: f1
      value: 0.9265
      verified: true
    - name: F1 Weighted
      type: f1
      value: 0.9262425173620311
      verified: true
    - name: loss
      type: loss
      value: 0.17315374314785004
      verified: true
---
# bert-base-uncased-emotion

## Model description:

[BERT](https://arxiv.org/abs/1810.04805) is a bidirectional Transformer encoder architecture pretrained with a masked language modeling (MLM) objective.

[bert-base-uncased](https://huggingface.co/bert-base-uncased) finetuned on the emotion dataset using the Hugging Face `Trainer` with the following training parameters:
```
 learning_rate=2e-5,
 batch_size=64,
 num_train_epochs=8,
```
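
For reference, here is a minimal sketch of how these hyperparameters could be passed to the Hugging Face `Trainer`. The dataset loading, tokenization details, and `num_labels=6` are assumptions for illustration, not the exact notebook code:

```python
# Sketch only: maps the hyperparameters above onto TrainingArguments.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

dataset = load_dataset("emotion")  # dataset id assumed from the card metadata
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize(batch):
    # Tokenize the tweet text; padding/truncation choices are assumptions
    return tokenizer(batch["text"], padding="max_length", truncation=True)

encoded = dataset.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=6)  # six emotion labels

args = TrainingArguments(
    output_dir="bert-base-uncased-emotion",
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    num_train_epochs=8,
    evaluation_strategy="epoch",
)

trainer = Trainer(model=model, args=args,
                  train_dataset=encoded["train"],
                  eval_dataset=encoded["validation"])
trainer.train()
```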

## Model Performance Comparison on the Emotion Dataset from Twitter:

| Model | Accuracy | F1 Score | Test Samples per Second |
| --- | --- | --- | --- |
| [Distilbert-base-uncased-emotion](https://huggingface.co/bhadresh-savani/distilbert-base-uncased-emotion) | 93.8 | 93.79 | 398.69 |
| [Bert-base-uncased-emotion](https://huggingface.co/bhadresh-savani/bert-base-uncased-emotion) | 94.05 | 94.06 | 190.152 |
| [Roberta-base-emotion](https://huggingface.co/bhadresh-savani/roberta-base-emotion) | 93.95 | 93.97| 195.639 |
| [Albert-base-v2-emotion](https://huggingface.co/bhadresh-savani/albert-base-v2-emotion) | 93.6 | 93.65 | 182.794 |

## How to Use the model:
```python
from transformers import pipeline

# return_all_scores=True returns a score for every emotion label
classifier = pipeline("text-classification", model='bhadresh-savani/bert-base-uncased-emotion', return_all_scores=True)
prediction = classifier("I love using transformers. The best part is wide range of support and its easy to use")
print(prediction)

"""
output:
[[
{'label': 'sadness', 'score': 0.0005138228880241513}, 
{'label': 'joy', 'score': 0.9972520470619202}, 
{'label': 'love', 'score': 0.0007443308713845909}, 
{'label': 'anger', 'score': 0.0007404946954920888}, 
{'label': 'fear', 'score': 0.00032938539516180754}, 
{'label': 'surprise', 'score': 0.0004197491507511586}
]]
"""
```
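
If you prefer not to use the pipeline API, a roughly equivalent sketch loads the tokenizer and model directly and applies a softmax to the logits (the input sentence is only an example):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "bhadresh-savani/bert-base-uncased-emotion"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("I love using transformers.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Convert logits to per-label probabilities and map ids back to label names
probs = logits.softmax(dim=-1).squeeze()
for label_id, prob in enumerate(probs.tolist()):
    print(model.config.id2label[label_id], round(prob, 4))
```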

## Dataset:
The [emotion](https://huggingface.co/nlp/viewer/?dataset=emotion) dataset of English tweets labeled with six emotions (sadness, joy, love, anger, fear, surprise).
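
To inspect the dataset itself, a small sketch with the `datasets` library (the dataset id `emotion` is assumed from the card metadata):

```python
from datasets import load_dataset

emotion = load_dataset("emotion")
print(emotion)                                    # train/validation/test splits
print(emotion["train"][0])                        # {'text': ..., 'label': ...}
print(emotion["train"].features["label"].names)   # the six emotion label names
```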

## Training procedure
Training followed this [Colab Notebook](https://github.com/bhadreshpsavani/ExploringSentimentalAnalysis/blob/main/SentimentalAnalysisWithDistilbert.ipynb); follow the same notebook, changing the model name from `distilbert` to `bert`.

## Eval results
```json
{
  "test_accuracy": 0.9405,
  "test_f1": 0.9405920712282673,
  "test_loss": 0.15769127011299133,
  "test_runtime": 10.5179,
  "test_samples_per_second": 190.152,
  "test_steps_per_second": 3.042
}
```
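
These numbers come from evaluating on the test split. For reference, a sketch of a `compute_metrics` function that would report accuracy and F1 in this form (scikit-learn is an assumption; the original notebook may compute metrics differently):

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    # eval_pred is the (logits, labels) pair passed by the Trainer
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1_score(labels, preds, average="weighted"),
    }
```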

## Reference:
* [Natural Language Processing with Transformers, by Lewis Tunstall, Leandro von Werra, and Thomas Wolf](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/)