---
license: apache-2.0
language: "en"
tags:
- financial-sentiment-analysis
- sentiment-analysis
- generated_from_trainer
- financial
- stocks
- sentiment
metrics:
- f1
datasets:
- financial_phrasebank
- Kaggle Self label
- financial-classification
widget:
- text: "The USD rallied by 10% last night"
  example_title: "Bullish Sentiment"
- text: "Covid-19 cases have been increasing over the past few months"
  example_title: "Bearish Sentiment"
- text: "the USD has been trending lower"
  example_title: "Mildly Bearish Sentiment"
model-index:
- name: distilroberta-finetuned-finclass
  results: []
---

# distilroberta-finetuned-finclass

This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the combined [financial-phrasebank + Kaggle dataset](https://huggingface.co/datasets/nickmuchi/financial-classification). The Kaggle portion adds Covid-19 sentiment data and is available here: [sentiment-classification-selflabel-dataset](https://www.kaggle.com/percyzheng/sentiment-classification-selflabel-dataset).
It achieves the following results on the evaluation set:
- Loss: 0.4463
- F1: 0.8835

## Model description

The model determines the financial sentiment of a given text. Because the class labels are unevenly distributed, class weights were adjusted during training so that the loss pays more attention to the less frequent labels, which should improve overall performance.
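
Below is a minimal inference sketch using the `transformers` pipeline API. The hub repository id `nickmuchi/distilroberta-finetuned-finclass` and the label names are assumptions inferred from the model name and widget examples above; replace them with the actual repository id and the checkpoint's `id2label` mapping.

```python
# Minimal inference sketch; the repo id and label names below are assumptions.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="nickmuchi/distilroberta-finetuned-finclass",  # assumed hub id
)

texts = [
    "The USD rallied by 10% last night",
    "Covid-19 cases have been increasing over the past few months",
]
for text in texts:
    print(text, classifier(text))
    # e.g. [{'label': 'bullish', 'score': 0.97}] -- actual labels depend on id2label
```

The sample inputs are the widget examples from the metadata above.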

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
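
The dataset link above points to `nickmuchi/financial-classification` on the Hugging Face Hub. The sketch below shows one way the data could be loaded and how the class weights mentioned in the model description could be derived from label frequencies; the split name and the `labels` column name are assumptions, not a description of the actual training script.

```python
# Sketch: load the dataset and derive inverse-frequency class weights.
# Assumptions: the split is called "train" and the label column is "labels".
from collections import Counter

import torch
from datasets import load_dataset

dataset = load_dataset("nickmuchi/financial-classification")

label_counts = Counter(dataset["train"]["labels"])
num_labels = len(label_counts)
total = sum(label_counts.values())

# Rarer classes receive proportionally larger weights.
class_weights = torch.tensor(
    [total / (num_labels * label_counts[i]) for i in range(num_labels)],
    dtype=torch.float,
)
print(class_weights)
```

Weights of this form make the cross-entropy loss penalise mistakes on under-represented classes more heavily, which is the adjustment described in the model description.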

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP

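The sketch below shows how these hyperparameters map onto `TrainingArguments`, together with a `Trainer` subclass that applies class weights to the loss as described in the model description. The dataset id, column and split names, the number of labels, and the placeholder weight values are assumptions; this is not the exact training script.

```python
# Sketch: the hyperparameters above expressed as TrainingArguments, plus a Trainer
# subclass that applies class weights to the cross-entropy loss.
# Assumptions: dataset/column/split names, num_labels=3, and the weight values.
import torch
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("distilroberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilroberta-base",
    num_labels=3,  # assumed bearish / neutral / bullish
)

dataset = load_dataset("nickmuchi/financial-classification")  # id from the link above
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True), batched=True
)

# Placeholder weights; in practice, inverse-frequency weights as in the data sketch above.
class_weights = torch.tensor([1.0, 1.0, 1.0])

args = TrainingArguments(
    output_dir="distilroberta-finetuned-finclass",
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    fp16=True,  # Native AMP mixed precision
    evaluation_strategy="epoch",
)

class WeightedLossTrainer(Trainer):
    """Weights the cross-entropy loss so rarer classes contribute more."""

    def compute_loss(self, model, inputs, return_outputs=False, **kwargs):
        labels = inputs.pop("labels")
        outputs = model(**inputs)
        weight = class_weights.to(outputs.logits.device)
        loss = torch.nn.functional.cross_entropy(outputs.logits, labels, weight=weight)
        return (loss, outputs) if return_outputs else loss

trainer = WeightedLossTrainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],  # split name assumed
    tokenizer=tokenizer,  # enables dynamic padding via DataCollatorWithPadding
)
trainer.train()
```
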
### Training results

| Training Loss | Epoch | Step | Validation Loss | F1     |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.7309        | 1.0   | 72   | 0.3671          | 0.8441 |
| 0.3757        | 2.0   | 144  | 0.3199          | 0.8709 |
| 0.3054        | 3.0   | 216  | 0.3096          | 0.8678 |
| 0.2229        | 4.0   | 288  | 0.3776          | 0.8390 |
| 0.1744        | 5.0   | 360  | 0.3678          | 0.8723 |
| 0.1436        | 6.0   | 432  | 0.3728          | 0.8758 |
| 0.1044        | 7.0   | 504  | 0.4116          | 0.8744 |
| 0.0931        | 8.0   | 576  | 0.4148          | 0.8761 |
| 0.0683        | 9.0   | 648  | 0.4423          | 0.8837 |
| 0.0611        | 10.0  | 720  | 0.4463          | 0.8835 |


### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.10.3