PavanNeerudu committed
Commit c7c07b3 · 1 Parent(s): b546015

Create README.md

Files changed (1): README.md (+64, -0)

README.md (new file)
---
language:
- en
license: apache-2.0
datasets:
- glue
metrics:
- pearsonr
model-index:
- name: t5-base-finetuned-stsb
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: GLUE STS-B
      type: glue
      args: stsb
    metrics:
    - name: Pearson Correlation
      type: pearson_correlation
      value: 0.8937
---

# T5-base-finetuned-stsb

This model is T5-base fine-tuned on the GLUE STS-B dataset. It achieves the following results on the validation set:
- Pearson Correlation Coefficient: 0.8937

## Model Details
T5 is an encoder-decoder model pre-trained on a multi-task mixture of unsupervised and supervised tasks, for which each task is converted into a text-to-text format.

## Training procedure

### Tokenization
Since T5 is a text-to-text model, the inputs and labels of the dataset are converted to text as follows:
For each example, an input sequence is formed as **"stsb sentence1: " + stsb_sent1 + " sentence2: " + stsb_sent2** and fed to the tokenizer to get the **input_ids** and **attention_mask**.
Unlike other **GLUE** tasks, STS-B is a regression task where the goal is to predict a similarity score between 1 and 5. I have used the same strategy as described in the T5 paper for fine-tuning. The paper states:

> We found that most of these scores were annotated in increments of 0.2, so we simply rounded any score to the nearest increment of 0.2 and converted the result to a literal string representation of the number (e.g. the floating-point value 2.57 would be mapped to the string “2.6”). At test time, if the model outputs a string corresponding to a number between 1 and 5, we convert it to a floating-point value; otherwise, we treat the model’s prediction as incorrect. This effectively recasts the STS-B regression problem as a 21-class classification problem.
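
Below is a minimal sketch of this preprocessing, assuming the Hugging Face `datasets` and `transformers` APIs and the `t5-base` tokenizer (the actual training script is not included in this card):

```python
from datasets import load_dataset
from transformers import T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
stsb = load_dataset("glue", "stsb")

def preprocess(example):
    # Input text: "stsb sentence1: ... sentence2: ..."
    text = ("stsb sentence1: " + example["sentence1"]
            + " sentence2: " + example["sentence2"])
    # Target text: round the similarity score to the nearest 0.2 and keep
    # its string form, e.g. 2.57 -> "2.6".
    target = f"{round(example['label'] / 0.2) * 0.2:.1f}"
    model_inputs = tokenizer(text, truncation=True)
    model_inputs["labels"] = tokenizer(target, truncation=True)["input_ids"]
    return model_inputs

encoded = stsb.map(preprocess)
```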

### Training hyperparameters

The following hyperparameters were used during training (a sketch of a matching training setup follows the list):
- learning_rate: 3e-4
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: epsilon=1e-08
- num_epochs: 3.0
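
A hedged sketch of such a setup, continuing from the preprocessing sketch above (the actual script, the optimizer choice beyond `epsilon`, and the learning-rate schedule are not given in this card, so the `Seq2SeqTrainer` defaults below are assumptions):

```python
from transformers import (DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments, T5ForConditionalGeneration)

model = T5ForConditionalGeneration.from_pretrained("t5-base")

args = Seq2SeqTrainingArguments(
    output_dir="t5-base-finetuned-stsb",
    learning_rate=3e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    adam_epsilon=1e-8,      # "optimizer: epsilon=1e-08" above
    num_train_epochs=3.0,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=encoded["train"],      # `encoded` and `tokenizer` come from
    eval_dataset=encoded["validation"],  # the preprocessing sketch above
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```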

### Training results

| Epoch | Training Loss | Validation Pearson Correlation Coefficient |
|:-----:|:-------------:|:------------------------------------------:|
|   1   |    0.8623     |                   0.8200                   |
|   2   |    0.7782     |                   0.8675                   |
|   3   |    0.7040     |                 **0.8937**                 |
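
For completeness, a minimal inference sketch that mirrors the test-time conversion quoted above; the Hub repository id `PavanNeerudu/t5-base-finetuned-stsb` is assumed from the model name in this card:

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

repo_id = "PavanNeerudu/t5-base-finetuned-stsb"   # assumed repository id
tokenizer = T5Tokenizer.from_pretrained(repo_id)
model = T5ForConditionalGeneration.from_pretrained(repo_id)

text = ("stsb sentence1: A man is playing a guitar. "
        "sentence2: A person plays an instrument.")
inputs = tokenizer(text, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=4)
prediction = tokenizer.decode(output_ids[0], skip_special_tokens=True)

# The model emits the score as text, e.g. "3.8"; convert it back to a float.
try:
    score = float(prediction)
except ValueError:
    score = None  # treated as an incorrect prediction, as in the T5 paper
print(prediction, score)
```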