---
language:
- en
license: apache-2.0
datasets:
- glue
metrics:
- accuracy
model-index:
- name: t5-finetuned-rte
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: GLUE RTE
      type: glue
      args: rte
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.7690
---

# T5-finetuned-rte

This model is T5 fine-tuned on the GLUE RTE dataset. It achieves the following results on the validation set:

- Accuracy: 0.7690

## Model Details

T5 is an encoder-decoder model pre-trained on a multi-task mixture of unsupervised and supervised tasks, for which each task is converted into a text-to-text format.

## Training procedure

### Tokenization

Since T5 is a text-to-text model, the inputs and labels of the dataset are converted as follows:

For each example, an input sequence is formed as **"rte sentence1: " + rte_sent1 + " sentence2: " + rte_sent2** and fed to the tokenizer to get the **input_ids** and **attention_mask**.

For each label, the target text is chosen as **"entailment"** if the label is 0, and **"not_entailment"** otherwise, and tokenized to get its **input_ids** and **attention_mask**. During training, any of these label **input_ids** that correspond to the **pad** token are replaced with -100 so that no loss is calculated for them. The resulting ids are then passed as the labels, and the labels' attention_mask is passed as the decoder attention mask. A minimal preprocessing sketch is given at the end of this card.

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-4
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: epsilon=1e-08
- num_epochs: 3.0

### Training results

| Epoch | Training Loss | Validation Accuracy |
|:----:|:-------------:|:-------------------:|
| 1 | 0.1099 | 0.7617 |
| 2 | 0.0573 | 0.7617 |
| 3 | 0.0276 | 0.7690 |
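
### Preprocessing sketch

As a concrete illustration of the tokenization scheme described above, here is a minimal sketch using 🤗 Datasets and Transformers. The `t5-base` checkpoint and the 512/8 maximum lengths are assumptions not stated in this card; padding the targets to a fixed length is one simple way to realize the pad-to--100 replacement described above.

```python
from datasets import load_dataset
from transformers import T5Tokenizer

# Assumption: the base checkpoint is t5-base (not stated in this card).
tokenizer = T5Tokenizer.from_pretrained("t5-base")
raw = load_dataset("glue", "rte")

def preprocess(example):
    # Cast the sentence pair into T5's text-to-text format, as described above.
    text = ("rte sentence1: " + example["sentence1"]
            + " sentence2: " + example["sentence2"])
    model_inputs = tokenizer(text, max_length=512, truncation=True)

    # GLUE RTE encodes entailment as 0 and not_entailment as 1.
    target = "entailment" if example["label"] == 0 else "not_entailment"
    labels = tokenizer(target, max_length=8, padding="max_length", truncation=True)

    # Replace pad tokens in the labels with -100 so no loss is computed on them.
    model_inputs["labels"] = [
        tok if tok != tokenizer.pad_token_id else -100
        for tok in labels["input_ids"]
    ]
    model_inputs["decoder_attention_mask"] = labels["attention_mask"]
    return model_inputs

tokenized = raw.map(preprocess)
```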
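
### Hyperparameter sketch

The hyperparameters above map onto 🤗 Transformers training arguments roughly as follows. This is a hedged reconstruction, not the exact training script: the output directory is a placeholder, and only the optimizer's epsilon is reported in this card.

```python
from transformers import Seq2SeqTrainingArguments

# Hedged reconstruction of the reported hyperparameters; output_dir is a
# placeholder, and the optimizer is identified only by its epsilon above.
training_args = Seq2SeqTrainingArguments(
    output_dir="t5-finetuned-rte",
    learning_rate=3e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    adam_epsilon=1e-8,
    num_train_epochs=3.0,
)
```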
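
## How to use

Because the model emits the label as text, inference is a generate-and-decode step. A usage sketch, assuming the checkpoint is published under the hypothetical id `t5-finetuned-rte` (substitute the actual path):

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

# Hypothetical model id; replace with the actual checkpoint path.
tokenizer = T5Tokenizer.from_pretrained("t5-finetuned-rte")
model = T5ForConditionalGeneration.from_pretrained("t5-finetuned-rte")

text = ("rte sentence1: The cat sat on the mat. "
        "sentence2: There is a cat on the mat.")
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=5)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
# -> "entailment" or "not_entailment"
```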