---
library_name: transformers
license: apache-2.0
base_model: t5-small
tags:
  - generated_from_trainer
datasets:
  - big_patent
metrics:
  - rouge
model-index:
  - name: my_T5_summarization_model
    results:
      - task:
          name: Sequence-to-sequence Language Modeling
          type: text2text-generation
        dataset:
          name: big_patent
          type: big_patent
          config: f
          split: validation
          args: f
        metrics:
          - name: Rouge1
            type: rouge
            value: 0.2277
---

my_T5_summarization_model

This model is a fine-tuned version of t5-small on the big_patent dataset (config f). It achieves the following results on the evaluation set (a usage sketch appears after the metrics):

  • Loss: 1.9477
  • Rouge1: 0.2277
  • Rouge2: 0.1286
  • RougeL: 0.1988
  • RougeLsum: 0.1988
  • Gen Len: 19.0
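
The card does not include a usage example, so here is a minimal inference sketch. The repo id `dmen24/my_T5_summarization_model` and the T5-style `summarize:` input prefix are assumptions, not details stated in this card:

```python
from transformers import pipeline

# Assumed repo id; substitute the actual checkpoint path if it differs.
summarizer = pipeline("summarization", model="dmen24/my_T5_summarization_model")

# t5-small checkpoints are usually prompted with a "summarize: " prefix;
# whether this fine-tune kept that convention is an assumption.
text = "summarize: " + "Long patent description text to condense goes here."

# Gen Len above is 19.0, so evaluation generations were short; adjust max_length as needed.
print(summarizer(text, max_length=20, min_length=5)[0]["summary_text"])
```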

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed
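
The model-index metadata above identifies the evaluation data as big_patent, config f, validation split. A loading sketch under that assumption (the description/abstract field names are the standard big_patent columns; the preprocessing actually used for this run is not documented here):

```python
from datasets import load_dataset

# Config "f" and the validation split are taken from the model-index metadata.
val_ds = load_dataset("big_patent", "f", split="validation")

# big_patent pairs a full patent description with its abstract (the summary target).
sample = val_ds[0]
print(sample["description"][:300])
print(sample["abstract"])
```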

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 4
  • mixed_precision_training: Native AMP
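
The training script itself is not part of this card. A minimal Seq2SeqTrainingArguments sketch mirroring the hyperparameters listed above (the output directory, evaluation strategy, and generation settings are assumptions):

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="my_T5_summarization_model",  # assumed; not stated in the card
    learning_rate=2e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=4,
    fp16=True,                     # "Native AMP" mixed precision
    eval_strategy="epoch",         # assumed; matches the per-epoch results table
    predict_with_generate=True,    # needed to report ROUGE and Gen Len during eval
)
```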

Training results

| Training Loss | Epoch | Step  | Validation Loss | Rouge1 | Rouge2 | RougeL | RougeLsum | Gen Len |
|---------------|-------|-------|-----------------|--------|--------|--------|-----------|---------|
| 2.156         | 1.0   | 5348  | 2.0181          | 0.2264 | 0.1267 | 0.1971 | 0.1972    | 19.0    |
| 2.1095        | 2.0   | 10696 | 1.9737          | 0.227  | 0.1276 | 0.1977 | 0.1978    | 19.0    |
| 2.0867        | 3.0   | 16044 | 1.9545          | 0.2277 | 0.1285 | 0.1987 | 0.1988    | 19.0    |
| 2.0577        | 4.0   | 21392 | 1.9477          | 0.2277 | 0.1286 | 0.1988 | 0.1988    | 19.0    |
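
The ROUGE values in the table are fractions in the 0–1 range. A sketch of how such scores are typically computed with the evaluate library (the exact metric code used for this run is not shown in the card):

```python
import evaluate

rouge = evaluate.load("rouge")

# Toy inputs; in practice these would be the generated summaries and the
# big_patent reference abstracts from the validation split.
predictions = ["a device for cooling an engine"]
references = ["an apparatus for cooling an internal combustion engine"]

scores = rouge.compute(predictions=predictions, references=references)
print(scores)  # keys: rouge1, rouge2, rougeL, rougeLsum
```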

Framework versions

  • Transformers 4.44.2
  • Pytorch 2.5.0+cu121
  • Datasets 3.1.0
  • Tokenizers 0.19.1