---
license:
  - apache-2.0
  - bsd-3-clause
tags:
  - summarization
  - summary
  - booksum
  - long-document
  - long-form
datasets:
  - kmfoda/booksum
metrics:
  - rouge
inference: false
base_model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP13
model-index:
  - name: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP14
    results:
      - task:
          type: summarization
          name: Summarization
        dataset:
          name: samsum
          type: samsum
          config: samsum
          split: test
        metrics:
          - type: rouge
            value: 23.5177
            name: ROUGE-1
            verified: true
            verifyToken: >-
              eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMjMxNGRmYjc0ZjNmZWY3YjFjNDEzYjRhYTAyNWNkOGQ3ODMzM2EwMzk0NTVkMzQ5MGMyYjYxMTkzYWQyMjZiMyIsInZlcnNpb24iOjF9.-PPUZc4Jr6EjNcQ-u9n814SfeviFEaddbFco5d1wbJNoECN_HqciNphSjXh7w99I_rQ6rPIXu8DA93u7aFj9CA
          - type: rouge
            value: 4.668
            name: ROUGE-2
            verified: true
            verifyToken: >-
              eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMWMwYWI1NTg0ZTUwNGIwZjlhYWU2MzQwN2I0NzA1M2MyOTA5YjQ4MTEyYzU2MjRmZTA0NzNiYWM2ZDU0ZThhYyIsInZlcnNpb24iOjF9.CQfgJ3Lha9XR2-IudjfFuaUh_uphWPdYk6TMQOLriWM78_X-paqEIBZDh1Q-WbWoUf-CAyf6UvqXqELRDb3hBQ
          - type: rouge
            value: 16.6091
            name: ROUGE-L
            verified: true
            verifyToken: >-
              eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMzhkODRjM2FiYTFhN2E5MWFiNzk3MjgzMGI0NmY0ODNkYjAxZWNmYmFmYmY0NDBmNjRmOTBkMGVhMGYzMmRkMCIsInZlcnNpb24iOjF9.Y66qsqvvGeAoiMCr1xa9enBMfcXt6a6I2i5s7VAJ3aoh3DtM2RlaMm4nuXG4uzWHedWW1NDivBWMZtycYed9DA
          - type: rouge
            value: 20.3174
            name: ROUGE-LSUM
            verified: true
            verifyToken: >-
              eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZWJjODE5NDdlMjM2YjNkOWU3NDJiMWFlZGU2YTRkNzliMTg5MGNkNDQ3YWU3MTBmY2E5ZTUxMzEyMDc0ZTU5YiIsInZlcnNpb24iOjF9.Kc1accwOycbNU1F7AT8LV8jC3NpYKMmOsZmdLeDdIi5BqgMJcQSP8oNt3L-hGbscLb-D7iIvQBFtmmiGqpnGDQ
          - type: loss
            value: 3.2174887657165527
            name: loss
            verified: true
            verifyToken: >-
              eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNTQ0MzQxYTQ1M2NiODcwNTBjOTRiMGEzZDE0NDFlZWJlNmNlOWI1M2M1N2Q2ZTVkNWFiMzQxNDhjODQxNDFkYSIsInZlcnNpb24iOjF9.Vat1Thj2t_1of477BCINeYoTeNZX1NIPG13qVskJ44ivKLJgMr4BCp0luYNEi2skNTv3kYK2orqBdDfxPZlfDw
          - type: gen_len
            value: 57.1966
            name: gen_len
            verified: true
            verifyToken: >-
              eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiY2I3MmZkMDY1YmM5N2VmMGE5OWQ0NDg5OWM0OWViMzU1ZTM1NjBmYWRmN2I2YmM5YTVkNGVlZGViNWEwOGYwMyIsInZlcnNpb24iOjF9.4c6j-ll2veK_PuCcXvRK8-8cPFdedKsqGHQsEhGpwj48uwI3PMqj9mF4HYvSGq5H3rVM_dvyPEEs9RhjCvSHBw
      - task:
          type: summarization
          name: Summarization
        dataset:
          name: kmfoda/booksum
          type: kmfoda/booksum
          config: kmfoda--booksum
          split: test
        metrics:
          - type: rouge
            value: 35.9884
            name: ROUGE-1
            verified: true
            verifyToken: >-
              eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNWMzODFmZGYwZmNjY2FkZTJmZWYyMjI1NDdlZDI3OTdmY2MzNzU0M2FhYmMxNTdkOGRiNzIwMTEyMTMwMTgyMSIsInZlcnNpb24iOjF9.pbT1OOzVOjnUp4q6lcpUPunDYTQqOiwQeRLRV69699SoMI3ay4bfd_hbWZUvXOuivoJ5JiDd9KBhEqYUninNCA
          - type: rouge
            value: 6.0596
            name: ROUGE-2
            verified: true
            verifyToken: >-
              eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiODNkNTE1NjU5ZmY3MmVmOGQxMjhhODRhZjIzMDRhMzJlYTY3YTkyNzM4YTAyMGI2YzRlMzljMDM5YzFjNzIyOCIsInZlcnNpb24iOjF9.NevkOank_Ou1u2ZfkEa3o4FF4DapvpFK_ucxLRm-xL-ZWGl9cLdLTOxVECrTn8Yasi_sWrjZUhGRWPkCKlJADQ
          - type: rouge
            value: 16.1419
            name: ROUGE-L
            verified: true
            verifyToken: >-
              eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYzNkM2M4YjQxM2IwNzIzMjAyMjM3MThlMGQwMDgzMGI3NDU0NWVlOTFhMzBlOGQ3ZTQzOGNkNmE5ZGI5NTkzOCIsInZlcnNpb24iOjF9.8DYhyJAiKIK2aIkQSwMy4NEiBSC4v8_h_3feA-TFBdd-icGg5YvKMQR7_IOa1-9AHBe6PphVSFjl82-nDp6lDA
          - type: rouge
            value: 32.9992
            name: ROUGE-LSUM
            verified: true
            verifyToken: >-
              eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMTEwMmViZmZjMzA3OWYxNWMxZjFhMDBmMDRjOGViYzdiYzllNzk0YWZmNTU0NWIwMmVmZjQzNjlkZTBmZTU0YyIsInZlcnNpb24iOjF9.KFwuSVaUXx958EWZctKpK1wawA0EH4yxBJdp3Zru4Sn97oSyP_s5m-jjZiLfP6kmSajd3849dna-Uw77s3sVBg
          - type: loss
            value: 2.9468588829040527
            name: loss
            verified: true
            verifyToken: >-
              eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYzhmODMwMDM2ZDFmNTMzZmFjMmJjMGQ0MmQzMDcyMmFiNmFjMzBhY2RjMTI1MDhiMjI4NTEyYWFlYTNlMzE0ZSIsInZlcnNpb24iOjF9.PHX9VAAgiUGOR8Uxuam4otU65hIzW6hBapaf1KY8o1FDfaoHWAKbSnpjZ3nXKYYeVV6LyGRny_7RdRbbbM8LAA
          - type: gen_len
            value: 298.2593
            name: gen_len
            verified: true
            verifyToken: >-
              eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMGYzMDAzNTQyMzgxM2RhNjY3MDIyNGEyMWZjYmYyYmJlNWM3MTFkYzRjMDhkMmZhNDZiN2FhYTY3MGI2NDcxNyIsInZlcnNpb24iOjF9.ax3H6LohHUodVGhSMUWMZZZ-bCTXHEaGpK4jXuOdZkGsewYrX8fO1oRA0uDjACM-eceKFfVnMveHoU9EdMaeCA
---

# long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP14

This model is a fine-tuned version of [pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP13](https://huggingface.co/pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP13) on the kmfoda/booksum dataset.
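
For reference, here is a minimal sketch of loading the checkpoint for long-document summarization with the `transformers` pipeline. The generation settings shown are illustrative assumptions, not values documented in this card.

```python
# Minimal sketch, assuming the standard transformers summarization pipeline;
# generation parameters below are illustrative, not the author's settings.
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP14",
)

long_text = "..."  # replace with a long document (the model supports up to 16384 input tokens)
result = summarizer(
    long_text,
    max_length=512,          # assumed cap on summary length
    no_repeat_ngram_size=3,  # assumed; helps reduce repetition in long summaries
    truncation=True,
)
print(result[0]["summary_text"])
```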

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (they are mirrored in the sketch after the list):

- learning_rate: 0.0006
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.02
- num_epochs: 2
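
A minimal sketch of how these values might be expressed as `Seq2SeqTrainingArguments`; the output directory and the `optim` choice are assumptions (the card only states Adam with the listed betas and epsilon, which matches the trainer defaults).

```python
# Sketch only: maps the listed hyperparameters onto Seq2SeqTrainingArguments.
# output_dir and optim are assumed, not taken from this card.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./long-t5-booksum-wip14",  # assumed path
    learning_rate=6e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=16,        # 4 (per device) x 16 (accumulation) -> effective batch size 64
    seed=42,
    lr_scheduler_type="cosine",
    warmup_ratio=0.02,
    num_train_epochs=2,
    optim="adamw_hf",                      # betas=(0.9, 0.999) and epsilon=1e-08 are the defaults
)
```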

### Framework versions

- Transformers 4.22.0
- Pytorch 1.12.1
- Datasets 2.4.0
- Tokenizers 0.12.1
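
A small optional sketch for checking that a local environment matches these versions; the check itself is a convenience and not part of the original training setup.

```python
# Optional sanity check: compare locally installed package versions
# against the versions listed in this card.
import datasets
import tokenizers
import torch
import transformers

expected = {
    "transformers": "4.22.0",
    "torch": "1.12.1",
    "datasets": "2.4.0",
    "tokenizers": "0.12.1",
}
installed = {
    "transformers": transformers.__version__,
    "torch": torch.__version__,
    "datasets": datasets.__version__,
    "tokenizers": tokenizers.__version__,
}
for name, version in expected.items():
    print(f"{name}: expected {version}, installed {installed[name]}")
```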