---
license: apache-2.0
tags:
  - generated_from_trainer
metrics:
  - rouge
model-index:
  - name: t5-base-DreamBank-Generation-NER-Char
    results: []
language:
  - en
widget:
  - text: >-
      I'm in an auditorium. Susie S is concerned at her part in this disability
      awareness spoof we are preparing. I ask, 'Why not do it? Lots of AB's
      represent us in a patronizing way. Why shouldn't we represent ourselves in
      a good, funny way?' I watch the video we all made. It is funny. I try to
      sit on a folding chair. Some guy in front talks to me. Merle is in the
      audience somewhere. [BL]
---

t5-base-DreamBank-Generation-NER-Char

This model is a fine-tuned version of t5-base on the DreamBank dataset. It detects which characters are present in a given dream report, following the Hall & Van de Castle (HVDC) framework. Please note that, during training: i) the features associated with each character were not specified; ii) in accordance with the HVDC system, the presence of the dreamer is not assessed.

It achieves the following results on the evaluation set:

  • Loss: 0.4674
  • Rouge1: 0.7853
  • Rouge2: 0.6927
  • RougeL: 0.7564
  • RougeLsum: 0.7565
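
The card does not include a usage snippet. The sketch below is a minimal, unofficial way to query the model through the standard text-to-text generation API, assuming the checkpoint is the one published under this repository's Hub ID (lorenzoscottb/t5-base-DreamBank-Generation-NER-Char); adapt it to your own setup as needed.

```python
# Minimal inference sketch (assumption, not an official snippet from the authors):
# load the checkpoint from the Hub and generate the characters found in one dream report.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "lorenzoscottb/t5-base-DreamBank-Generation-NER-Char"  # assumed Hub repository ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

report = (
    "I'm in an auditorium. Susie S is concerned at her part in this disability "
    "awareness spoof we are preparing. I watch the video we all made. It is funny. "
    "Merle is in the audience somewhere. [BL]"
)

inputs = tokenizer(report, return_tensors="pt", truncation=True)
generated = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```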

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0002
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
  • mixed_precision_training: Native AMP
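
As a rough guide to reproducing the run, the values above map onto the standard `Seq2SeqTrainingArguments`. The snippet below is a hedged reconstruction: the original training script is not part of this card, and the output directory, evaluation strategy, and generation settings are assumptions.

```python
# Hedged reconstruction of the training configuration from the hyperparameters listed above.
# Dataset loading, tokenization, and the Seq2SeqTrainer call are intentionally omitted.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="t5-base-DreamBank-Generation-NER-Char",  # assumed
    learning_rate=2e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    fp16=True,                      # "Native AMP" mixed-precision training
    evaluation_strategy="epoch",    # assumption: validation metrics are reported once per epoch
    predict_with_generate=True,     # assumption: ROUGE is computed on generated sequences
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer's default optimizer setup.
)
```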

Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | RougeL | RougeLsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| No log        | 1.0   | 93   | 0.6486          | 0.5936 | 0.4495 | 0.5705 | 0.5701    |
| No log        | 2.0   | 186  | 0.5363          | 0.7196 | 0.6020 | 0.6990 | 0.6983    |
| No log        | 3.0   | 279  | 0.4391          | 0.7568 | 0.6459 | 0.7235 | 0.7244    |
| No log        | 4.0   | 372  | 0.4223          | 0.7751 | 0.6748 | 0.7473 | 0.7477    |
| No log        | 5.0   | 465  | 0.4266          | 0.7789 | 0.6746 | 0.7512 | 0.7522    |
| 0.6336        | 6.0   | 558  | 0.4296          | 0.7810 | 0.6790 | 0.7537 | 0.7539    |
| 0.6336        | 7.0   | 651  | 0.4400          | 0.7798 | 0.6808 | 0.7537 | 0.7543    |
| 0.6336        | 8.0   | 744  | 0.4497          | 0.7749 | 0.6821 | 0.7471 | 0.7481    |
| 0.6336        | 9.0   | 837  | 0.4661          | 0.7828 | 0.6910 | 0.7554 | 0.7563    |
| 0.6336        | 10.0  | 930  | 0.4674          | 0.7853 | 0.6927 | 0.7564 | 0.7565    |
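
The ROUGE columns can in principle be recomputed with the `evaluate` library; the snippet below only illustrates the metric call on toy, made-up strings, not on the actual DreamBank validation split.

```python
# Illustrative ROUGE computation (hypothetical strings; not DreamBank data).
import evaluate

rouge = evaluate.load("rouge")
predictions = ["Susie S; Merle; a guy in front"]      # hypothetical model output
references = ["Susie S; Merle; some guy in front"]    # hypothetical gold annotation
print(rouge.compute(predictions=predictions, references=references))
# Returns a dict with rouge1, rouge2, rougeL, and rougeLsum scores.
```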

Framework versions

  • Transformers 4.25.1
  • PyTorch 1.12.1
  • Datasets 2.5.1
  • Tokenizers 0.12.1